WO2013161345A1 - Monitoring system and monitoring method - Google Patents

Monitoring system and monitoring method

Info

Publication number
WO2013161345A1
WO2013161345A1 (PCT/JP2013/053277)
Authority
WO
WIPO (PCT)
Prior art keywords
monitoring
image
resolution
information
terminal device
Prior art date
Application number
PCT/JP2013/053277
Other languages
French (fr)
Japanese (ja)
Inventor
照久 高野
秋彦 香西
真史 安原
Original Assignee
Nissan Motor Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Nissan Motor Co., Ltd.
Publication of WO2013161345A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00 - Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R2021/0027 - Post collision measures, e.g. notifying emergency services
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/50 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the display information being shared, e.g. external display, data transfer to other traffic participants or centralised traffic controller

Definitions

  • The present invention relates to a monitoring system and a monitoring method.
  • This application claims priority based on Japanese Patent Application No. 2012-098545 filed on Apr. 24, 2012. The contents described in that application are incorporated into the present application by reference and made a part of the description of the present application.
  • A security device is known in which multiple security camera devices are installed in shopping streets, at store entrances, at home entrances, and along other streets, and the occurrence of abnormalities is detected by monitoring the surrounding images captured by those security camera devices (Patent Document 1).
  • An object of the present invention is to provide a monitoring system that reduces the amount of communication data and suppresses delay in information transmission by adjusting the amount of information in the monitoring image to be transmitted when the communication speed is equal to or lower than a predetermined value.
  • This object is achieved by generating a monitoring image in which a first image having a relatively high first resolution and a second image having a second resolution lower than the first resolution are mixed under a predetermined composition condition.
  • According to the present invention, when the communication speed is low, a monitoring image formed by mixing high-resolution information and low-resolution information is generated by the monitoring terminal device mounted on each mobile body and displayed on the central monitoring device, so an increase in the amount of communication data can be suppressed while the transmission frequency of information is ensured. That is, when the communication speed is low, the central monitoring device can acquire rough information with a small amount of communication data a plurality of times instead of acquiring detailed information with a large amount of communication data once. Because the monitoring terminal devices are controlled in this way, information can be collected in real time without aggravating the congestion of the communication line that causes the drop in communication speed.
  • FIG. 1 is a schematic diagram showing a monitoring system according to an embodiment of the present invention; FIG. 2 is a block diagram showing the monitoring system of FIG. 1; FIG. 3 is a perspective view showing an example of the arrangement of the on-board cameras.
  • FIGS. 10 to 17 are each a diagram showing an example of the display image shown on the display of the central monitoring apparatus.
  • The monitoring system according to the present invention is embodied as a monitoring system 1 in which a supervisor such as a police station, a fire department, or a contracted security company centrally monitors security in a town or city using the captured images of cameras mounted on a plurality of moving bodies.
  • Each of the monitoring terminal devices 10 mounted on the plurality of moving bodies acquires position information, a monitoring image of the surroundings of the moving body, and time information at a predetermined timing, and transmits monitoring information including the position information, the monitoring image, and the time information to the central monitoring device 20 installed on the supervisor side via wireless communication.
  • The central monitoring device 20 accumulates the monitoring information, which includes at least the monitoring image and the position information acquired from the monitoring terminal devices 10, displays the position information of the moving bodies superimposed on map information via a display or the like, and displays the monitoring image and the time information captured by each moving body or at each position. Accordingly, as shown in FIG. 1, the monitoring system 1 of this example includes monitoring terminal devices 10 that are mounted on the moving bodies V and transmit monitoring information such as position information and monitoring images via a telecommunication network 30, and a central monitoring device 20 that acquires and processes that monitoring information.
  • When the communication speed of the information received from a monitoring terminal device 10 is equal to or lower than a predetermined threshold, the central monitoring device 20 of the present embodiment transmits, via wireless communication, a monitoring information transmission command for generating a monitoring image in which images are mixed under a predetermined composition condition defined from the viewpoint of reducing the amount of communication data.
  • The monitoring terminal device 10 generates a monitoring image according to the monitoring information transmission command acquired via wireless communication, and transmits monitoring information including that monitoring image to the supervisor side via wireless communication.
  • In this way, the supervisor side adjusts the resolution of the image information according to the degree of communication speed reduction, and acquires monitoring information including a monitoring image with a reduced amount of communication data.
  • The monitor can select the moving body V for which a monitoring image is to be generated based on the position of the moving body superimposed on the map information, the movement schedule of the moving body V acquired in advance, and the like.
  • The monitor can select the moving body V by pointing at the target moving body V with a pointer such as a cursor or a touch pen, or by touching the touch-panel display screen with a finger. Further, the monitor specifies the direction to which he or she wants to pay attention, based on the position and traveling direction of the moving body V, according to the location where an accident has occurred, a location designated by a notification, a location to be monitored, and other points of interest.
  • The monitor inputs to the central monitoring device 20 information specifying the selected moving body V, information specifying one or more directions to be watched, and changes in the watching direction.
  • The monitor can input the watching direction by designating the start point and end point of the range to be watched, or its start point, intermediate point, and end point, with a pointer such as a cursor or a touch pen, or by touching the touch-panel display screen with a finger.
  • The moving body V on which the monitoring terminal device 10 is mounted is not particularly limited as long as it travels in the target monitoring area, and includes moving bodies V such as automobiles, motorcycles, industrial vehicles, and trams.
  • The automobiles include private automobiles V2 and emergency automobiles V3, and in particular preferably include taxis and route buses V1 that travel randomly and constantly within a predetermined area.
  • FIG. 1 illustrates a taxi V1, a private vehicle V2, and an emergency vehicle V3 such as a police car, a fire engine, or an ambulance; these are collectively referred to as the moving body V or the passenger vehicle V.
  • FIG. 2 is a block diagram illustrating specific configurations of the central monitoring device 20 and the monitoring terminal device 10.
  • The monitoring terminal device 10 and the central monitoring device 20 can communicate with each other via the telecommunication network 30.
  • The communication device 13 of the monitoring terminal device 10 is a communication means capable of wireless communication, and exchanges information with the communication device 23 of the central monitoring device 20 via the telecommunication network 30.
  • When the telecommunication network 30 is a commercial telephone network, widely available mobile phone communication devices can be used; when the telecommunication network 30 is a telecommunication network dedicated to the monitoring system 1 of this example, communication devices 23 and 13 dedicated to that network can be used.
  • As the telecommunication network 30, a wireless LAN, WiFi (registered trademark), WiMAX (registered trademark), Bluetooth (registered trademark), a dedicated wireless line, or the like can also be used.
  • The central monitoring device 20 has an information input function for entering into a database the position information and monitoring images transmitted from the monitoring terminal devices 10 mounted on the respective moving bodies V, in order to monitor the town using the captured images of the cameras of the moving bodies V, and a display control function for displaying the received position information superimposed on map information read from a map database and for displaying the received monitoring images on the display 24.
  • The central monitoring apparatus 20 of this embodiment also has a function of selecting, with reference to the positions of the moving bodies V presented to the monitor on the map information, the moving body V for which a monitoring image is to be acquired, and of generating a monitoring information transmission command for the monitoring terminal device 10 of the selected moving body V.
  • Further, the central monitoring device 20 of the present embodiment has a communication speed calculation function for calculating the communication speed of the information transmitted from the monitoring terminal device 10; a command generation function for generating, when the calculated communication speed is equal to or lower than a predetermined threshold, a monitoring information transmission command that instructs the monitoring terminal device 10 to generate a monitoring image in which a first image having a relatively high first resolution and a second image having a second resolution lower than the first resolution are mixed under a predetermined composition condition, and to transmit monitoring information including the generated monitoring image to the central monitoring device 20; and a command transmission function for transmitting the generated monitoring information transmission command to the selected monitoring terminal device 10.
  • The central monitoring device 20 includes a central control device 21, an image processing device 22, a communication device 23, a display 24, and an input device 25.
  • The central monitoring device 20 according to the present embodiment has a database for storing monitoring information inside the central monitoring device 20, but the database may instead be provided outside the central monitoring device 20 as long as it is accessible.
  • The central control device 21 of the central monitoring device 20 is configured by a CPU, a ROM, and a RAM, and controls the image processing device 22, the communication device 23, and the display 24 so that the position information, the monitoring image, and the time information transmitted from the monitoring terminal device 10 are received, subjected to image processing as necessary, and displayed on the display 24.
  • The central control device 21 also calculates the communication speed when communication with a given mobile unit V is performed.
  • The communication speed of the present embodiment is the data transfer speed between the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10.
  • The communication speed of the present embodiment can be determined based on the frame rate of the monitoring image received from the monitoring terminal device 10 on the central monitoring device 20 side.
  • The frame rate is the number of images processed (rewritten) per unit time in image display and moving image playback. If the communication speed between the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10 is high, the frame rate related to the processing of the monitoring image received from the monitoring terminal device 10 on the central monitoring device 20 side becomes high; if the communication speed between the two is low, that frame rate becomes low. The communication speed can therefore be evaluated based on the frame rate of the monitoring image received from the monitoring terminal device 10.
  • Accordingly, in this specification the term "communication speed when the central monitoring device 20 and the monitoring terminal device 10 exchange information" includes "the frame rate related to the processing of the monitoring image in the central monitoring device 20". That is, in this embodiment, whether or not the communication speed is equal to or lower than the predetermined threshold can be determined based on whether or not the frame rate related to the monitoring image processing of the central monitoring device 20 is equal to or lower than a predetermined threshold.
  • The communication speed can also be determined based on the time T from the timing at which the communication device 13 of the monitoring terminal device 10 transmits data to the communication device 23 of the central monitoring device 20 until the timing at which it receives, from the communication device 23 of the central monitoring device 20, a flag indicating completion of data reception. If the communication speed between the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10 is high, the time T from information transmission until receipt of the reception notification is short; if it is low, the time T is long. The communication speed can therefore be evaluated based on the time T required to complete information transmission in the monitoring terminal device 10.
  • Accordingly, the term "communication speed when the central monitoring device 20 and the monitoring terminal device 10 exchange information" in this specification also includes "the time required for the monitoring terminal device 10 to transmit information to the central monitoring device 20". That is, in the present embodiment, whether or not the communication speed is equal to or lower than the predetermined threshold can be determined by whether or not the time from when the monitoring terminal device 10 transmits information to the central monitoring device 20 until it receives a signal of information reception completion from the central monitoring device 20 is equal to or greater than a predetermined threshold.
  • The predetermined threshold regarding the communication speed depends on the performance of the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10, the communication environment around the location of the monitoring terminal device 10, and the communication environment around the location of the central monitoring device 20, and can be set appropriately.
  • The threshold for evaluating the frame rate is a threshold for determining whether or not the communication speed is decreasing, so its technical significance is the same as that of the threshold for evaluating the communication speed; however, since the frame rate has a meaning different from the communication speed itself, the threshold for evaluating the frame rate is set independently of the threshold for evaluating the communication speed. For the same reason, the threshold for evaluating the time from information transmission to transmission completion confirmation in the monitoring terminal device 10 is also set independently of the threshold for evaluating the communication speed.
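The two communication-speed proxies described above (the frame rate of monitoring images received on the central monitoring device side, and the time T from transmission until the reception-complete flag arrives on the monitoring terminal side) can each be compared against its own threshold. The following Python sketch is purely illustrative; the function names, threshold values, and data structures are assumptions and do not come from the patent.

```python
import time

# Illustrative thresholds only; in practice they are tuned to the communication
# devices 13/23 and the surrounding communication environment, as noted above.
FRAME_RATE_THRESHOLD_FPS = 5.0     # at or below this, speed is treated as "low"
ROUND_TRIP_THRESHOLD_SEC = 2.0     # at or above this, speed is treated as "low"

def frame_rate(frame_timestamps):
    """Frames processed per second on the central monitoring device 20 side."""
    if len(frame_timestamps) < 2:
        return float("inf")  # not enough data to judge; assume fast
    span = frame_timestamps[-1] - frame_timestamps[0]
    return (len(frame_timestamps) - 1) / span if span > 0 else float("inf")

def speed_is_low_by_frame_rate(frame_timestamps):
    return frame_rate(frame_timestamps) <= FRAME_RATE_THRESHOLD_FPS

def speed_is_low_by_round_trip(send_fn, wait_for_ack_fn, payload):
    """Measured on the monitoring terminal device 10 side: time T from sending
    data until the reception-complete flag from the central monitoring device."""
    t0 = time.monotonic()
    send_fn(payload)
    wait_for_ack_fn()            # blocks until the completion flag arrives
    t = time.monotonic() - t0
    return t >= ROUND_TRIP_THRESHOLD_SEC
```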
  • When the communication speed at which the central monitoring device 20 and the monitoring terminal device 10 exchange information is equal to or lower than the predetermined threshold, the central control device 21 selects that moving body V, moving bodies V existing in the vicinity of that moving body V, or moving bodies V belonging to a predefined area to which that moving body V belongs, generates a monitoring information transmission command that causes the monitoring terminal devices 10 of the selected moving bodies V to generate a monitoring image under the predetermined composition condition and to transmit the generated monitoring image to the central monitoring device 20, and transmits this command to the selected monitoring terminal devices 10.
  • As described above, the case where the communication speed is equal to or lower than the predetermined threshold includes the case where the frame rate related to the monitoring image processing in the central monitoring device 20 is equal to or lower than a predetermined value, and the case where the time from transmission of information to confirmation of transmission completion in the monitoring terminal device 10 is equal to or greater than a predetermined value.
  • The monitoring information transmission command includes a communication ID and other identifiers for identifying the monitoring terminal device 10 of the selected moving body V.
  • The moving body V can be selected based on a selection command for the moving body V input from the input device 25.
  • The moving body V whose communication speed is equal to or lower than the predetermined threshold, that is, the moving body V to which the monitoring information transmission command is to be transmitted, may be determined individually for each moving body V based on the actual communication speed with the central monitoring device 20, or a plurality of moving bodies V may be selected together based on the position information. Furthermore, all the moving bodies V existing in the communication area covered by a base station where the communication speed has decreased may be selected together. In addition, in normal operation, one or a plurality of moving bodies V may be selected by automatically extracting moving bodies V existing in the vicinity of an accident occurrence point input from the outside, moving bodies V existing in the vicinity of a moving body V reporting the occurrence of an accident, or moving bodies V existing in the vicinity of a priority monitoring point defined in advance as a point where monitoring is important.
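As a rough illustration of how the target moving bodies might be selected (individually by measured communication speed, or collectively by proximity to an accident point or priority monitoring point), consider the sketch below; the distance function, field names, and the 500 m radius are illustrative assumptions, not values from the patent.

```python
import math

def distance_m(p, q):
    """Planar approximation of the distance in metres between two
    (latitude, longitude) pairs; adequate for a small vicinity area."""
    lat = math.radians((p[0] + q[0]) / 2)
    dx = (q[1] - p[1]) * 111_320 * math.cos(lat)
    dy = (q[0] - p[0]) * 111_320
    return math.hypot(dx, dy)

def select_targets(mobiles, accident_point=None, radius_m=500,
                   speed_threshold=None):
    """mobiles: iterable of dicts like {'id': ..., 'pos': (lat, lon),
    'comm_speed': ...}. Returns the ids of moving bodies V to which the
    monitoring information transmission command should be sent."""
    selected = set()
    for m in mobiles:
        if speed_threshold is not None and m["comm_speed"] <= speed_threshold:
            selected.add(m["id"])                     # individually slow link
        if accident_point is not None and distance_m(m["pos"], accident_point) <= radius_m:
            selected.add(m["id"])                     # vicinity of accident point
    return selected

# Example: one moving body with a slow link near a reported accident point.
targets = select_targets(
    [{"id": "V1", "pos": (35.680, 139.760), "comm_speed": 0.8}],
    accident_point=(35.681, 139.762), speed_threshold=1.0)
```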
  • The monitoring terminal device 10 mounted on the moving body V selected by the monitor generates a monitoring image according to the composition condition included in the monitoring information transmission command, using the cameras 11a to 11d provided at different positions on the moving body V, and transmits monitoring information including that monitoring image to the central monitoring device 20. In this way, the central monitoring device 20 and the monitoring terminal devices 10 of the moving bodies V cooperate to provide monitoring information for monitoring the town.
  • The image processing device 22 has a map database, displays map information from the map database on the display 24, and superimposes on that map information the position information detected by the position detection devices 15 of the monitoring terminal devices 10. It also performs image processing for displaying on the display 24 the monitoring images captured by the cameras 11 of the monitoring terminal devices 10 and processed by the image processing devices 12.
  • The display 24 can be composed of, for example, a liquid crystal display device of a size capable of displaying two or more window screens on one screen, or of two or more liquid crystal display devices each displaying one of two or more window screens.
  • One window screen displays a screen in which the position information of each moving body V is superimposed on the map information (see FIG. 1), and the other window screen displays a monitoring image generated based on the captured images captured by the cameras 11 of the moving body V.
  • The input device 25 is an input device such as a keyboard, a mouse, or a touch panel.
  • The input device 25 is used when specifying a desired moving body V, outputting a monitoring information transmission command to that moving body V, and inputting processing commands for the various information displayed on the display 24.
  • The communication device 23 is a communication means capable of wireless communication, and exchanges information with the communication devices 13 of the monitoring terminal devices 10 via the telecommunication network 30.
  • As noted above, when the telecommunication network 30 is a commercial telephone network, widely available mobile phone communication devices can be used, and when the telecommunication network 30 is a telecommunication network dedicated to the monitoring system 1 of this example, communication devices 13 and 23 dedicated to that network can be used.
  • The monitoring terminal device 10 is a terminal device mounted on each of the plurality of moving bodies V.
  • The monitoring terminal device 10 has a position detection function for detecting the position information of the moving body V on which it is mounted, a monitoring image generation function for generating a monitoring image based on the captured images of the surroundings of the moving body V captured by the cameras 11, and a communication function for transmitting the position information, the monitoring image, and the time information acquired at a predetermined timing to the central monitoring device 20 and for receiving commands from the central monitoring device 20. To this end, each moving body V includes a plurality of cameras 11a to 11d, an image processing device 12, a communication device 13, a control device 14, and a position detection device 15.
  • The time information is mainly information used for post-event analysis and may be omitted.
  • The plurality of cameras 11 mounted on each moving body V are constituted by CCD cameras or the like, capture images of respective predetermined directions or predetermined areas around the moving body V, and output the imaging signals to the image processing device 12.
  • The camera 11 of this embodiment has a settable resolution and can image the surroundings of the vehicle V at a predetermined resolution.
  • The image processing device 12 reads the imaging signals from the cameras 11 and executes image processing for generating a monitoring image; details of this image processing will be described later.
  • The position detection device 15 includes a GPS device and its correction device, detects the current position of the moving body V, and outputs it to the control device 14.
  • The control device 14 includes a CPU, a ROM, and a RAM, receives the monitoring information transmission command from the central monitoring device 20 via the telecommunication network 30 and the communication device 13, controls the cameras 11, the image processing device 12, the communication device 13, and the position detection device 15, and outputs the monitoring image generated by the image processing device 12, the position information of the moving body V detected by the position detection device 15, and the time information from the clock built into the CPU to the central monitoring device 20 via the communication device 13 and the telecommunication network 30.
  • In generating the monitoring image, the control device 14 follows the monitoring information transmission command acquired from the central monitoring device 20 via the communication device 13 and executes a monitoring image generation function for causing the image processing device 12 to generate, based on the predetermined composition information included in the monitoring information transmission command, a monitoring image including a first image having a relatively high first resolution and a second image having a second resolution lower than the first resolution, and a monitoring information transmission function for transmitting the monitoring information including that monitoring image to the central monitoring device 20 via the communication device 13.
  • In the present embodiment, a monitoring image including at least a first image having a relatively high first resolution and a second image having a relatively low second resolution will be described; the monitoring image may further include images of other resolutions.
  • Before receiving the monitoring information transmission command from the central monitoring device 20, the control device 14 causes the image processing device 12 to generate a monitoring image having a predetermined, preset resolution and transmits it to the central monitoring device 20 via the communication device 13.
  • The cameras 11a to 11d are configured using an image sensor such as a CCD; the four cameras 11a to 11d are installed at different positions on the outside of the passenger car V and respectively photograph the four directions of front, rear, left, and right around the moving body V.
  • The camera 11a, installed at a predetermined position at the front of the passenger car V such as the front grille, captures objects and the road surface existing in the area SP1 in front of the passenger car V and in the space in front of it (front view).
  • The camera 11c, installed at a predetermined position at the rear of the passenger car V such as the rear finisher or roof spoiler, captures objects and the road surface existing in the area SP3 behind the passenger car V and in the space behind it (rear view).
  • FIG. 4 is a view of the arrangement of the cameras 11a to 11d as seen from above the passenger car V.
  • The camera 11a that captures the area SP1, the camera 11b that captures the area SP2, the camera 11c that captures the area SP3, and the camera 11d that captures the area SP4 are installed along the outer periphery VE of the body of the passenger car V.
  • Following the outer periphery VE counterclockwise, the camera 11b is installed to the left of the camera 11a, the camera 11c to the left of the camera 11b, the camera 11d to the left of the camera 11c, and the camera 11a to the left of the camera 11d. Conversely, following the outer periphery VE clockwise, the camera 11d is installed to the right of the camera 11a, the camera 11c to the right of the camera 11d, the camera 11b to the right of the camera 11c, and the camera 11a to the right of the camera 11b.
  • FIG. 5A shows an example of an image GSP1 in which the front camera 11a images the area SP1, FIG. 5B shows an example of an image GSP2 in which the left side camera 11b images the area SP2, FIG. 5C shows an example of an image GSP3 in which the rear camera 11c images the area SP3, and FIG. 5D shows an example of an image GSP4 in which the right side camera 11d images the area SP4.
  • The size of each image in the present embodiment is 480 vertical pixels × 640 horizontal pixels or 960 vertical pixels × 1280 horizontal pixels.
  • An image having the relatively high first resolution of 960 vertical pixels × 1280 horizontal pixels is set as the first image, and an image having the relatively low second resolution of 480 vertical pixels × 640 horizontal pixels is set as the second image.
  • The image size is not particularly limited, and may be any size that can be reproduced by a general terminal device at two relatively different resolutions.
  • The number of cameras 11 and their positions can be determined appropriately according to the size, shape, detection area setting method, and the like of the passenger car V.
  • The plurality of cameras 11 described above are assigned identifiers according to their arrangement, and the control device 14 can identify each camera 11 based on its identifier. The control device 14 can also send an imaging command or other command to a specific camera 11 by attaching the identifier to the command signal.
  • The control device 14 controls the image processing device 12 to acquire the imaging signals captured by the cameras 11, and the image processing device 12 processes the imaging signals from the cameras 11 and converts them into the monitoring images shown in FIGS. 5A to 5D. The control device 14 then generates a monitoring image based on the four monitoring images shown in FIGS. 5A to 5D (monitoring image generation function), associates with the monitoring image mapping information for projecting the monitoring image onto the projection surface set on the side of a columnar projection model (mapping information addition function), and outputs the result to the central monitoring device 20.
  • The monitoring image generation function and the mapping information addition function will now be described in detail.
  • The process of generating the monitoring image based on the four monitoring images obtained by imaging the periphery of the passenger car V and of associating the mapping information with it can be executed by the monitoring terminal device 10, as in this example, or by the central monitoring device 20.
  • In the latter case, the four monitoring images obtained by imaging the periphery of the passenger car V are transmitted as they are from the monitoring terminal device 10 to the central monitoring device 20, and the image processing device 22 and the central control device 21 of the central monitoring device 20 generate the monitoring image, associate the mapping information, and perform the projection conversion.
  • The control device 14 of the monitoring terminal device 10 controls the image processing device 12 to acquire the imaging signals of the cameras 11a to 11d and to generate one monitoring image in which the monitoring images of the cameras 11a to 11d, installed in the clockwise or counterclockwise direction along the outer periphery of the body of the passenger car V, are arranged in the installation order of those cameras 11a to 11d.
  • In the present embodiment, the four cameras 11a to 11d are installed in the order of the cameras 11a, 11b, 11c, and 11d in the counterclockwise direction along the outer periphery VE of the body of the passenger car V, so the control device 14 connects the four images captured by these cameras horizontally in a row in accordance with their installation order (cameras 11a → 11b → 11c → 11d) to generate one monitoring image. In that monitoring image, the images are arranged such that the ground contact surface (road surface) of the passenger vehicle V is on the lower side, and the images are connected to each other at their sides in the height direction (vertical direction) with respect to the road surface.
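A minimal sketch of the stitching step described above, assuming the four captured images are already available as equally sized arrays (NumPy is used purely for illustration; the patent does not specify an implementation):

```python
import numpy as np

def build_monitoring_image(front, left, rear, right):
    """Arrange the four captured images side by side in the installation order
    of cameras 11a -> 11b -> 11c -> 11d (front, left, rear, right), with the
    road-surface side of every image at the bottom, joined along their
    vertical edges."""
    images = [front, left, rear, right]
    h = min(img.shape[0] for img in images)
    w = min(img.shape[1] for img in images)
    images = [img[:h, :w] for img in images]        # equalise sizes if needed
    return np.hstack(images)                        # one horizontal strip K

# Example: four 480x640 RGB frames become one 480x2560 monitoring image K.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
monitoring_image_k = build_monitoring_image(*frames)
assert monitoring_image_k.shape == (480, 2560, 3)
```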
  • FIG. 6 is a diagram illustrating an example of the monitoring image K.
  • In the monitoring image K of the present embodiment, along the direction P from the left side to the right side of the drawing, a captured image GSP1 obtained by the front camera 11a imaging the area SP1, a captured image GSP2 obtained by the left side camera 11b imaging the area SP2, a captured image GSP3 obtained by the rear camera 11c imaging the area SP3, and a captured image GSP4 obtained by the right side camera 11d imaging the area SP4 are arranged in this order in the horizontal direction, and these four images are treated as a series of continuous images.
  • When the monitoring image K generated in this way is displayed in order from the left end toward the right side with the image portion corresponding to the road surface (the ground contact surface of the moving body V) facing down, the monitor can view it on the display 24 just as if looking around the moving body V clockwise. Because of this arrangement, the monitoring image K in the mode shown in FIG. 6 can be reproduced in the expected order, without the positional relationship being disturbed, even when processing such as transmission and reception is performed.
  • When one monitoring image K is generated, four images acquired at substantially the same time as the photographing timings of the cameras 11a to 11d are used. Since the information included in the monitoring image K is thereby synchronized, the situation around the moving body V at a given timing can be expressed accurately.
  • The monitoring images K generated from captured images with substantially the same imaging timing may also be stored over time, and a moving-image monitoring image K including a plurality of monitoring images K per predetermined unit time may be generated. By generating the moving-image monitoring image K based on images with the same imaging timing, changes in the situation around the moving body V can be represented accurately.
  • A conventional central monitoring device has the disadvantage that it cannot simultaneously watch images (moving images) in a plurality of directions and cannot monitor the entire periphery of the moving body V on one screen.
  • In contrast, since the control device 14 of the present embodiment generates one monitoring image K from a plurality of images, moving images in different imaging directions can be played back simultaneously regardless of the functions of the central monitoring device 20. That is, by continuously reproducing the monitoring image K (moving-image reproduction), the four images included in the monitoring image K are reproduced simultaneously, and changes in the state of the regions in different directions can be monitored on one screen.
  • The first image of the first resolution of the present embodiment includes the images of 960 × 1280 pixels shown in FIGS. 5A to 5D, and the monitoring image K obtained by combining these four images into one image as shown in FIG. 6 and compressing it to 960 × 1280 pixels.
  • The second image of the second resolution includes the images of 480 × 640 pixels shown in FIGS. 5A to 5D, and the monitoring image K obtained by combining these four images into one image as shown in FIG. 6 and compressing it to 1280 × 240 pixels.
  • The control device 14 of the monitoring terminal device 10 can also attach to the monitoring image K line figures indicating the boundaries between the arranged images.
  • In this example, the control device 14 can attach to the monitoring image K rectangular partition images Bb, Bc, Bd, Ba, and Ba' between the images as the line figures indicating the boundaries between the arranged images.
  • The partition images function as frames of the respective captured images.
  • Since image distortion is large in the vicinity of the boundaries of the captured images, arranging the partition images at the boundaries of the captured images makes it possible to hide the regions with large distortion or to suggest that the distortion there is large.
  • The control device 14 can also generate the monitoring image K after correcting the distortion that arises when the four images are projected onto the projection plane set on the side surface of the projection model described later.
  • Since the vicinity of the boundaries of the captured images tends to be largely distorted, it is desirable to correct the distortion of the captured images using an image conversion algorithm and a correction amount defined in advance.
  • That is, the control device 14 reads from the ROM information on the same projection model as the projection model used for projecting the monitoring image K in the central monitoring device 20, projects the captured images onto the projection plane of that projection model, and corrects in advance the distortion that would occur on the projection plane.
  • The image conversion algorithm and the correction amount can be defined appropriately according to the characteristics of the cameras 11 and the shape of the projection model. By correcting in advance the distortion that the monitoring image K exhibits when projected onto the projection plane of the projection model in this way, a monitoring image K with little distortion and good visibility can be provided, and the positional deviation between the images arranged side by side can be reduced.
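One common way to realise a predefined per-camera correction of this kind (an assumption for illustration, not the patent's stated algorithm) is a standard lens-undistortion step applied before stitching, for example with OpenCV:

```python
import numpy as np
import cv2  # OpenCV; assumed available for this illustrative sketch

def correct_distortion(image, camera_matrix, dist_coeffs):
    """Apply a correction defined in advance for one camera; camera_matrix and
    dist_coeffs would be calibrated offline for each of the cameras 11a-11d."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)

# Illustrative calibration values only.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # strong barrel distortion
corrected = correct_distortion(np.zeros((480, 640, 3), np.uint8), K, dist)
```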
  • Next, the mapping information addition function will be described.
  • The control device 14 executes a process of associating with the monitoring image K mapping information for projecting the generated monitoring image K onto the projection plane set on the side surface of the columnar projection model M whose bottom surface is the ground contact surface of the passenger car V.
  • The mapping information is information that allows the central monitoring device 20 that receives the monitoring image K to recognize the projection reference position easily.
  • FIG. 8 is a diagram showing an example of the projection model M of the present embodiment, and FIG. 9 is a schematic cross-sectional view of the projection model M shown in FIG. 8 taken along the xy plane.
  • The projection model M of the present embodiment is a regular octagonal prism having a regular octagonal bottom surface and a height along the vertical direction (the z-axis direction in the figure).
  • The shape of the projection model M is not particularly limited as long as it is a column having side surfaces adjacent to each other along the boundary of the bottom surface; a cylinder, a prism such as a triangular prism, a quadrangular prism, or a hexagonal prism, or an antiprism having a polygonal bottom surface and triangular side surfaces can also be used.
  • The bottom surface of the projection model M of this embodiment is parallel to the ground contact surface of the passenger car V.
  • Projection surfaces Sa, Sb, Sc, and Sd (hereinafter collectively referred to as the projection surface S), onto which an image of the surroundings of the passenger car V in contact with the bottom surface of the projection model M is projected, are set on the inner side of the side surfaces of the projection model M.
  • The projection surface S includes a portion spanning part of the projection surface Sa and part of the projection surface Sb, a portion spanning part of the projection surface Sb and part of the projection surface Sc, a portion spanning part of the projection surface Sc and part of the projection surface Sd, and a portion spanning part of the projection surface Sd and part of the projection surface Sa.
  • The monitoring image K is projected onto the projection surface S as an image of the surroundings of the passenger car V viewed from viewpoints R (R1 to R8, hereinafter collectively referred to as the viewpoint R) above the projection model M surrounding the passenger car V.
  • The control device 14 associates with the monitoring image K, as mapping information, the reference coordinates of the captured images arranged at the right end and the left end.
  • Specifically, as the mapping information (reference coordinates) indicating the start position or end position of the monitoring image K when it is projected onto the projection model M, the control device 14 attaches to the monitoring image K the coordinates A(x, y) of the upper left vertex of the captured image GSP1 arranged at the right end and the coordinates B(x, y) of the upper right vertex of the captured image GSP2 arranged at the left end.
  • The reference coordinates of the captured images indicating the start position or the end position are not particularly limited, and may instead be the lower left vertex of the captured image arranged at the left end or the lower right vertex of the captured image arranged at the right end.
  • The mapping information may be attached to each pixel of the image data of the monitoring image K, or may be managed as a file separate from the monitoring image K.
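Whether carried per pixel or as a separate file, the mapping information is essentially a small record travelling with the monitoring image K. The sketch below keeps it as such a record; the field names and packaging format are illustrative assumptions only.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MappingInfo:
    """Reference coordinates that let the central monitoring device 20 find the
    start/end of the monitoring image K when projecting onto the model M."""
    start_ref: tuple   # e.g. coordinates A(x, y): upper-left vertex at one end
    end_ref: tuple     # e.g. coordinates B(x, y): upper-right vertex at the other end
    camera_order: tuple = ("11a", "11b", "11c", "11d")

def package_monitoring_info(monitoring_image_bytes, mapping, position, timestamp):
    """Bundle the monitoring image with its mapping information, position
    information and time information into one transmission unit."""
    return {
        "image": monitoring_image_bytes,
        "mapping": json.dumps(asdict(mapping)),
        "position": position,      # (latitude, longitude) from GPS device 15
        "time": timestamp,
    }
```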
  • By associating the information indicating the start position or end position of the monitoring image K with the monitoring image K as mapping information in this way, the central monitoring device 20 that receives the monitoring image K can easily recognize the reference position for the projection process, and can therefore sequentially and easily project the monitoring image K, in which the images are arranged in the arrangement order of the cameras 11a to 11d, onto the projection surface S on the side surfaces of the projection model M. That is, as shown in FIG. 9, the captured image GSP1 of the area in front of the moving body V can be projected onto the projection surface Sa located in the imaging direction of the camera 11a, the captured image GSP2 of the left side of the moving body V onto the projection surface Sb located in the imaging direction of the camera 11b, the captured image GSP3 of the area behind the moving body V onto the projection surface Sc located in the imaging direction of the camera 11c, and the captured image GSP4 of the right side of the moving body V onto the projection surface Sd located in the imaging direction of the camera 11d.
  • As a result, the monitoring image K projected onto the projection model M shows an image that looks as if one were looking around the passenger car V. That is, since the monitoring image K, in which the four images are arranged in a horizontal row in accordance with the installation order of the cameras 11a to 11d, is projected onto the side surfaces, likewise arranged in the horizontal direction, of the columnar projection model M, the image around the passenger car V can be reproduced in the monitoring image K projected onto the projection surface S of the columnar projection model M while its positional relationship is maintained.
  • The control device 14 of the present embodiment can also store the correspondence between the coordinate values of the monitoring image K and the coordinate values of the projection surfaces S of the projection model M as mapping information and attach it to the monitoring image K; alternatively, this correspondence may be stored in the central monitoring device 20 in advance.
  • The positions of the viewpoints R and the projection surface S shown in FIGS. 8 and 9 are examples and can be set arbitrarily.
  • The viewpoint R can be changed by an operation of the operator. The relationship between the viewpoint R and the projection position of the monitoring image K is defined in advance, and when the position of the viewpoint R is changed, a predetermined coordinate transformation is performed so that the monitoring image K viewed from the newly set viewpoint R can be projected onto the projection surfaces S (Sa to Sd). A known method can be used for this viewpoint conversion processing.
  • The control device 14 generates the monitoring image K based on the monitoring images captured at a predetermined timing, associates with it the mapping information, the reference coordinates, and the information on the line figures (partition images) indicating the boundaries, and stores them over time according to the imaging timing.
  • The control device 14 may store the monitoring image K as one moving image file including a plurality of monitoring images K per predetermined unit time, or may store the monitoring image K in a form that can be transferred and reproduced by a streaming method.
  • The control device 14 of this embodiment can also include in the monitoring information the movement speed acquired from a device that detects the movement speed of the moving body V, in this example the vehicle speed sensor 16 with which the vehicle is provided.
  • This movement speed (vehicle speed) is used when the composition condition included in the monitoring information transmission command is set on the central monitoring device 20 side.
  • The communication device 23 of the central monitoring device 20 receives the monitoring image K transmitted from the monitoring terminal device 10 and the mapping information associated with the monitoring image K.
  • In this monitoring image K, the images of the four cameras 11, installed at different positions on the body of the passenger car V along its outer periphery, are arranged according to the installation order of the cameras 11a to 11d (the clockwise or counterclockwise order along the outer periphery of the body of the moving body V), and the monitoring image K is associated with the mapping information for projecting it onto the projection surface S of the octagonal prism projection model M.
  • The communication device 23 sends the acquired monitoring image K and mapping information to the image processing device 22.
  • The image processing device 22 reads the projection model M stored in advance and, based on the mapping information, generates a display image by projecting the monitoring image K onto the projection surfaces Sa to Sd set on the side surfaces of the octagonal prism projection model M whose bottom surface is the ground contact surface of the passenger car V shown in FIGS. 8 and 9. Specifically, each pixel of the received monitoring image K is projected onto the corresponding pixel of the projection surfaces Sa to Sd according to the mapping information. When projecting the monitoring image K onto the projection model M, the image processing device 22 recognizes the start point of the monitoring image K (the right end or left end of the monitoring image K) based on the reference coordinates received together with the monitoring image K, and performs the projection processing so that this start point coincides with the start point defined in advance on the projection model M (the right end or left end of the projection surface S). Further, when projecting the monitoring image K onto the projection model M, the image processing device 22 arranges the line figures (partition images) indicating the boundaries of the images on the projection model M.
  • The partition images can be attached to the projection model M in advance, or can be attached to the monitoring image K after the projection processing.
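On the receiving side, the essential step is to use the reference coordinates to locate the start of the strip and then assign each quarter of the monitoring image K to its projection surface Sa to Sd. A simplified sketch follows; the real device projects per pixel onto the 3-D model, whereas here the strip is only aligned, sliced, and labelled, and the face order is assumed from the camera installation order.

```python
import numpy as np

FACES = ("Sa", "Sb", "Sc", "Sd")   # faces hit by cameras 11a, 11b, 11c, 11d

def split_for_projection(monitoring_image_k, start_x=0):
    """Slice the received monitoring image K back into the four per-camera
    images, starting from the reference x coordinate carried in the mapping
    information, and pair each slice with its projection surface."""
    w = monitoring_image_k.shape[1]
    strip = np.roll(monitoring_image_k, -start_x, axis=1)  # align the start point
    quarter = w // 4
    return {face: strip[:, i * quarter:(i + 1) * quarter]
            for i, face in enumerate(FACES)}

# Example with the 480x2560 strip built earlier: each face gets a 480x640 image.
k = np.zeros((480, 2560, 3), dtype=np.uint8)
per_face = split_for_projection(k)
assert per_face["Sc"].shape == (480, 640, 3)
```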
  • The display 24, or the touch-panel display 24 having a display function, displays the monitoring image K projected onto the projection surface S of the projection model M.
  • FIGS. 10 to 17 show examples of display images of the monitoring image K.
  • FIG. 10 shows the monitoring image K projected onto the projection surfaces Sd, Sa, and Sb as viewed from the viewpoint R1 shown in FIGS. 8 and 9.
  • An image of the moving body V viewed from each viewpoint R is pasted onto the bottom surface of the projection model M. The portions where no image is displayed between the projection surfaces Sd, Sa, and Sb are the "line figures indicating boundaries (partition images)".
  • Similarly, FIG. 11 shows the monitoring image K viewed from the viewpoint R2, FIG. 12 shows the monitoring image K viewed from the viewpoint R3, FIG. 13 shows the monitoring image K viewed from the viewpoint R4, FIG. 14 shows the monitoring image K viewed from the viewpoint R5, FIG. 15 shows the monitoring image K viewed from the viewpoint R6, FIG. 16 shows the monitoring image K viewed from the viewpoint R7, and FIG. 17 shows the monitoring image K viewed from the viewpoint R8.
  • Since the monitoring terminal device 10 arranges the captured images of the cameras 11 side by side along the x-axis direction or the y-axis direction (laterally) according to the installation order of the cameras 11 installed on the body of the moving body V, and the arranged monitoring image K is mapped (laterally) along the side surfaces of the columnar projection model M in that same order, the monitoring image K shown on the projection model M can present an image as seen when looking clockwise around the moving body V. That is, while remaining at a position away from the moving body V, the supervisor can obtain, by watching the monitoring image K, the same information as if he or she were aboard the moving body V and looking around.
  • For example, the captured image GSP1 of the camera 11a provided on the front grille of the moving body V is projected onto the projection surface S facing the front grille of the moving body V, the captured image GSP3 is projected onto the projection surface S facing the right side mirror of the moving body V, and the captured image GSP2 of the camera 11b provided on the left side mirror of the moving body V can be projected onto the projection surface S facing the left side mirror of the moving body V.
  • Since the monitoring image K is projected while the positional relationship of the video around the moving body V is maintained, the monitor can easily grasp what is happening around the moving body V.
  • Since FIGS. 10 to 17 attached to the present application are still images, they cannot show the actual reproduction state of the display image on the display screen of the display 24, but each of the images shown on each projection surface S is a moving image. That is, a moving image of the imaging area SP1 in front of the moving body V is projected onto the projection surface S facing the front grille of the moving body V, a moving image of the imaging area SP4 on the right side of the moving body V is displayed on the projection surface S facing the right side mirror of the moving body V, a moving image of the imaging area SP3 behind the moving body V is displayed on the projection surface S facing the rear part of the moving body V, and a moving image of the imaging area SP2 on the left side of the moving body V is projected onto the projection surface S facing the left side of the moving body V. In other words, a plurality of moving-image monitoring images K based on captured images captured by different cameras 11 can be reproduced simultaneously on the projection surfaces S shown in FIGS. 10 to 17.
  • The viewpoint can be freely set and changed by an operation of the supervisor. Since the correspondence between the viewpoint position and the projection surface S is defined in advance in the image processing device 22 or the display 24 described above, the monitoring image K corresponding to the changed viewpoint can be displayed on the display 24 based on this correspondence.
  • The resolution of the images included in each display image shown in FIGS. 10 to 17 may be made common from the viewpoint of ease of viewing, or may be made different from the viewpoint of reducing the amount of communication data.
  • In the latter case, the resolutions of the images constituting the monitoring image K at a given timing may differ, but a plurality of moving-image monitoring images K can still be reproduced simultaneously.
  • FIG. 18 is a flowchart showing the operation on the monitoring terminal device 10 side, and FIGS. 19A, 19B, and 19C are flowcharts showing the operation on the central monitoring device 20 side.
  • First, in the routine shown in FIG. 18, which is executed at a predetermined time interval, the surrounding video and the interior video are acquired from the on-board cameras 11 and converted into a monitoring image by the image processing device 12 (step ST1). Further, the current position information of the passenger car V on which the monitoring terminal device 10 is mounted is detected by the position detection device 15 having a GPS (step ST2).
  • In step ST3, it is determined whether or not the report button 16 for reporting an abnormality has been pressed. If the report button 16 has been pressed, the process proceeds to step ST4, where the monitoring image acquired in step ST1 and the position information acquired in step ST2 are associated with the time information of the CPU clock, and these are transmitted as monitoring information, together with abnormality information indicating that an abnormality has occurred, to the central monitoring device 20 via the communication device 13 and the telecommunication network 30.
  • In this way, the occurrence of a security-related abnormality such as an accident or a crime is automatically transmitted to the central monitoring device 20 together with the position information of the passenger car V and the monitoring image around the passenger car V, thereby further strengthening monitoring in the town.
  • Although the monitoring image and the position information are acquired in the first steps ST1 and ST2 here, they may instead be acquired at a timing between steps ST3 and ST4.
  • step ST3 if the report button 16 is not pressed, the process proceeds to step ST5, where it communicates with the central monitoring device 20 and obtains a control command.
  • step ST6 the monitoring terminal device 10 determines whether or not a monitoring information transmission command has been acquired from the central monitoring device 20, and proceeds to step ST7 if a monitoring information transmission command has been acquired.
  • step ST7 it is determined whether or not the acquired monitoring information transmission command includes a command for generating a monitoring image in which images of different resolutions are mixed under a predetermined composition condition.
  • If such a command is included, the process proceeds to step ST8 and the resolution is changed. The resolution change command is transmitted to the image processing device 12, which performs image editing processing according to the resolution, or to the camera 11, which has a resolution setting function. If the acquired monitoring information transmission command does not include a command for generating a monitoring image in which images of different resolutions are mixed under a predetermined composition condition, a monitoring image of the relatively high first resolution is generated without changing the resolution. That is, when the composition condition is included in the monitoring information transmission command, the monitoring image is generated by also mixing in second images of the relatively low second resolution. In step ST9, the control device 14 generates first images of the first resolution and second images of the second resolution according to the composition condition of the monitoring information transmission command, and mixes the first images and the second images at the ratio specified by the composition condition.
  • the “ratio at which the first image and the second image are mixed” in the composition condition is adjusted by the interval, frequency, or cycle at which the first image and the second image are generated and transmitted. That is, the mixing ratio of the first image and the second image in the composition condition is the number of the first images of the first resolution generated per unit time or the number of the first images transmitted per unit time. It can be adjusted by the ratio of the number of second images of the second resolution generated per unit time or the number of second images transmitted per unit time.
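The following is a minimal Python sketch of how such a mixing ratio could be expressed as per-unit-time transmission counts and expanded into an ordered transmission schedule. The class and function names are illustrative assumptions and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class MixingRatio:
    """Ratio of first (high-resolution) to second (low-resolution) images
    generated or transmitted per unit time, e.g. 1:1 or 1:2."""
    first_per_unit: int   # number of first-resolution images per unit time
    second_per_unit: int  # number of second-resolution images per unit time

def build_transmission_schedule(ratio: MixingRatio, units: int) -> list:
    """Expand the per-unit-time counts into an ordered schedule of frames.

    Each unit of time contributes `first_per_unit` high-resolution frames
    followed by `second_per_unit` low-resolution frames; the interval between
    frames within one unit follows from how many frames share that unit.
    """
    schedule = []
    for _ in range(units):
        schedule += ["first"] * ratio.first_per_unit
        schedule += ["second"] * ratio.second_per_unit
    return schedule

# Example: the 1:2 mix used when the proportion of low-resolution images is raised.
print(build_transmission_schedule(MixingRatio(1, 2), units=2))
# ['first', 'second', 'second', 'first', 'second', 'second']
```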
  • By providing each camera 11 with a resolution setting function, the control device 14 can capture the vehicle periphery at the first resolution or the second resolution and generate a monitoring image in which the first image and the second image are mixed under the predetermined composition condition. Alternatively, the first image of the first resolution and the second image of the second resolution can be generated by the image processing device 12, which has a function of processing a captured image captured by the camera 11 into an image of a predetermined resolution. In this case, the control device 14 causes the image processing device 12 to generate a monitoring image in which a first image of the first resolution and a second image of the second resolution lower than the first resolution are mixed according to the composition condition of the monitoring information transmission command.
  • the control device 14 transmits monitoring information including the monitoring image, time information, and position information according to the composition condition to the central monitoring device 20 according to the monitoring information transmission command.
  • the monitoring information may include the moving speed of the moving object V as necessary.
  • monitoring information such as a monitoring image, position information, and time information is stored in the memory of the monitoring terminal device 10.
  • When the monitoring information transmission command is not acquired from the central monitoring device 20, the process proceeds to step ST10, where it is determined whether or not the passenger vehicle V exists in a predefined priority monitoring area. If the passenger vehicle V is within the priority monitoring area, monitoring information including a monitoring image is transmitted. In this case, from the viewpoint that a detailed monitoring image is preferable within the priority monitoring area, monitoring information including a monitoring image whose resolution has not been changed is transmitted. If the passenger vehicle V is not within the priority monitoring area, the process proceeds to step ST11, and monitoring information not including the monitoring image, that is, time information and position information, is transmitted to the central monitoring device 20.
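As a rough illustration of the branching of this terminal-side routine (roughly steps ST1 to ST11 of FIG. 18), the following Python sketch shows one pass of the loop. The function signature, dictionary keys, and return values are hypothetical placeholders, not part of the patent.

```python
def terminal_routine(image, position, now,
                     report_pressed, command, in_priority_area):
    """One pass of the terminal-side routine (roughly steps ST1 to ST11).

    `image`, `position`, and `now` stand for the data gathered in ST1 and ST2;
    `command` is the control command fetched from the central device (or None).
    Returns the monitoring information to transmit as a dict.
    """
    if report_pressed:                                        # ST3 -> ST4
        return {"image": image, "position": position,
                "time": now, "abnormal": True}
    if command is not None and command.get("send_monitoring_info"):  # ST6
        resolution_mix = command.get("composition_condition")        # ST7
        # ST8/ST9: when a composition condition is present, the monitoring
        # image is regenerated as a mix of first- and second-resolution frames.
        return {"image": image, "position": position, "time": now,
                "composition": resolution_mix}
    if in_priority_area:                                      # ST10
        return {"image": image, "position": position, "time": now}
    return {"position": position, "time": now}                # ST11

print(terminal_routine(image="frame0", position=(35.6, 139.7), now="12:00:00",
                       report_pressed=False,
                       command={"send_monitoring_info": True,
                                "composition_condition": {"ratio": (1, 2)}},
                       in_priority_area=False))
```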
  • Next, the operation on the central monitoring device 20 side will be described. In step ST11 of FIG. 19A, position information and time information are acquired from all the passenger cars V and stored in the database 26.
  • the database 26 stores monitoring information including monitoring images, position information, and time information acquired from the passenger car V (monitoring terminal device 10) in association with the position information. That is, if position information is designated, a series of monitoring information can be called.
  • the monitoring information can include a mobile body ID (monitoring terminal device ID) for specifying the monitoring terminal device 10.
  • the mobile object ID may be the address of the communication device 13 of the monitoring terminal device 10.
  • In step ST12, based on the position information acquired in step ST11, the position of each passenger car V is displayed by superimposing a dot on the map information of the map database shown on the display 24, as shown in the upper left of FIG. By looking at this map information, it can be seen at a glance at which position each moving body V carrying the monitoring terminal device 10 is traveling. In other words, the moving body V on which the monitoring terminal device 10 existing at the position to be monitored is mounted can be identified. Since the position information of the passenger car V is acquired and transmitted at a predetermined timing for each routine of FIG. 18, the supervisor can grasp the current position of the passenger car V in a timely manner.
  • step ST13 it is determined whether or not abnormality information notified from the monitoring terminal device 10 of the passenger car V, that is, a notification that an abnormality relating to security such as an accident or a crime has occurred has been received.
  • This abnormality information is output when the passenger of the passenger car V presses the notification button 16 of the monitoring terminal device 10.
  • the passenger vehicle V for which the abnormality information is output is identified in step ST14, the monitoring image and the time information are received from the monitoring terminal device 10 of the passenger vehicle, and the monitoring image is displayed on the display 24. Further, as shown in the upper left of FIG. 1, highlighting is performed such as changing the color so that the passenger car displayed on the map information can be distinguished from other passenger cars. Thereby, the position where the abnormality has occurred can be visually recognized on the map information, and the abnormality content can be grasped on the display 24.
  • the passenger vehicle V traveling in the vicinity (within a predetermined distance) of the passenger vehicle V that has output the abnormality information is detected, and a monitoring information transmission command including a monitoring image and time information is output to the passenger vehicle V.
  • In this way, monitoring information can also be acquired from passenger vehicles V traveling in the vicinity of the passenger vehicle V that has output the abnormality information. Therefore, in addition to the monitoring image from the passenger vehicle V that has output the abnormality information, monitoring images of the site from other points of view can be obtained, and the content of the abnormality can be understood in detail.
  • the monitoring information transmission command can include a composition condition described later.
  • In step ST16, the position information of the passenger vehicle V that has output the abnormality information is transmitted to emergency vehicles such as police cars, ambulances, and fire engines. At this time, a monitoring image may be attached and transmitted in order to convey the content of the abnormality. Thereby, the emergency vehicle can be dispatched before a report from the site is received, and a quick response to an accident or crime becomes possible.
  • step ST17 all position information, monitoring images and time information received from the monitoring terminal device 10 are recorded on the recording medium. This record is used to resolve these after an accident or crime. If there is no abnormality information in step ST13, the process proceeds to step ST21 in FIG. 19B without performing the processes in steps ST14 to ST17.
  • step ST21 it is determined whether there is an image information transmission command from an emergency passenger car such as a police car, an ambulance, or a fire engine. If an image transmission command is input, the process proceeds to step ST22. In step ST22, it is determined whether or not the passenger car V exists in the area specified by the image transmission command. If the passenger car V exists, the process proceeds to step ST23. In step ST23, a monitoring information transmission command is output to the passenger vehicle V existing in the area specified by the image transmission command. Thereby, the image information from the passenger car V can also be acquired in step ST11 of FIG. 19A in the next routine, and this can be transferred to the emergency passenger car, or the meaning of the transmission command from the emergency passenger car can be grasped. can do. If not corresponding to steps ST21 and ST22, the process proceeds to step ST24 without performing the processes of steps ST21 to ST23.
  • In step ST24, it is determined whether or not there is a passenger car V in the vicinity of a suspicious location, such as a preset area where crimes frequently occur. If there is, the process proceeds to step ST25, and a monitoring information transmission command requesting a monitoring image is output to that passenger car V. Suspicious locations are, for example, streets and districts with poor security. As a result, the monitoring of such streets and districts can be strengthened, and prevention of crime can be expected. If no passenger vehicle V exists in the region near the suspicious location, the process proceeds to step ST26 without performing the process of step ST25.
  • In step ST26, it is determined whether or not there is a passenger vehicle V in the vicinity of a priority monitoring position from which a priority monitoring object whose details should be monitored can be imaged. If a passenger vehicle V exists in the vicinity of the priority monitoring position, the process proceeds to step ST27, and a priority monitoring command requesting transmission of monitoring information including a monitoring image in which the priority monitoring target is enlarged is output to that passenger vehicle V. As a result, the priority monitoring target can be monitored in detail, and a suspicious object that may cause an incident or an accident at the specified priority monitoring target can be detected effectively, so that prevention of crime can be expected. If there is no passenger vehicle V in the vicinity of the priority monitoring position, the process proceeds to step ST28 without performing the process of step ST27.
  • In step ST28, based on the position information received from each passenger car V, it is determined whether there is a route, within a predetermined area that is required to be monitored (not limited to the suspicious locations and the priority monitoring areas), on which no passenger car V has traveled within a predetermined time. When there is such a route, it is monitored whether a passenger vehicle V is currently traveling on that route. If there is a passenger car V traveling on the route most recently, the process proceeds to step ST29, and a monitoring information transmission command requesting a monitoring image is output to that passenger car V. As a result, a monitoring image of a route that is outside the suspicious locations and the priority monitoring areas and has a small traffic volume of passenger cars V can be acquired automatically. If there is no route that satisfies the condition of step ST28, the process returns to step ST11 of FIG. 19A without performing the process of step ST29.
  • The above-described monitoring information transmission command can include a command requesting transmission of the monitoring image, position information, time information, and moving speed, and can further include an image resolution setting command that is applied when the monitoring image is created.
  • FIG. 19C is a flowchart showing a procedure of processing relating to generation and transmission of a monitoring information transmission command including a predetermined composition condition.
  • Referring to FIG. 19C and FIGS. 20 to 23, the generation and transmission processing of the monitoring information transmission command of this embodiment will be described. FIGS. 20 to 23 are diagrams for explaining the transmission state of the monitoring image based on each composition condition.
  • the central monitoring device 20 records the monitoring image, the position information, the time information, and the moving speed acquired from each passenger vehicle V in a recording medium in association with the identifier of each moving body V.
  • Step ST31 and the subsequent steps are processing relating to the generation and transmission of the next monitoring information transmission command based on the information acquired up to the previous routine. In step ST31, the central monitoring device 20 presents the monitoring image of the acquired monitoring information on the touch panel display 24, which has an input function, so that it can be viewed by the supervisor.
  • In step ST32, the central monitoring device 20 receives from the supervisor information specifying the moving body V present at the position to be monitored. This operation is, for example, clicking the dot corresponding to the moving body V displayed in the map information shown in the upper left of FIG.
  • step ST33 the central monitoring apparatus 20 receives an input of a gaze direction that the supervisor wants to gaze at.
  • the gaze direction is input by touching, tapping, or sliding touching the screen of the touch panel display 24.
  • the direction of the touched point can be specified as the gaze direction.
  • For example, the supervisor performs an input for selecting the captured image GSP2, which is obtained by the left side camera 11b imaging the area SP2, in the monitoring image K. Alternatively, the supervisor touches the point PRa in the area of the touch panel display 24 where the captured image GSP4 is presented, whereby the right direction of the moving body V can be input as the gaze direction.
  • the central monitoring device 20 calculates a communication speed in communication (information exchange) with the monitoring terminal device 10 mounted on the identified vehicle V.
  • the communication speed in the present embodiment is the data amount per unit time of information received by the central monitoring device 20 from the monitoring terminal device 10 side.
  • the communication speed can be determined based on the frame rate related to monitoring image processing in the central monitoring device 20 and the time required from transmission of information to confirmation of transmission completion in the monitoring terminal device 10.
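As a hedged sketch of how such a communication speed could be tracked on the central device side, the following Python code estimates the received data amount per unit time over a sliding window and combines it with the frame rate and the transmit-to-acknowledge delay mentioned above. The class, thresholds, and default values are illustrative assumptions only.

```python
import time
from collections import deque

class ThroughputEstimator:
    """Estimate the data amount received per unit time (the 'communication
    speed' of the embodiment) over a sliding window of seconds."""

    def __init__(self, window_seconds=5.0):
        self.window = window_seconds
        self.samples = deque()   # (timestamp, n_bytes) pairs

    def record(self, n_bytes, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, n_bytes))
        # Drop samples that have fallen out of the window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def bytes_per_second(self, now=None):
        # Approximation: total bytes in the window divided by the window length.
        now = time.monotonic() if now is None else now
        recent = [b for t, b in self.samples if now - t <= self.window]
        return sum(recent) / self.window

# A frame rate below a threshold, or a long transmit-to-acknowledge time,
# can serve as equivalent indicators that the speed is degraded.
def speed_degraded(bytes_per_second, frame_rate, ack_delay,
                   min_bps=250_000, min_fps=10.0, max_delay=1.0):
    # All threshold values here are assumed for illustration.
    return (bytes_per_second <= min_bps or frame_rate <= min_fps
            or ack_delay >= max_delay)
```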
  • In step ST34, the central monitoring device 20 determines whether or not the communication speed is equal to or lower than a predetermined threshold (whether the frame rate is equal to or lower than a predetermined threshold, or whether the time required from transmission to confirmation of transmission completion is equal to or higher than a predetermined threshold). If the central monitoring device 20 determines in step ST34 that the communication speed is equal to or lower than the predetermined threshold (the frame rate is equal to or lower than the predetermined threshold, or the time until information transmission is completed is equal to or higher than the predetermined value), the process proceeds to step ST35. In step ST35, generation of the composition condition for a monitoring image including images of different resolutions is started. The resolution can be specified for each frame.
  • The composition condition in this embodiment defines that a first image having a relatively high first resolution and a second image having a second resolution lower than the first resolution are mixed under a predetermined condition. Specifically, the composition condition of the present embodiment defines that at least the first image and the second image having different resolutions are included at a predetermined ratio, but it may also define that images of further different third to n-th resolutions are included at predetermined ratios. In other words, the composition condition of the present embodiment defines a combination of the first image of the first resolution, which is the resolution of the image transmitted in the normal state where the communication speed is not equal to or lower than the predetermined threshold, and the second image of the second resolution lower than the first resolution.
  • Examples of the composition condition include the transmission frequency of the first image and the second image transmitted as the monitoring image, the existence ratio of the first image and the second image included in the monitoring image transmitted per unit time, the transmission order of the first image and the second image, and the transmission interval (cycle) of the first image and the second image transmitted sequentially according to that transmission order.
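The fields just listed could be gathered into a simple data structure such as the following Python sketch. The field names and default values are illustrative assumptions; only the 1280 x 960 and 640 x 480 pixel sizes are taken from the example below.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class CompositionCondition:
    """Sketch of the items a composition condition is described as carrying:
    resolutions, mixing ratio, transmission order and interval, and an
    optional per-direction assignment of the high resolution."""
    first_resolution: Tuple[int, int] = (1280, 960)   # relatively high resolution
    second_resolution: Tuple[int, int] = (640, 480)   # lower resolution
    ratio_first_to_second: Tuple[int, int] = (1, 1)   # existence ratio per unit time
    transmission_order: List[str] = field(default_factory=lambda: ["first", "second"])
    transmission_interval_s: float = 1.0              # cycle between successive images
    high_resolution_directions: Optional[List[str]] = None  # e.g. ["front"]

print(CompositionCondition(ratio_first_to_second=(1, 2),
                           high_resolution_directions=["front"]))
```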
  • FIG. 20 is a diagram for explaining the transmission state of the monitoring image based on the first composition condition.
  • As shown in (1) of FIG. 20, in the normal process in which the communication speed between the central monitoring device 20 and the monitoring terminal device 10 has not decreased, the monitoring terminal device 10 transmits first images of the relatively high first resolution to the central monitoring device 20 at timings t0 to t3 set at regular intervals. On the other hand, as shown in (2) and (3), when the communication speed between the central monitoring device 20 and the monitoring terminal device 10 has fallen to or below the predetermined threshold, the monitoring terminal device 10 alternately transmits first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at timings t0 to t3 set at regular intervals. The composition conditions shown in (2) and (3) of this example specify that one or more second images of the second resolution are transmitted after a predetermined interval has elapsed following the transmission of the first image of the first resolution.
  • Under the composition condition shown in (2) of this example, the number of second images transmitted at t1 and t3 is two (640 × 480 pixels × 2), so the amount of information related to communication can be reduced compared to the case shown in (1), where only one first image (1280 × 960 pixels × 1) is transmitted at each timing. Under the composition condition shown in (3) of this example, the number of second images transmitted at t1 and t3 is four (640 × 480 pixels × 4), so the amount of information related to communication is the same as in case (1), where only one first image (1280 × 960 pixels × 1) is transmitted. However, whereas in case (1) two images are transmitted between t0 and t1, in case (3) as many as five images can be transmitted between t0 and t1. That is, a larger number of monitoring images can be provided to the supervisor while the amount of information is maintained.
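The pixel counts behind this comparison can be checked with a few lines of arithmetic; the short Python sketch below only restates the numbers given above.

```python
# Pixel counts for the three transmission patterns of FIG. 20
first = 1280 * 960          # one first-resolution image: 1,228,800 pixels
second = 640 * 480          # one second-resolution image:  307,200 pixels

pattern_1 = first           # (1) one first image per timing
pattern_2 = 2 * second      # (2) two second images per timing:  614,400 pixels
pattern_3 = 4 * second      # (3) four second images per timing: 1,228,800 pixels

print(pattern_2 / pattern_1)   # 0.5 -> communication amount halved versus (1)
print(pattern_3 / pattern_1)   # 1.0 -> same amount as (1), but five images over t0-t1
```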
  • In this manner, when the communication speed is low, each monitoring terminal device 10 mounted on a moving body V transmits to the central monitoring device 20 a monitoring image formed by mixing the high-resolution first image and the low-resolution second image, so that an increase in the amount of information related to communication can be suppressed while the transmission frequency of information is ensured. In other words, when the communication speed is low, the central monitoring device 20 controls the monitoring terminal device 10 so that many (a plurality of) less detailed images can be acquired instead of one detailed image with a large amount of information related to communication, so information can be collected in real time without encouraging congestion of the communication line, which causes the decrease in communication speed.
  • the central monitoring apparatus 20 of this embodiment can generate a monitoring information transmission command for each monitoring direction with the moving body V as a reference.
  • Specifically, the central monitoring device 20 can include, in the monitoring information transmission command, information specifying a monitoring direction (front, rear, right side, left side) with respect to the moving body, and can associate a resolution and a transmission frequency with each monitoring direction.
  • When the central monitoring device 20 stores in advance information associating the monitoring directions with respect to the moving body V with the cameras 11a to 11d installed at predetermined positions of the moving body V and imaging those monitoring directions, information identifying the cameras 11a to 11d can be included in the monitoring information transmission command. Alternatively, the information associating the monitoring directions with the cameras 11a to 11d that image them may be stored on the moving body V side, and the monitoring terminal device 10 may identify the cameras 11a to 11d targeted by the monitoring direction included in the monitoring information transmission command.
  • In this way, the central monitoring device 20 of the present embodiment can create and execute a monitoring information transmission command for each direction around the moving body V. For example, when the supervisor wishes to monitor the traveling direction of the moving body V, the camera 11a installed at the front portion of the moving body V is specified, and the first image and the second image can be generated based on the captured image of the camera 11a.
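A minimal sketch of such a per-direction command is shown below. The text fixes camera 11a to the front; the remaining direction-to-camera assignments, the dictionary keys, and the example values are assumptions made only for illustration.

```python
# Assumed direction-to-camera association held by the central device.
DIRECTION_TO_CAMERA = {
    "front": "11a",
    "rear": "11c",     # assumption
    "right": "11b",    # assumption
    "left": "11d",     # assumption
}

def command_for_direction(direction, resolution, frequency_hz):
    """Build the per-direction part of a monitoring information transmission
    command: the direction (or the camera that images it), the resolution to
    use for that direction, and the transmission frequency."""
    return {
        "direction": direction,
        "camera": DIRECTION_TO_CAMERA[direction],
        "resolution": resolution,
        "frequency_hz": frequency_hz,
    }

# Example: request high-resolution images of the front only.
print(command_for_direction("front", (1280, 960), 1.0))
```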
  • FIG. 21 is a diagram for explaining the transmission state of the monitoring image based on the second composition condition.
  • As shown in (1) of FIG. 21, the same composition condition can be defined for all directions (front: Fr, rear: Rr, right: Right, left: Left), that is, for all the cameras 11a to 11d. In this example, the monitoring terminal device 10 alternately transmits, for all directions (the imaging directions of all the cameras 11a to 11d), first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at timings t0 to t3 set at a constant interval f0.
  • In this embodiment, as shown in FIG. 19C, in step ST40 it is determined whether or not the supervisor has input to the central monitoring device 20 a gaze direction that the supervisor wants to gaze at among the surroundings of the moving body V on which the monitoring terminal device 10 is mounted. If a gaze direction is designated by the supervisor, the process proceeds to step ST36.
  • In step ST36, when a gaze direction has been input, the central monitoring device 20 generates a monitoring information transmission command including a composition condition corresponding to the gaze direction. Since the gaze direction is a direction that the supervisor wants to observe particularly carefully, it is preferable to provide the supervisor with a relatively high resolution image for the gaze direction even when the communication speed decreases. For this reason, the central monitoring device 20 of this embodiment generates a composition condition in which a first image of the relatively high first resolution is generated based on the captured image of the camera 11 that images the gaze direction, a second image of the relatively low second resolution is generated based on the captured images of the cameras 11 that image directions other than the gaze direction, and the first image and the second image having different resolutions are mixed at a predetermined ratio. This composition condition is included in the monitoring information transmission command transmitted to the monitoring terminal device 10.
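A minimal sketch of a gaze-direction-dependent resolution assignment is given below; the function name and the direction strings are assumptions, and the pixel sizes are the ones used in the earlier example.

```python
def gaze_composition_condition(gaze_directions,
                               all_directions=("front", "rear", "right", "left"),
                               first_resolution=(1280, 960),
                               second_resolution=(640, 480)):
    """Assign the relatively high first resolution to the camera(s) imaging
    the gaze direction(s) and the lower second resolution to the others."""
    return {d: (first_resolution if d in gaze_directions else second_resolution)
            for d in all_directions}

# Example: the supervisor gazes to the right of the moving body.
print(gaze_composition_condition({"right"}))
# {'front': (640, 480), 'rear': (640, 480), 'right': (1280, 960), 'left': (640, 480)}
```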
  • FIG. 22 is a diagram for explaining a transmission state of the monitoring image based on the third composition condition.
  • As shown in (1) of FIG. 22, the same composition condition can be defined for all directions (front: Fr, rear: Rr, right: Right, left: Left), that is, for all the cameras 11a to 11d. In the example shown in (1) of FIG. 22, the monitoring terminal device 10 alternately transmits, for all directions (the imaging directions of all the cameras 11a to 11d), first images of the relatively high first resolution and second images of the lower second resolution to the central monitoring device 20 at timings t0 to t3 set at a constant interval f0.
  • In contrast, in the present embodiment, as shown in (2) of FIG. 22, for the gaze direction designated by the supervisor (for example, forward: Fr), the monitoring terminal device 10 generates the first image of the first resolution based on the image captured at timing P1 by the camera 11a that images the front, which is the gaze direction, generates the second images of the second resolution based on the images captured at timing P1 by the cameras 11b, 11c, and 11d that image the rear, right side, and left side, which are directions other than the gaze direction, and transmits the monitoring image obtained by mixing them to the central monitoring device 20 at timing t0.
  • Subsequently, a first image of the first resolution is generated based on the image captured at timing P2 by the camera 11a that images the front, which is the gaze direction, a first image of the first resolution is also generated based on the image captured at timing P2 by the camera 11c that images the rear, which is not the gaze direction, and second images of the second resolution are generated based on the images captured at timing P2 by the cameras 11b and 11d that image the right side and the left side, which are directions other than the gaze direction; the monitoring image in which these are mixed is transmitted to the central monitoring device 20 at timing t0.5.
  • In this way, a first image of the relatively high first resolution is generated for the gaze direction and second images of the relatively low second resolution are generated for the directions other than the gaze direction. Monitoring images of a plurality of directions can thus be obtained evenly, while the weighting of the resolution between the gaze direction and the other directions reduces the amount of information related to communication between the central monitoring device 20 and the monitoring terminal device 10.
  • step ST41 the central monitoring apparatus 20 acquires the vehicle speed detected by the vehicle speed sensor 16 of the passenger vehicle V specified by the supervisor via the telecommunication network 30.
  • step ST42 the central monitoring device 20 determines whether the vehicle speed is equal to or higher than a predetermined threshold value. If the vehicle speed is greater than or equal to the predetermined threshold, the process proceeds to step ST37.
  • step ST37 the central monitoring apparatus 20 generates a knitting condition that increases the proportion of the second image having the relatively low second resolution.
  • When the vehicle speed is equal to or higher than the predetermined threshold value, that is, when the moving body V is traveling fast, the image around the vehicle also changes greatly. Since the video around the vehicle changes from moment to moment, it is preferable to provide a large number of images to the supervisor at short intervals when the vehicle speed is high, even when the communication speed has decreased. For this reason, the central monitoring device 20 of this embodiment generates a composition condition that increases the proportion of the second image having the relatively low second resolution.
  • FIG. 23 is a diagram for explaining a transmission state of the monitoring image based on the fourth composition condition.
  • As shown in (1) of FIG. 23, under the normal composition condition the monitoring terminal device 10 alternately transmits first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at timings t0 to t3 set at regular intervals, so that both are included at the same rate. On the other hand, as shown in (2), according to the composition condition in which the proportion of the second image is increased, the monitoring terminal device 10 transmits the first image of the first resolution at timing t0, transmits the second image of the second resolution twice at timings t1 and t2, and then transmits the first image of the first resolution at timing t3. In (1) the ratio between the first image and the second image is 1:1, whereas in (2) the ratio between the first image and the second image is 1:2.
  • In this way, when the vehicle speed is equal to or higher than the predetermined threshold, the proportion of the relatively low second-resolution second image included in the monitoring image is increased, so the amount of information required for communication between the central monitoring device 20 and the monitoring terminal device 10 is reduced, and the monitoring image around the vehicle, which varies greatly according to the moving speed, can be provided in real time.
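The speed-dependent switch between the 1:1 and 1:2 ratios of FIG. 23 can be sketched as follows; the 60 km/h threshold is an illustrative assumption, as the patent only speaks of a predetermined threshold.

```python
def ratio_for_vehicle_speed(speed_kmh, threshold_kmh=60.0):
    """Return the (first : second) image ratio: 1:1 below the speed threshold,
    1:2 at or above it, as in the fourth composition condition of FIG. 23.
    The 60 km/h threshold value is assumed for illustration."""
    return (1, 1) if speed_kmh < threshold_kmh else (1, 2)

print(ratio_for_vehicle_speed(40.0))   # (1, 1)
print(ratio_for_vehicle_speed(80.0))   # (1, 2)
```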
  • In step ST38, the central monitoring device 20 generates a monitoring information transmission command including the composition conditions generated in steps ST35 to ST37. The monitoring information transmission command including the composition conditions is then transmitted to the vehicle previously identified in step ST32. This monitoring information transmission command is a command for controlling the next generation of monitoring information.
  • If it is determined in step ST34 that the communication speed is not equal to or lower than the predetermined threshold (the frame rate is not equal to or lower than the predetermined threshold, or the time required from the start of transmission to the completion of transmission is not equal to or higher than the predetermined threshold), the monitoring terminal device 10 may continue processing based on the previous monitoring information transmission command, so the processes after step ST35 are not performed and the process returns to step ST30. Further, when the supervisor does not specify a gaze direction in step ST40, the processes after step ST41 are performed. Furthermore, when the vehicle speed is less than the predetermined threshold value in step ST42, the process proceeds to step ST38.
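The decision structure of steps ST34 to ST38 described above can be condensed into the following Python sketch; all dictionary keys and argument names are hypothetical placeholders, and a returned value of None stands for keeping the previous command.

```python
def build_monitoring_command(comm_speed_low, gaze_direction, vehicle_speed_high):
    """Condensed sketch of steps ST34 to ST38 of FIG. 19C: decide the
    composition condition and wrap it in a monitoring information
    transmission command."""
    if not comm_speed_low:                           # ST34 "no" branch
        return None                                  # keep the previous command
    condition = {"mix_resolutions": True,            # ST35: start composition
                 "ratio_first_to_second": (1, 1)}
    if gaze_direction is not None:                   # ST40 -> ST36
        condition["high_resolution_directions"] = [gaze_direction]
    elif vehicle_speed_high:                         # ST41/ST42 -> ST37
        condition["ratio_first_to_second"] = (1, 2)
    return {"send_monitoring_info": True,            # ST38
            "composition_condition": condition}

print(build_monitoring_command(True, "front", False))
```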
  • the monitoring information transmission command is transmitted to the specified vehicle V.
  • The monitoring terminal device 10 that has acquired the monitoring information transmission command generates the first image and the second image in order to obtain a monitoring image according to the composition condition. At this time, the image processing device 12 may be caused to change the resolution of the captured image, or the resolution setting of the camera 11 may be changed so that a captured image of the desired resolution is acquired directly.
  • FIG. 24 shows the time load of each process when the generation of the first image and the second image is executed only by the image processing device 12, and the time load of each process when the generation is distributed between the camera 11 and the image processing device 12. In FIG. 24, the time load of each process is indicated by a rectangular block. As shown in the figure, the camera 11 and the image processing device 12 can execute the processing of four captured images in parallel. Therefore, the time tq2 required to process the four captured images in a distributed manner in the camera 11 and the image processing device 12 to generate the first images or the second images is shorter than the time tq1 required when the image processing device 12 alone processes the four captured images to generate the first images or the second images. Because the processing time for generating the first image or the second image is shortened in this way, the time required to provide the monitoring information including the monitoring image to the central monitoring device 20 can also be shortened. As a result, a monitoring image can be provided to the supervisor in real time.
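The relationship between tq1 and tq2 can be illustrated with a simple timing model; the per-image processing time and the modelling of the camera and the image processing device as two equal workers are assumptions for illustration only.

```python
import math

def serial_time(per_image_seconds, n_images=4):
    """tq1: the image processing device 12 alone converts the images one by one."""
    return per_image_seconds * n_images

def distributed_time(per_image_seconds, n_images=4, n_workers=2):
    """tq2: the camera 11 and the image processing device 12 share the work,
    modelled here as n_workers equal workers running in parallel."""
    return per_image_seconds * math.ceil(n_images / n_workers)

# With 4 captured images shared between two workers, tq2 is half of tq1.
print(serial_time(0.1), distributed_time(0.1))   # roughly 0.4 and 0.2
```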
  • As described above, when the communication speed is low, the monitoring system 1 causes the monitoring terminal device 10 mounted on each moving body to transmit to the central monitoring device 20 a monitoring image formed by mixing first images of the relatively high first resolution and second images of the relatively low second resolution, so that an increase in the amount of communication data can be suppressed while the transmission frequency of information is ensured. That is, when the communication speed is low, the central monitoring device 20 controls the monitoring terminal device 10 so that it can acquire rough information with a small amount of communication data a plurality of times instead of acquiring detailed information with a large amount of communication data once, so information can be collected in real time without encouraging the congestion of the communication line that causes the decrease in communication speed.
  • Since the central monitoring device 20 of the monitoring system 1 according to the present embodiment generates a monitoring information transmission command for each monitoring direction with the moving body V as a reference, for example, when the supervisor wishes to monitor the traveling direction of the moving body V, the camera 11a installed in the front part of the moving body V can be specified and the first image and the second image can be generated based on the captured image of the camera 11a. Thereby, the amount of information related to communication between the central monitoring device 20 and the monitoring terminal device 10 can be reduced by adjusting the resolution while a monitoring image according to the intention of the supervisor is acquired.
  • When a specific gaze direction that the supervisor wants to gaze at around the moving body V is input, the central monitoring device 20 of the monitoring system 1 generates a composition condition in which a first image of the relatively high first resolution is generated based on the captured image of the camera 11 that images the gaze direction and second images of the relatively low second resolution are generated based on the captured images of the cameras 11 that image directions other than the gaze direction. Monitoring images of a plurality of directions can therefore be obtained evenly, and by weighting the resolution of the gaze direction relative to the other directions, the amount of information related to communication between the central monitoring device 20 and the monitoring terminal device 10 can be reduced.
  • Since the central monitoring device 20 of the monitoring system 1 increases the proportion of the relatively low second-resolution second images included in the monitoring image when the vehicle speed of the moving body V is equal to or higher than a predetermined threshold, weighting the proportion of the second image according to whether the vehicle speed is at or above the threshold or below it reduces the amount of information related to communication between the central monitoring device 20 and the monitoring terminal device 10, while the monitoring image around the vehicle, which varies greatly according to the moving speed, can be provided in real time.
  • Since the monitoring terminal device 10 of the monitoring system 1 is provided with cameras 11 each having a resolution setting function for imaging the surroundings of the vehicle at a predetermined resolution, the surroundings of the vehicle can be imaged at the first resolution or the second resolution according to the composition condition of the monitoring information transmission command, and a monitoring image in which the first image and the second image are mixed under the predetermined composition condition can be generated. Further, since the processing of four captured images can be executed in parallel by the camera 11 and the image processing device 12, the processing time for generating the first image or the second image is shortened, and the time required for providing the monitoring information including the monitoring image to the central monitoring device 20 can also be shortened. As a result, a monitoring image can be provided to the supervisor in real time.
  • the monitoring terminal device 10 of the monitoring system 1 can cause the image processing device 12 to generate the first image having the first resolution and the second image having the second resolution.
  • In this case as well, the actions and effects (1) to (5) described above can be achieved.
  • When the communication speed is higher than the predetermined threshold, the monitoring terminal device 10 of the monitoring system 1 generates a monitoring image composed of first images of the first resolution and transmits it to the central monitoring device 20, so a monitoring image with a relatively high resolution is transmitted and the supervisor can monitor the city based on detailed monitoring images.
  • In the above description, the monitoring system 1 including the monitoring terminal device 10 and the central monitoring device 20 has been described as an example of the monitoring system including the monitoring terminal device and the central monitoring device according to the present invention, but the present invention is not limited to this. As an example of the central monitoring device including the communication speed calculation unit, the command generation unit, and the command transmission unit according to the present invention, the central monitoring device 20 including the control device 21 having a communication speed calculation function, a command generation function, and a command transmission function, the image processing device 22, the communication device 23, the display 24, and the input device 25 has been described, but the present invention is not limited to this.
  • As an example of the monitoring terminal device according to the present invention, the monitoring terminal device 10 including the control device 14 having a monitoring image generation function and a monitoring information transmission function, the cameras 11a to 11d, the image processing device 12, the communication device 13, the position detection device 15, and the vehicle speed sensor 16 has been described, but the present invention is not limited thereto.
  • In the above embodiment, the position information of the passenger car V and the monitoring images from the cameras 11a to 11d are acquired, but a monitoring image from a fixed camera 11f installed in the city as shown in the figure may also be acquired. As the passenger car V that acquires the position information and the monitoring image, it is desirable to use a taxi V1 or a route bus V1 that travels randomly and constantly through a predetermined area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Provided is a monitoring system comprising a central monitoring device (20), cameras (11) that pick up images in different directions in the vicinity of each moving body (V), and monitoring terminal devices (10). The central monitoring device (20) comprises: a function of calculating the communication rate with the monitoring terminal devices (10); a function of generating a monitoring information transmission instruction; and a function of transmitting the generated monitoring information transmission instruction to a selected monitoring terminal device. If the communication rate is less than a prescribed threshold value, the monitoring information transmission instruction causes the monitoring terminal device to generate a monitoring image in which a first image of a relatively high first resolution and a second image of a resolution lower than the first resolution are mixed under prescribed composition conditions, and to transmit monitoring information containing this monitoring image to the central monitoring device (20). The monitoring terminal devices (10) are equipped with: a function of generating a monitoring image in accordance with the prescribed composition conditions contained in the monitoring information transmission instruction, and a function of transmitting the monitoring information including the monitoring image to the central monitoring device (20) in accordance with the monitoring information transmission instruction.

Description

Monitoring system and monitoring method
The present invention relates to a monitoring system and a monitoring method.
This application claims priority based on Japanese Patent Application No. 2012-098545 filed on April 24, 2012. For designated countries where incorporation by reference of documents is permitted, the contents described in the above application are incorporated into the present application by reference and made a part of the description of the present application.
A security device is known that detects the occurrence of an abnormality by installing a plurality of security camera devices in shopping streets, at store entrances, at home entrances, and elsewhere in the city, and monitoring the surrounding images captured by those security camera devices (Patent Document 1).
JP 2011-215767 A
However, security camera devices installed in the city continue to transmit captured images of uniform resolution to the supervisor even when the communication speed has decreased, so there is a problem that the communication line becomes further congested and the transmission of information is delayed.
An object of the present invention is to provide a monitoring system that, when the communication speed is equal to or lower than a predetermined value, reduces the amount of communication data and suppresses delay in the transmission of information by adjusting the amount of information of the monitoring image to be transmitted.
The present invention achieves the above object by causing the monitoring terminal device to execute, when the communication speed is equal to or lower than a predetermined value, a monitoring information transmission command for generating and transmitting a monitoring image in which a first image of a relatively high first resolution and a second image of a second resolution lower than the first resolution are mixed under a predetermined composition condition.
According to the present invention, when the communication speed has decreased, the monitoring terminal device mounted on each moving body transmits to the central monitoring device a monitoring image composed by mixing high-resolution information and low-resolution information, so that an increase in the amount of communication data can be suppressed while the transmission frequency of information is ensured. That is, when the communication speed has decreased, the central monitoring device controls the monitoring terminal device so that it can acquire rough information with a small amount of communication data a plurality of times instead of acquiring detailed information with a large amount of communication data once, so information can be collected in real time without encouraging the congestion of the communication line that causes the decrease in communication speed.
A schematic diagram showing a monitoring system according to an embodiment of the present invention.
A block diagram showing the monitoring system of FIG. 1.
A perspective view showing an arrangement example of the cameras in the monitoring system of FIG. 2.
A plan view showing the imaging ranges of the cameras in the monitoring system of FIG. 2.
A diagram showing an example of an image captured by the front camera.
A diagram showing an example of an image captured by the right side camera.
A diagram showing an example of an image captured by the rear camera.
A diagram showing an example of an image captured by the left side camera.
A diagram showing an example of a monitoring image generated based on a plurality of images.
A diagram for explaining the distortion correction processing of the monitoring image.
A schematic diagram showing an example of a projection model.
A schematic cross-sectional view of the projection model shown in FIG. 8, taken along the xy plane.
First to eighth diagrams each showing an example of a display image shown on the display of the central monitoring device.
A flowchart showing the main control contents on the monitoring terminal device side of the monitoring system of FIG. 1.
Flowcharts showing the main control contents on the central monitoring device side of the monitoring system of FIG. 1 (three figures).
A diagram for explaining the transmission state of the monitoring image based on the first composition condition.
A diagram for explaining the transmission state of the monitoring image based on the second composition condition.
A diagram for explaining the transmission state of the monitoring image based on the third composition condition.
A diagram for explaining the transmission state of the monitoring image based on the fourth composition condition.
A diagram for explaining aspects of the monitoring information generation processing.
In the embodiment described below, the monitoring system according to the present invention is embodied as a monitoring system 1 in which a supervisor, such as an authority like a police station or a fire department or a contracted security company, centrally monitors the security of a city using images captured by cameras mounted on a plurality of moving bodies.
Each of the monitoring terminal devices 10 mounted on the plurality of moving bodies acquires its position information, a monitoring image of the surroundings of the moving body, and time information at predetermined timings, and transmits monitoring information including the position information, the monitoring image, and the time information to the central monitoring device 20 installed on the supervisor side via wireless communication. The central monitoring device 20 accumulates the monitoring information, which includes at least the monitoring image and the position information acquired from the monitoring terminal devices 10, superimposes the position information of the moving bodies on map information via a display or the like, and displays the monitoring image captured at each moving body or each position together with the time information as necessary. Therefore, as shown in FIG. 1, the monitoring system 1 of this example includes the monitoring terminal devices 10, which are mounted on the moving bodies V and transmit monitoring information such as position information and monitoring images, and the central monitoring device 20, which acquires and processes the monitoring information via the telecommunication network 30.
In particular, when the communication speed of the information received from the monitoring terminal device 10 is equal to or lower than a predetermined threshold, the monitoring system 1 according to the present embodiment transmits, via wireless communication, a monitoring information transmission command for generating a monitoring image mixed under a predetermined composition condition defined from the viewpoint of reducing the amount of communication data. On the moving body V side, a monitoring image according to the monitoring information transmission command acquired via wireless communication is generated, and monitoring information including the monitoring image is transmitted to the supervisor side via wireless communication. Thereby, the supervisor side adjusts the resolution of the image information according to the state of decrease in the communication speed and acquires monitoring information including a monitoring image with a reduced amount of communication data.
The supervisor can select the moving body V for which a monitoring image is to be generated based on the positions of the moving bodies superimposed on the map information, a movement schedule of the moving bodies V acquired in advance, and the like. The supervisor can select the moving body V by pointing at the target moving body V with a pointer such as a cursor or a touch pen, or by touching a touch panel display screen with a finger. The supervisor also specifies, with the position and traveling direction of the moving body V as a reference, the direction to be gazed at according to the location where an accident has occurred, a location designated by a report, a location to be monitored, or another point of interest. The supervisor inputs to the central monitoring device 20 information specifying the selected moving body V, information specifying one or more directions to be gazed at, and changes of the gaze direction. The supervisor can input the gaze direction by discretely designating the start point and end point, or the start point, intermediate point, and end point, of the range to be gazed at with a pointer such as a cursor or a touch pen, or by touching the touch panel display screen with a finger.
The moving body V on which the monitoring terminal device 10 is mounted is not particularly limited as long as it travels in the target monitoring area, and includes moving bodies V such as automobiles, motorcycles, industrial vehicles, and trams. As shown in FIG. 1, automobiles include private automobiles V2 and emergency automobiles V3, and in particular taxis and route buses V1 that travel randomly and constantly in a predetermined area are preferably included. FIG. 1 illustrates a taxi V1, a private automobile V2, and an emergency automobile V3 such as a police car, a fire engine, or an ambulance; these are collectively referred to as the moving body V or the passenger car V.
FIG. 2 is a block diagram showing a specific configuration of the central monitoring device 20 and the monitoring terminal device 10.
The monitoring terminal device 10 and the central monitoring device 20 can communicate via the telecommunication network 30. The communication device 13 of the monitoring terminal device 10 is communication means capable of wireless communication, and exchanges information with the communication device 23 of the central monitoring device 20 via the telecommunication network 30. When the telecommunication network 30 is a commercial telephone network, general-purpose mobile phone communication devices can be used, and when the telecommunication network 30 is a telecommunication network dedicated to the monitoring system 1 of this example, dedicated communication devices 23 and 13 can be used. Instead of the telecommunication network 30, a wireless LAN, WiFi (registered trademark), WiMAX (registered trademark), Bluetooth (registered trademark), a dedicated wireless line, or the like can also be used.
In order to monitor the city using the images captured by the cameras of the moving bodies V, the central monitoring device 20 has an information input function for inputting the position information and the monitoring images transmitted from the monitoring terminal devices 10 mounted on the moving bodies V into a database, and a display control function for superimposing the received position information on the map information read from a map database on the display 24 and displaying the received monitoring images on the display 24.
Furthermore, the central monitoring device 20 of this embodiment has a function with which the monitor, referring to the positions of the moving bodies V on the presented map information, selects a moving body V from which a monitoring image is to be acquired, and which generates a monitoring information transmission command for the monitoring terminal device 10 of the selected moving body V. Specifically, the central monitoring device 20 of this embodiment includes a communication speed calculation function that calculates the communication speed of the information transmitted from the monitoring terminal device 10; a command generation function that, when the calculated communication speed is equal to or lower than a predetermined threshold, generates a monitoring information transmission command causing the monitoring terminal device 10 to generate, as the information to be transmitted, a monitoring image in which a first image of a relatively high first resolution and a second image of a second resolution lower than the first resolution are mixed under a predetermined composition condition, and to transmit monitoring information including the generated monitoring image to the central monitoring device 20; and a command transmission function that transmits the generated monitoring information transmission command to the selected monitoring terminal device 10.
 そのため、中央監視装置20は、中央制御装置21、画像処理装置22、通信装置23、ディスプレイ24及び入力装置25を備える。なお、本実施形態の中央監視装置20は、監視情報を蓄積するデータベースを中央監視装置20の内部に有するが、アクセス可能であれば中央監視装置20の外部に設けることもできる。 Therefore, the central monitoring device 20 includes a central control device 21, an image processing device 22, a communication device 23, a display 24, and an input device 25. The central monitoring device 20 according to the present embodiment has a database for storing monitoring information inside the central monitoring device 20, but may be provided outside the central monitoring device 20 as long as it is accessible.
The central control device 21 of the central monitoring device 20 is composed of a CPU, a ROM, and a RAM, and controls the image processing device 22, the communication device 23, and the display 24 so as to receive the position information, monitoring image, and time information transmitted from the monitoring terminal device 10, apply image processing as necessary, and display the result on the display 24.
The central control device 21 calculates the communication speed at which communication with a given moving body V is performed. The communication speed in this embodiment is the data transfer speed between the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10. The communication speed of this embodiment can also be judged, on the central monitoring device 20 side, from the frame rate of the monitoring images received from the monitoring terminal device 10. The frame rate is the number of images processed (rewritten) per unit time in image display or moving-image playback. If the communication speed between the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10 is high, the frame rate for processing the monitoring images received from the monitoring terminal device 10 on the central monitoring device 20 side is high; if the communication speed between the two is low, that frame rate is low. The communication speed can therefore be evaluated from the frame rate of the monitoring images from the monitoring terminal device 10.
 本明細書における「中央監視装置20と監視端末装置10とが情報の授受を行う際の通信速度」の用語は、「中央監視装置20における監視画像の処理に関するフレームレート」を含む。つまり、本実施形態においては、通信速度が所定閾値以下であるか否かは、中央監視装置20の監視画像の処理に関するフレームレートが所定閾値以下であるか否かによって判断することができる。 In this specification, the term “communication speed when the central monitoring device 20 and the monitoring terminal device 10 exchange information” includes “a frame rate related to processing of a monitoring image in the central monitoring device 20”. That is, in this embodiment, whether or not the communication speed is equal to or lower than the predetermined threshold can be determined based on whether or not the frame rate related to the monitoring image processing of the central monitoring device 20 is equal to or lower than the predetermined threshold.
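As an illustration of how such a frame-rate-based judgment might be implemented, the following is a minimal sketch in Python; the window length, the threshold value, and the class and function names are assumptions for illustration only, since this description does not specify them.

from collections import deque
import time

FRAME_RATE_THRESHOLD_FPS = 10.0   # assumed value; the embodiment leaves the threshold open

class FrameRateMonitor:
    """Tracks the monitoring-image frame rate observed on the central monitoring device 20 side."""

    def __init__(self, window_seconds=5.0):
        self.window = window_seconds
        self.timestamps = deque()

    def on_frame_received(self, now=None):
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # discard frames that fall outside the observation window
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()

    def frame_rate(self):
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / span if span > 0 else 0.0

    def communication_is_slow(self):
        # "communication speed at or below the threshold" is read here as "frame rate at or below the threshold"
        return self.frame_rate() <= FRAME_RATE_THRESHOLD_FPS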
The communication speed can also be judged from the time T from the timing at which the communication device 13 of the monitoring terminal device 10 transmits data to the communication device 23 of the central monitoring device 20 until the timing at which a flag indicating completion of data reception is received from the communication device 23 of the central monitoring device 20. If the communication speed between the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10 is high, the time T from transmitting the information until receiving the reception notification is short; if the communication speed between the two is low, the time T is long. The communication speed can therefore be evaluated from the time T required for the monitoring terminal device 10 to complete the transmission of information.
Accordingly, in this specification the term "communication speed at which the central monitoring device 20 and the monitoring terminal device 10 exchange information" also covers "the time required for the monitoring terminal device 10 to transmit information to the central monitoring device 20". That is, in this embodiment, whether or not the communication speed is equal to or lower than the predetermined threshold can be judged by whether or not the time from when the monitoring terminal device 10 transmits information to the central monitoring device 20 until it receives a reception-complete signal from the central monitoring device 20 is equal to or longer than a predetermined threshold.
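A corresponding sketch for the time-T criterion is given below; send_data and wait_for_ack stand in for the (unspecified) interface of the communication device 13, and the threshold value is an assumption.

import time

TIME_T_THRESHOLD_S = 2.0   # assumed threshold for the send-to-acknowledgement time T

def transmission_is_slow(send_data, wait_for_ack, payload):
    """Measure T from sending data to receiving the reception-complete flag, then compare with the threshold."""
    start = time.monotonic()
    send_data(payload)        # monitoring terminal device 10 -> central monitoring device 20
    wait_for_ack()            # blocks until the reception-complete flag arrives
    elapsed_t = time.monotonic() - start
    return elapsed_t >= TIME_T_THRESHOLD_S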
 通信速度に関する所定閾値は、中央監視装置20の通信装置23及び監視端末装置10の通信装置13の性能、監視端末装置10の所在地周囲の通信環境及び中央監視装置20の所在地周囲の通信環境に応じて、適宜に設定することができる。なお、フレームレートを評価するための閾値は、通信速度が低下しているか否かを判断するために閾値として、通信速度を評価する閾値と技術的な意義は同じであるが、フレームレートと通信速度とは異なる意味の値であるため、フレームレートを評価するための閾値は、通信速度を評価するための閾値とは別に、独立した判断により設定する。同様の理由から、監視端末装置10における情報の送信から送信完了確認までの時間を評価するための閾値は、通信速度を評価するための閾値とは別に、独立した判断により設定する。 The predetermined threshold regarding the communication speed depends on the performance of the communication device 23 of the central monitoring device 20 and the communication device 13 of the monitoring terminal device 10, the communication environment around the location of the monitoring terminal device 10, and the communication environment around the location of the central monitoring device 20. And can be set appropriately. Note that the threshold for evaluating the frame rate is a threshold for determining whether or not the communication speed is decreasing, and the technical significance is the same as the threshold for evaluating the communication speed. Since the value has a meaning different from the speed, the threshold for evaluating the frame rate is set by independent determination separately from the threshold for evaluating the communication speed. For the same reason, the threshold for evaluating the time from the information transmission to the transmission completion confirmation in the monitoring terminal device 10 is set by independent determination separately from the threshold for evaluating the communication speed.
When the communication speed at which the central monitoring device 20 and a monitoring terminal device 10 exchange information falls to or below the predetermined threshold, the central control device 21 selects that moving body V, moving bodies V existing in its vicinity, or moving bodies V belonging to a predefined area to which that moving body V belongs (such as an area in which communication is performed via the same base station), generates a monitoring information transmission command including an instruction for the monitoring terminal devices 10 of the selected moving bodies V to generate monitoring images under the predetermined composition condition and to transmit the generated monitoring images to the central monitoring device 20, and transmits this command to each of the selected monitoring terminal devices 10. In this specification, the case in which the communication speed at which the central monitoring device 20 and the monitoring terminal device 10 exchange information becomes equal to or lower than the predetermined threshold includes the case in which the frame rate for processing the monitoring images in the central monitoring device 20 becomes equal to or lower than a predetermined value, and the case in which the time from transmission of information to confirmation of transmission completion in the monitoring terminal device 10 becomes equal to or longer than a predetermined value.
For this purpose, the monitoring information transmission command includes a communication ID or other identifier for identifying the monitoring terminal device 10 of each selected moving body V. The selection of the moving bodies V can be performed on the basis of a selection command for a moving body V entered from the input device 25. The moving bodies V whose communication speed is equal to or lower than the predetermined threshold, that is, the moving bodies V to which the monitoring information transmission command is to be sent, may be judged individually for each moving body V on the basis of the actual communication speed with the central monitoring device 20. Alternatively, since the communication environment tends to depend on the conditions at each time and place, it may be judged that other moving bodies V near a moving body V whose communication speed has dropped are also likely to suffer a drop in communication speed, and a plurality of moving bodies V may be selected together on the basis of their position information. Furthermore, all the moving bodies V existing in the communication area covered by a base station in which the drop in communication speed has occurred may be selected together. In normal operation, one or a plurality of moving bodies V may also be selected by automatically extracting, for example, moving bodies V existing in the area near an accident site entered from outside, moving bodies V existing in the area near a moving body V that reported the occurrence of an accident, or moving bodies V existing in the area near a priority monitoring point defined in advance as a point to be monitored intensively.
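The selection logic described above might be sketched as follows; the data structures, the neighbourhood test, and the content of the composition condition are illustrative assumptions, and the actual content of the command is described later in this specification.

from dataclasses import dataclass

@dataclass
class Vehicle:
    terminal_id: str            # communication ID of the monitoring terminal device 10
    position: tuple             # (latitude, longitude) from the position detection device 15
    base_station_id: str

@dataclass
class MonitoringInfoCommand:
    terminal_id: str            # identifies the terminal the command is addressed to
    composition_condition: dict # e.g. how first- and second-resolution regions are mixed

def select_targets(slow_vehicle, vehicles, is_nearby):
    """Select the slow vehicle itself, vehicles near it, and vehicles served by the same base station."""
    return [v for v in vehicles
            if v.terminal_id == slow_vehicle.terminal_id
            or v.base_station_id == slow_vehicle.base_station_id
            or is_nearby(v.position, slow_vehicle.position)]

def build_commands(targets, composition_condition):
    return [MonitoringInfoCommand(v.terminal_id, composition_condition) for v in targets]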
The monitoring terminal device 10 mounted on the moving body V selected by the monitor generates a monitoring image in accordance with the composition condition included in the monitoring information transmission command, using the cameras 1a to 1d provided at different positions on the moving body V, and transmits monitoring information including this monitoring image to the central monitoring device 20. In this way, the central monitoring device 20 and the monitoring terminal devices 10 of the moving bodies V cooperate to provide the monitor with monitoring information for watching over the town.
 なお、監視情報送信命令の内容については、後に詳述する。 The details of the monitoring information transmission command will be described later.
The image processing device 22 has a map database, displays map information from the map database on the display 24, and superimposes on that map information the position information detected by the position detection device 15 of the monitoring terminal device 10. It also performs image processing for displaying on the display 24 the monitoring images captured by the cameras 11 of the monitoring terminal device 10 and processed by the image processing device 12.
 ディスプレイ24は、たとえば一つの画面上に2つ以上のウィンド画面が表示できる大きさの液晶表示装置又は2つ以上のウィンド画面をそれぞれ表示する2つ以上の液晶表示装置により構成することができる。そして、一のウィンド画面には、地図情報上に各移動体Vの位置情報を重ね合わせた画面を表示し(図1参照)、他方のウィンド画面には、移動体Vのカメラ11で撮像された撮像画像に基づいて生成された監視画像を表示する。 The display 24 can be composed of, for example, a liquid crystal display device having a size capable of displaying two or more window screens on one screen, or two or more liquid crystal display devices each displaying two or more window screens. One window screen displays a screen in which the position information of each moving body V is superimposed on the map information (see FIG. 1), and the other window screen is captured by the camera 11 of the moving body V. A monitoring image generated based on the captured image is displayed.
 入力装置25は、キーボード、マウス、タッチパネルなどの入力デバイスであり、所望の移動体Vを特定してその移動体Vに対して監視情報送信命令を出力したり、ディスプレイ24に表示される各種情報の処理指令を入力したりする場合に用いられる。 The input device 25 is an input device such as a keyboard, a mouse, or a touch panel. The input device 25 specifies a desired moving object V, outputs a monitoring information transmission command to the moving object V, and displays various information displayed on the display 24. This is used when inputting a processing command.
As described above, the communication device 23 is communication means capable of wireless communication, and exchanges information with the communication device 13 of the monitoring terminal device 10 via the telecommunication network 30. When the telecommunication network 30 is a commercial telephone network, general-purpose mobile telephone communication devices can be used; when the telecommunication network 30 is a telecommunication network dedicated to the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used.
Next, the monitoring terminal device 10 will be described. The monitoring terminal device 10 is a terminal device mounted on each of the plurality of moving bodies V, and has a position detection function for detecting the position information of each of these moving bodies V, a monitoring image generation function for generating a monitoring image on the basis of captured images of the surroundings of the moving body V taken by the cameras 11 mounted on each moving body V, and a communication function for transmitting the position information, monitoring image, and time information acquired at predetermined timings to the central monitoring device 20 and for accepting commands from the central monitoring device 20. To this end, each moving body V includes the plurality of cameras 11a to 11d, the image processing device 12, the communication device 13, the control device 14, and the position detection device 15. Since the time information mainly serves for post-event analysis of an incident, it may be omitted.
 それぞれの移動体Vに搭載された複数のカメラ11は、CCDカメラなどで構成され、移動体Vの周囲の各所定方向乃至所定領域を撮像し、その撮像信号を画像処理装置12へ出力する。本実施形態のカメラ11は、解像度の設定が可能であり、所定の解像度で車両Vの周囲を撮像することができる。画像処理装置12は、カメラ11からの撮像信号を読み出し、監視画像生成するための画像処理を実行する。この画像処理の詳細は後述する。 The plurality of cameras 11 mounted on the respective moving bodies V are constituted by CCD cameras or the like, take images of respective predetermined directions or predetermined areas around the moving body V, and output the image pickup signals to the image processing device 12. The camera 11 of this embodiment can set the resolution, and can image the surroundings of the vehicle V with a predetermined resolution. The image processing device 12 reads an imaging signal from the camera 11 and executes image processing for generating a monitoring image. Details of this image processing will be described later.
 位置検出装置15は、GPS装置及びその補正装置などで構成され、当該移動体Vの現在位置を検出し、制御装置14へ出力する。 The position detection device 15 includes a GPS device and its correction device, and detects the current position of the moving object V and outputs it to the control device 14.
The control device 14 is composed of a CPU, a ROM, and a RAM. It accepts the monitoring information transmission command from the central monitoring device 20 received via the telecommunication network 30 and the communication device 13, controls the cameras 11, the image processing device 12, the communication device 13, and the position detection device 15, and outputs to the central monitoring device 20, via the communication device 13 and the telecommunication network 30, monitoring information including the monitoring image generated by the image processing device 12, the position information of the moving body V detected by the position detection device 15, and time information from a clock built into the CPU.
In generating the monitoring image, the control device 14 executes, in accordance with the monitoring information transmission command acquired from the central monitoring device 20 via the communication device 13, a monitoring image generation function that causes the image processing device 12 to generate, on the basis of the predetermined composition information included in the command, a monitoring image including a first image of a relatively high first resolution and a second image of a second resolution lower than the first resolution, and a monitoring information transmission function that causes monitoring information including the generated monitoring image to be transmitted to the central monitoring device 20 via the communication device 13. In this embodiment of the present invention, a monitoring image including at least a first image of the relatively high first resolution and a second image of the relatively low second resolution is described, but the generated monitoring image may further include images of other resolutions. Before receiving a monitoring information transmission command from the central monitoring device 20, the control device 14 causes the image processing device 12 to generate a monitoring image of a preset resolution and transmits it to the central monitoring device 20 via the communication device 13.
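On the terminal side, the switch between the preset single-resolution image and the commanded mixed-resolution image could look roughly like the sketch below; compose_image stands in for the image processing device 12, and the resolution values follow the example sizes given later in this description.

FIRST_RESOLUTION = (1280, 960)    # relatively high resolution (width, height)
SECOND_RESOLUTION = (640, 480)    # relatively low resolution

def make_monitoring_info(frames, command, position, timestamp, compose_image):
    """Build the monitoring information to be sent to the central monitoring device 20."""
    if command is None:
        # before any monitoring information transmission command: preset resolution only
        image = compose_image(frames, resolution=SECOND_RESOLUTION)
    else:
        # after a command: mix first- and second-resolution parts per the composition condition
        image = compose_image(frames,
                              high_resolution=FIRST_RESOLUTION,
                              low_resolution=SECOND_RESOLUTION,
                              condition=command.composition_condition)
    return {"monitoring_image": image, "position": position, "time": timestamp}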
 次にカメラ11a~11dの装着位置と撮像範囲について説明する。ここでは移動体Vとして乗用車Vを例に挙げて説明する。カメラ11a~11dはCCD等の撮像素子を用いて構成され、4つのカメラ11a~11dは乗用車Vの外部の異なる位置にそれぞれ設置され、移動体V周囲の前後左右の4方向をそれぞれ撮影する。 Next, the mounting positions and imaging ranges of the cameras 11a to 11d will be described. Here, a passenger car V will be described as an example of the moving body V. The cameras 11a to 11d are configured by using an image sensor such as a CCD, and the four cameras 11a to 11d are installed at different positions outside the passenger car V, respectively, and respectively photograph the four directions of front, rear, left and right around the moving body V.
For example, as shown in FIG. 3, the camera 11a installed at a predetermined position at the front of the passenger car V, such as the front grille, captures objects and the road surface existing in the area SP1 in front of the passenger car V and in the space ahead of it (front view). The camera 11b installed at a predetermined position on the left side of the passenger car V, such as the left side mirror, captures objects and the road surface existing in the area SP2 on the left side of the passenger car V and in the surrounding space (left side view). The camera 11c installed at a predetermined position at the rear of the passenger car V, such as the rear finisher or the roof spoiler, captures objects and the road surface existing in the area SP3 behind the passenger car V and in the space behind it (rear view). The camera 11d installed at a predetermined position on the right side of the passenger car V, such as the right side mirror, captures objects and the road surface existing in the area SP4 on the right side of the passenger car V and in the surrounding space (right side view).
FIG. 4 is a view of the arrangement of the cameras 11a to 11d seen from above the passenger car V. As shown in the figure, the camera 11a capturing the area SP1, the camera 11b capturing the area SP2, the camera 11c capturing the area SP3, and the camera 11d capturing the area SP4 are installed along the outer periphery VE of the body of the passenger car V in a counterclockwise or clockwise order. That is, when following the outer periphery VE of the body of the passenger car V counterclockwise, in the direction of the arrow C in the figure, the camera 11b is installed to the left of the camera 11a, the camera 11c to the left of the camera 11b, the camera 11d to the left of the camera 11c, and the camera 11a to the left of the camera 11d. Conversely, when following the outer periphery VE of the body of the passenger car V in the direction opposite to the arrow C (clockwise), the camera 11d is installed to the right of the camera 11a, the camera 11c to the right of the camera 11d, the camera 11b to the right of the camera 11c, and the camera 11a to the right of the camera 11b.
FIG. 5A shows an example of an image GSP1 in which the front camera 11a captures the area SP1, FIG. 5B shows an example of an image GSP2 in which the left-side camera 11b captures the area SP2, FIG. 5C shows an example of an image GSP3 in which the rear camera 11c captures the area SP3, and FIG. 5D shows an example of an image GSP4 in which the right-side camera 11d captures the area SP4. Incidentally, the size of each image in this embodiment is 480 pixels high by 640 pixels wide, or 960 pixels high by 1280 pixels wide. In this embodiment, an image of the relatively high first resolution, 960 pixels high by 1280 pixels wide, is used as the first image, and an image of the relatively low second resolution, 480 pixels high by 640 pixels wide, is used as the second image. The image size is not particularly limited, and any pair of relatively different resolutions may be used as long as a typical terminal device can play back moving images at those sizes.
 なお、カメラ11の配置数及び配置位置は、乗用車Vの大きさ、形状、検出領域の設定手法等に応じて適宜に決定することができる。上述した複数のカメラ11は、それぞれの配置に応じた識別子が付されており、制御装置14は、各識別子に基づいて各カメラ11のそれぞれを識別することができる。また、制御装置14は、指令信号に識別子を付することにより、特定のカメラ11に撮像命令その他の命令を送信することができる。 It should be noted that the number of cameras 11 and the positions of the cameras 11 can be appropriately determined according to the size, shape, detection area setting method, etc. of the passenger car V. The plurality of cameras 11 described above are assigned identifiers according to their arrangement, and the control device 14 can identify each of the cameras 11 based on each identifier. Further, the control device 14 can transmit an imaging command and other commands to the specific camera 11 by attaching an identifier to the command signal.
The control device 14 controls the image processing device 12 to acquire the imaging signals captured by each camera 11, and the image processing device 12 processes the imaging signals from the cameras 11 and converts them into the monitoring images shown in FIGS. 5A to 5D. The control device 14 then generates a monitoring image on the basis of the four monitoring images shown in FIGS. 5A to 5D (monitoring image generation function), associates with this monitoring image mapping information for projecting it onto the projection planes set on the side surfaces of a columnar projection model (mapping information addition function), and outputs the result to the central monitoring device 20. The monitoring image generation function and the mapping information addition function are described in detail below.
 なお、乗用車Vの周囲を撮像した4つの監視画像に基づいて監視画像を生成し、これにマッピング情報を関連付ける処理は、本例のように監視端末装置10で実行するほか、中央監視装置20で実行することもできる。この場合には、乗用車Vの周囲を撮像した4つの監視画像を監視端末装置10から中央監視装置20へそのまま送信し、これを中央監視装置20の画像処理装置22及び中央制御装置21にて監視画像を生成するとともにマッピング情報を関連付け、投影変換すればよい。 Note that the monitoring image is generated on the basis of the four monitoring images obtained by imaging the periphery of the passenger car V, and the process of associating the mapping information with the monitoring image is executed by the monitoring terminal device 10 as in this example, and also by the central monitoring device 20. It can also be executed. In this case, four monitoring images obtained by imaging the periphery of the passenger car V are transmitted as they are from the monitoring terminal device 10 to the central monitoring device 20, and are monitored by the image processing device 22 and the central control device 21 of the central monitoring device 20. It is only necessary to generate an image, associate mapping information, and perform projection conversion.
First, the monitoring image generation function will be described. The control device 14 of the monitoring terminal device 10 of this embodiment controls the image processing device 12 to acquire the imaging signals of the cameras 11a to 11d, and generates a single monitoring image in which the images from the cameras 11a to 11d, installed clockwise or counterclockwise along the outer periphery of the body of the passenger car V, are arranged in the installation order of these cameras.
As described above, in this embodiment the four cameras 11a to 11d are installed along the outer periphery VE of the body of the passenger car V counterclockwise in the order of the cameras 11a, 11b, 11c, and 11d. The control device 14 therefore joins the four images captured by the cameras 11a to 11d horizontally into a single unit according to the installation order of these cameras (camera 11a → 11b → 11c → 11d) to generate one monitoring image. In the monitoring image of this embodiment, each image is arranged so that the ground contact surface (road surface) of the passenger car V forms its lower edge, and the images are connected to one another along their sides in the height direction (vertical direction) relative to the road surface.
FIG. 6 shows an example of the monitoring image K. As shown in the figure, in the monitoring image K of this embodiment, the captured image GSP1 of the area SP1 taken by the front camera 11a, the captured image GSP2 of the area SP2 taken by the left-side camera 11b, the captured image GSP3 of the area SP3 taken by the rear camera 11c, and the captured image GSP4 of the area SP4 taken by the right-side camera 11d are arranged side by side horizontally in this order along the direction P from the left side to the right side of the drawing, and these four images form one continuous image. By displaying the monitoring image K generated in this way in order from the left end toward the right, with the part corresponding to the road surface (the ground contact surface of the moving body V) at the bottom, the monitor can view it on the display 24 just as if looking around the moving body V counterclockwise. The monitoring image K in the form shown in FIG. 6 can thus be reproduced with the images arranged in the intended order, without the positional relationships being disturbed by processing such as transmission and reception.
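A minimal sketch of this joining step is shown below, using NumPy arrays as camera frames; it assumes the four frames share the same height, which holds for the example sizes in this embodiment.

import numpy as np

def build_monitoring_image(gsp1, gsp2, gsp3, gsp4):
    """Join the four captured images side by side in the installation order 11a -> 11b -> 11c -> 11d."""
    frames = [gsp1, gsp2, gsp3, gsp4]               # front, left side, rear, right side
    height = min(frame.shape[0] for frame in frames)
    frames = [frame[:height] for frame in frames]   # defensive crop so hstack always succeeds
    return np.hstack(frames)                        # one continuous image, joined along the vertical edges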
 なお、一つの監視画像Kを生成する際には、各カメラ11a~11dの撮影タイミングを略同時にして取得した4つの画像が用いられる。これにより、監視画像Kに含まれる情報を同期させることができるので、所定タイミングにおける移動体V周囲の状況を正確に表現することができる。 It should be noted that when one monitoring image K is generated, four images acquired at substantially the same time as the photographing timings of the cameras 11a to 11d are used. Thereby, since the information included in the monitoring image K can be synchronized, the situation around the moving object V at a predetermined timing can be accurately expressed.
Monitoring images K generated from captured images whose imaging timings by the cameras 11 are substantially simultaneous may also be stored over time to generate a moving-image monitoring image K containing a plurality of monitoring images K per predetermined unit time. By generating the moving-image monitoring image K from images captured at the same timing, changes in the situation around the moving body V can be represented accurately.
If the images of each imaging area were stored separately over time and a moving-image monitoring image generated for each imaging area were transmitted to the central monitoring device 20, the central monitoring device 20 might, depending on its capabilities, be unable to play back multiple moving images at the same time. In such a conventional central monitoring device 20, since a plurality of moving images cannot be reproduced and displayed simultaneously, the screen must be switched and the moving images played back one at a time. In other words, the conventional central monitoring device 20 has the disadvantage that video (moving images) in multiple directions cannot be viewed at the same time, and the entire surroundings of the moving body V cannot be monitored on a single screen.
In contrast, the control device 14 of this embodiment generates one monitoring image K from a plurality of images, so images of different imaging directions can be played back as moving images simultaneously, regardless of the capabilities of the central monitoring device 20. That is, by playing back the monitoring images K continuously (moving-image playback), the four images contained in the monitoring image K are played back continuously at the same time, and changes of state in areas of different directions can be monitored on a single screen.
The monitoring terminal device 10 of this embodiment can also generate the monitoring image K by compressing the image data so that the pixel count of the monitoring image K is substantially the same as the pixel count of each of the images of the cameras 11a to 11d. Whereas each of the images shown in FIGS. 5A to 5D is 480 × 640 pixels, in this embodiment compression is performed so that the monitoring image K is 1280 × 240 pixels, as shown in FIG. 6. As a result, the size of the monitoring image K (1280 × 240 = 307,200 pixels) is equal to the pixel count of each individual image (480 × 640 = 307,200 pixels), so image processing and image playback can be performed regardless of the capabilities of the central monitoring device 20 that receives the monitoring image K. Even when each of the images shown in FIGS. 5A to 5D is 960 × 1280 pixels, a compressed monitoring image K in which the four images are joined as shown in FIG. 6 can likewise be generated.
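The compression step could, for instance, be realised with a simple resize, as in the sketch below; OpenCV is used here purely as an example library.

import cv2

def compress_monitoring_image(joined_image):
    """Shrink the joined 480 x 2560 image to 1280 x 240 pixels, i.e. the pixel count of one 480 x 640 frame."""
    return cv2.resize(joined_image, (1280, 240), interpolation=cv2.INTER_AREA)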
Although not particularly limited, the first image of the first resolution in this embodiment includes the 960 × 1280 pixel images shown in FIGS. 5A to 5D, as well as a monitoring image K obtained by combining these four images into one image as shown in FIG. 6 and compressing it to 960 × 1280 pixels. The second image of the second resolution includes the 480 × 640 pixel images shown in FIGS. 5A to 5D, as well as a monitoring image K obtained by combining these four images into one image as shown in FIG. 6 and compressing it to 1280 × 240 pixels.
The control device 14 of the monitoring terminal device 10 of this embodiment can also add to the monitoring image K line figures indicating the boundaries between the arranged images. Taking the monitoring image K shown in FIG. 6 as an example, the control device 14 can add to the monitoring image K rectangular partition images Bb, Bc, Bd, Ba, and Ba′ between the images as line figures indicating the boundaries between the arranged images. By placing partition images at the boundaries of the four images in this way, each image of a different imaging direction can be recognized separately within the continuous monitoring image K. In other words, the partition images function as frames for the respective captured images. Moreover, since image distortion is large near the boundaries of each captured image, placing the partition images at these boundaries makes it possible to hide the heavily distorted regions or to indicate that the distortion there is large.
The control device 14 of this embodiment can also generate the monitoring image K after correcting the distortion that would occur if the four images were projected onto the projection planes set on the side surfaces of the projection model described later. The peripheral regions of a captured image are prone to distortion, and the distortion tends to be particularly large when the camera 11 uses a wide-angle lens; it is therefore desirable to correct the distortion of the captured images using an image conversion algorithm and correction amounts defined in advance for that purpose.
Although not particularly limited, the control device 14 can, as shown in FIG. 7, read from the ROM information on the same projection model as the projection model onto which the central monitoring device 20 projects the monitoring image K, project the captured images onto the projection planes of this model, and correct in advance the distortion that arises on the projection planes. The image conversion algorithm and the correction amounts can be defined appropriately according to the characteristics of the cameras 11 and the shape of the projection model. By correcting in advance the distortion that would occur when the image K is projected onto the projection planes of the projection model in this way, a monitoring image K with little distortion and good visibility can be provided. Correcting the distortion in advance also reduces the positional misalignment between the images arranged side by side.
Next, the mapping information addition function will be described. In the monitoring terminal device 10 of this embodiment, the control device 14 executes a process of associating with the monitoring image K mapping information for projecting the generated monitoring image K onto the projection planes set on the side surfaces of a columnar projection model M whose bottom surface is the ground contact surface of the passenger car V. The mapping information is information that allows the central monitoring device 20 that receives the monitoring image K to easily recognize the projection reference position. FIG. 8 shows an example of the projection model M of this embodiment, and FIG. 9 is a schematic cross-sectional view of the projection model M shown in FIG. 8 taken along the xy plane.
 図8、9に示すように、本実施形態の投影モデルMは、底面が正八角形で、鉛直方向(図中z軸方向)に沿って高さを有する正八角柱体である。なお、投影モデルMの形状は、底面の境界に沿って隣接する側面を有する柱体であれば特に限定されず、円柱体、若しくは三角柱体、四角柱体、六角柱体などの角柱体、又は底面が多角形で側面が三角形の反角柱体とすることもできる。 8 and 9, the projection model M of the present embodiment is a regular octagonal prism body having a regular octagonal bottom surface and a height along the vertical direction (z-axis direction in the figure). Note that the shape of the projection model M is not particularly limited as long as it is a column having side surfaces adjacent to each other along the boundary of the bottom surface, and is a cylinder, or a prism, such as a triangular column, a quadrangular column, or a hexagonal column, or An anti-rectangular column having a polygonal bottom surface and a triangular side surface can also be used.
As shown in the figures, the bottom surface of the projection model M of this embodiment is parallel to the ground contact surface of the passenger car V. On the inner sides of the side surfaces of the projection model M, projection planes Sa, Sb, Sc, and Sd (hereinafter collectively referred to as projection planes S) are set, on which video of the surroundings of the passenger car V standing on the bottom surface of the projection model M is displayed. A projection plane S can also be formed from part of the projection plane Sa and part of the projection plane Sb, part of Sb and part of Sc, part of Sc and part of Sd, or part of Sd and part of Sa. The monitoring image K is projected onto the projection planes S as video looking down on the passenger car V from viewpoints R (R1 to R8, hereinafter collectively referred to as viewpoints R) above the projection model M surrounding the passenger car V.
The control device 14 of this embodiment associates with the monitoring image K, as mapping information, the reference coordinates of the captured images placed at the right end or left end. Taking the monitoring image K shown in FIG. 6 as an example, the control device 14 attaches to the monitoring image K, as mapping information (reference coordinates) indicating the start position or end position of the monitoring image K when it is projected onto the projection model M, the coordinates A(x, y) of the upper-left vertex of the captured image GSP1 placed at the right end and the coordinates B(x, y) of the upper-right vertex of the captured image GSP2 placed at the left end. The reference coordinates of the captured images indicating the start or end position are not particularly limited, and may instead be the lower-left vertex of the monitoring image K at the left end or the lower-right vertex of the monitoring image K at the right end. The mapping information may be attached to each pixel of the image data of the monitoring image K, or may be managed as a file separate from the monitoring image K.
By associating with the monitoring image K, as mapping information, the information indicating its start or end position, that is, the reference coordinates used as the reference in the projection process, the central monitoring device 20 that receives the monitoring image K can easily recognize the reference position during projection, and can therefore project the monitoring image K, in which the images are arranged in the installation order of the cameras 11a to 11d, onto the projection planes S on the side surfaces of the projection model M easily, quickly, and in order. That is, as shown in FIG. 9, the captured image GSP1 of the area in front of the moving body V can be projected onto the projection plane Sa located in the imaging direction of the camera 11a, the captured image GSP2 of the left side of the moving body V onto the projection plane Sb located in the imaging direction of the camera 11b, the captured image GSP3 of the area behind the moving body V onto the projection plane Sc located in the imaging direction of the camera 11c, and the captured image GSP4 of the right side of the moving body V onto the projection plane Sd located in the imaging direction of the camera 11d.
 これにより、投影モデルMに投影された監視画像Kは、あたかも乗用車Vの周囲を見回したときに見える映像を示すことができる。つまり、カメラ11a~11dの設置順序に応じて水平方向一列に配置された4つの画像を含む監視画像Kは、投影モデルMの柱体において、同じく水平方向に並ぶ側面に投影されるので、柱体の投影モデルMの投影面Sに投影された監視画像Kに、乗用車Vの周囲の映像をその位置関係を維持したまま再現することができる。 Thereby, the monitoring image K projected on the projection model M can show an image that can be seen as if looking around the passenger car V. In other words, the monitoring image K including four images arranged in a line in the horizontal direction in accordance with the installation order of the cameras 11a to 11d is projected on the side surfaces arranged in the horizontal direction in the column of the projection model M. An image around the passenger car V can be reproduced in the monitoring image K projected on the projection plane S of the body projection model M while maintaining the positional relationship.
 なお、本実施形態の制御装置14は、監視画像Kの各座標値と投影モデルMの各投影面Sの座標値との対応関係をマッピング情報として記憶し、監視画像Kに付することができるが、中央監視装置20に予め記憶させてもよい。 Note that the control device 14 of the present embodiment can store the correspondence between the coordinate values of the monitoring image K and the coordinate values of the projection planes S of the projection model M as mapping information, and attach it to the monitoring image K. However, it may be stored in the central monitoring device 20 in advance.
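The association of the reference coordinates and the coordinate correspondence with the monitoring image K might be kept as a separate metadata record, as in the following sketch; the dictionary layout is an assumption, since this description allows the information to be attached per pixel, managed in a separate file, or stored in advance in the central monitoring device 20.

def attach_mapping_info(monitoring_image, coord_a, coord_b, plane_correspondence=None):
    """Bundle the monitoring image K with its mapping information.

    coord_a, coord_b: (x, y) reference coordinates marking the start/end of the image
    plane_correspondence: optional mapping from image coordinates to projection-plane coordinates
    """
    return {
        "image": monitoring_image,
        "mapping": {
            "reference_coordinates": {"A": coord_a, "B": coord_b},
            "plane_correspondence": plane_correspondence,
        },
    }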
 また、図8,9に示す視点R、投影面Sの位置は例示であり、任意に設定することができる。特に、視点Rは、操作者の操作によって変更可能である。視点Rと監視画像Kの投影位置との関係は予め定義されており、視点Rの位置が変更された場合には所定の座標変換を実行することにより、新たに設定された視点Rから見た監視画像Kを投影面S(Sa~Sd)に投影することができる。この視点変換処理には公知の手法を用いることができる。 Further, the positions of the viewpoint R and the projection plane S shown in FIGS. 8 and 9 are examples, and can be arbitrarily set. In particular, the viewpoint R can be changed by the operation of the operator. The relationship between the viewpoint R and the projection position of the monitoring image K is defined in advance, and when the position of the viewpoint R is changed, a predetermined coordinate transformation is performed, so that the viewpoint R is viewed from the newly set viewpoint R. The monitoring image K can be projected onto the projection surface S (Sa to Sd). A known method can be used for this viewpoint conversion processing.
As described above, the control device 14 of this embodiment generates the monitoring image K on the basis of images captured at predetermined timings, associates with this monitoring image K the mapping information, the reference coordinates, and the information on the line figures (partition images) indicating the boundaries, and stores them over time according to the imaging timing. Although not particularly limited, the control device 14 may store the monitoring images K as a single moving-image file containing a plurality of monitoring images K per predetermined unit time, or may store the monitoring images K in a form that can be transferred and played back by streaming.
The control device 14 of this embodiment can also include in the monitoring information the moving speed acquired from a device that detects the moving speed of the moving body V, in this example the vehicle speed sensor 16 provided in the vehicle. This moving speed (vehicle speed) is used on the central monitoring device 20 side when setting the composition condition to be included in the monitoring information transmission command.
Meanwhile, the communication device 23 of the central monitoring device 20 receives the monitoring image K transmitted from the monitoring terminal device 10 and the mapping information associated with it. As described above, this monitoring image K is one in which the images of the four cameras 11 installed at different positions on the body of the passenger car V are arranged according to the installation order of the cameras 11a to 11d placed clockwise or counterclockwise along the outer periphery of the body of the passenger car V (the clockwise or counterclockwise order along the outer periphery of the body of the moving body V). The monitoring image K is also associated with mapping information for projecting it onto the projection planes S of the octagonal prism projection model M. The communication device 23 passes the acquired monitoring image K and mapping information to the image processing device 22.
The image processing device 22 reads the projection model M stored in advance and, on the basis of the mapping information, generates a display image in which the monitoring image K is projected onto the projection planes Sa to Sd set on the side surfaces of the octagonal prism projection model M whose bottom surface is the ground contact surface of the passenger car V, as shown in FIGS. 8 and 9. Specifically, in accordance with the mapping information, each pixel of the received monitoring image K is projected onto the corresponding pixel of the projection planes Sa to Sd. When projecting the monitoring image K onto the projection model M, the image processing device 22 recognizes the start point of the monitoring image K (its right end or left end) on the basis of the reference coordinates received together with the monitoring image K, and performs the projection so that this start point coincides with the start point defined in advance on the projection model M (the right end or left end of the projection planes S). In addition, when projecting the monitoring image K onto the projection model M, the image processing device 22 places line figures (partition images) indicating the boundaries of the images on the projection model M. The partition images may be attached to the projection model M in advance, or may be attached to the monitoring image K after the projection process.
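A simplified sketch of this step on the central monitoring device 20 side is given below: the received monitoring image K is cut back into four strips, aligned using the received start coordinate, and each strip is handed to the projection plane lying in the corresponding camera's imaging direction. The actual texture mapping onto the three-dimensional faces of the projection model M is left to the rendering layer and is not shown.

import numpy as np

PROJECTION_PLANES = ["Sa", "Sb", "Sc", "Sd"]   # planes in the cameras' installation order

def split_for_projection(monitoring_image, start_x=0):
    """Return one image strip per projection plane, starting from the reference x coordinate."""
    shifted = np.roll(monitoring_image, -start_x, axis=1)   # put the start point at x = 0
    strip_width = shifted.shape[1] // 4
    return {plane: shifted[:, i * strip_width:(i + 1) * strip_width]
            for i, plane in enumerate(PROJECTION_PLANES)}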
 ディスプレイ24又は表示機能を備えるタッチパネル式のディスプレイ24は、投影モデルMの投影面Sに投影した監視画像Kを表示する。図10~図17は、監視画像Kの表示画像の一例を示す。なお、図11に示す白抜きの二重丸は、後に説明する注視方向の入力(特定)の手法を説明するために付したものであり、監視画像Kに含まれるものではない。 The display 24 or the touch panel display 24 having a display function displays the monitoring image K projected on the projection surface S of the projection model M. 10 to 17 show examples of display images of the monitoring image K. FIG. Note that white double circles shown in FIG. 11 are attached to explain the method of inputting (specifying) the gaze direction, which will be described later, and are not included in the monitoring image K.
 図10には、図8,9に示す視点R1から見た投影面Sd、Sa、Sbに投影された監視画像Kを示す。投影モデルMの底面には各視点Rから見た移動体Vの画像が貼り付けられている。また、投影面Sd、Sa、Sbの間にある画像が表示されていない部分は「境界を示す線図形(仕切り画像)」である。 FIG. 10 shows a monitoring image K projected on the projection surfaces Sd, Sa, Sb viewed from the viewpoint R1 shown in FIGS. An image of the moving body V viewed from each viewpoint R is pasted on the bottom surface of the projection model M. Further, the portion where the image between the projection surfaces Sd, Sa, and Sb is not displayed is a “line figure indicating a boundary (partition image)”.
 同様に、図11には視点R2から見た監視画像Kを示し、図12には視点R3から見た監視画像Kを示し、図13には視点R4から見た監視画像Kを示し、図14には視点R5から見た監視画像Kを示し、図15には視点R6から見た監視画像Kを示し、図16には視点R7から見た監視画像Kを示し、図17には視点R8から見た監視画像Kを示す。 Similarly, FIG. 11 shows the monitoring image K viewed from the viewpoint R2, FIG. 12 shows the monitoring image K viewed from the viewpoint R3, FIG. 13 shows the monitoring image K viewed from the viewpoint R4, and FIG. 15 shows the monitoring image K viewed from the viewpoint R5, FIG. 15 shows the monitoring image K viewed from the viewpoint R6, FIG. 16 shows the monitoring image K viewed from the viewpoint R7, and FIG. The observed monitoring image K is shown.
 As described above, the terminal device 800 of the present embodiment arranges the captured image of each camera 1 side by side along the x-axis direction or the y-axis direction according to the installation order of the cameras 1 installed on the body of the moving body V, and maps the resulting monitoring image K laterally along the side faces of the columnar projection model M according to that arrangement order. The monitoring image K shown on the projection model M can therefore present the video that would be seen when looking around the periphery of the moving body V in the clockwise direction. In other words, by viewing the monitoring image K, the observer can obtain the same information as when boarding the moving body V and looking around, while remaining at a location remote from the moving body V.
 In particular, as shown in FIGS. 10 to 17, by showing the image of the moving body V on the bottom face of the projection model M, the positional relationship between the orientation of the moving body V and each captured image can be made easy to understand. That is, the captured image GSP1 of the camera 1a provided on the front grille of the moving body V can be projected onto the projection surface S facing the front grille of the moving body V; the captured image GSP4 of the camera 1d provided on the right side mirror of the moving body V can be projected onto the projection surface S facing the right side mirror of the moving body V; the captured image GSP3 of the camera 1c provided on the rear part of the moving body V can be projected onto the projection surface S facing the rear part of the moving body V; and the captured image GSP2 of the camera 1b provided on the left side mirror of the moving body V can be projected onto the projection surface S facing the left side mirror of the moving body V. Thus, as shown in FIGS. 10 to 17, according to the present embodiment, the monitoring image K in which the video around the moving body V is projected while its positional relationships are maintained can be presented, so the observer can easily grasp what is happening around the moving body V.
 Incidentally, since FIGS. 10 to 17 attached to the present application are still images, the actual playback state of the display image cannot be shown; however, in the central monitoring device 20 of the present embodiment, each image shown on each projection surface S of the display screen of the display 24 is a moving image. That is, a moving image of the imaging area SP1 ahead of the moving body V is shown on the projection surface S facing the front grille of the moving body V; a moving image of the imaging area SP4 on the right side of the moving body V is shown on the projection surface S facing the right side mirror of the moving body V; a moving image of the imaging area SP3 behind the moving body V is shown on the projection surface S facing the rear part of the moving body V; and a moving image of the imaging area SP2 on the left side of the moving body V is shown on the projection surface S facing the left side mirror of the moving body V. In other words, a plurality of moving-image monitoring images K based on the images captured by the different cameras 1 can be played back simultaneously on the respective projection surfaces S shown in FIGS. 10 to 17.
 By providing an input device 25 such as a mouse or a keyboard, or by adopting a touch-panel display 24 that also serves as the input device 25, the observer can freely set and change the viewpoint by operation. Since the correspondence between the viewpoint position and the projection surface S is defined in advance in the image processing device 22 or the display 24 described above, the monitoring image K corresponding to the changed viewpoint can be displayed on the display 24 based on this correspondence.
 The resolution of each image included in each display image shown in FIGS. 10 to 17 may be made common from the viewpoint of easy viewing, or the resolutions may be made different from the viewpoint of reducing the amount of communication data. In the present embodiment, the resolutions of the images constituting the monitoring image K at a given timing may differ, but the monitoring images K of a plurality of moving images can still be played back simultaneously.
 Next, the operation of the monitoring system 1 according to the present embodiment will be described. FIG. 18 is a flowchart showing the operation on the monitoring terminal device 10 side, and FIGS. 19A, 19B, and 19C are flowcharts showing the operation on the central monitoring device 20 side.
 As shown in FIG. 18, the monitoring terminal device 10 acquires the surrounding video and the in-vehicle video from the onboard cameras 11 at a predetermined time interval (one routine shown in FIG. 18) and converts them into a monitoring image with the image processing device 12 (step ST1). The current position information of the passenger car V on which the monitoring terminal device 10 is mounted is also detected by the position detection device 15 equipped with GPS (step ST2).
 In step ST3, it is determined whether or not the report button 16 for reporting an abnormality has been pressed. If the report button 16 has been pressed, the process proceeds to step ST4, where the monitoring image acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated with one another and transmitted as monitoring information, together with abnormality information indicating that an abnormality has occurred, to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. As a result, the occurrence of a security-related abnormality such as an accident or a crime is automatically transmitted to the central monitoring device 20 together with the position information of the passenger car V and the monitoring image around the passenger car V, so that monitoring in the town is further strengthened. In this example, the monitoring image and the position information are acquired in the first steps ST1 and ST2, but they may instead be acquired at a timing between steps ST3 and ST4.
 Returning to step ST3, if the report button 16 has not been pressed, the process proceeds to step ST5, where the monitoring terminal device 10 communicates with the central monitoring device 20 and acquires a control command.
 Subsequently, in step ST6, the monitoring terminal device 10 determines whether or not a monitoring information transmission command has been acquired from the central monitoring device 20, and proceeds to step ST7 if one has been acquired. In step ST7, it is determined whether or not the acquired monitoring information transmission command includes a command to generate a monitoring image in which images of different resolutions are mixed under a predetermined composition condition. If the monitoring information transmission command includes such a command, the process proceeds to step ST8 and the resolution is changed. The resolution change command is sent to the image processing device 12, which performs image editing processing according to the resolution, or to a camera 11 having a resolution setting function. If the acquired monitoring information transmission command does not include a command to generate a monitoring image in which images of different resolutions are mixed under a predetermined composition condition, the monitoring image is generated at the relatively high first resolution without changing the resolution. In other words, when the monitoring information transmission command includes a composition condition, the monitoring image is generated by mixing in second images of the relatively low second resolution.
 After the resolution setting is changed, the process proceeds to step ST9, where the control device 14 generates first images of the first resolution and second images of the second resolution in accordance with the composition condition of the monitoring information transmission command, and mixes the first images and the second images at the ratio specified by the composition condition. In the present embodiment, the ratio at which the first images and the second images are mixed in the composition condition is adjusted by the interval, frequency, or cycle at which the first images and the second images are generated and transmitted. That is, the mixing ratio of the first images and the second images in the composition condition can be adjusted by the ratio between the number of first images of the first resolution generated (or transmitted) per unit time and the number of second images of the second resolution generated (or transmitted) per unit time.
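 As a minimal sketch of this idea, assuming illustrative image sizes of 1280×960 and 640×480 pixels and names that are not from the patent, a composition condition expressed as counts per unit time can be turned into a per-frame resolution plan as follows.

```python
FIRST_RES = (1280, 960)   # relatively high first resolution (assumed size)
SECOND_RES = (640, 480)   # relatively low second resolution (assumed size)

def build_resolution_plan(n_first, n_second):
    """Resolution sequence for one unit of time: n_first high-resolution frames
    and n_second low-resolution frames, interleaved so that each high-resolution
    frame is followed by its share of low-resolution frames."""
    if n_first == 0:
        return [SECOND_RES] * n_second
    per_slot, extra = divmod(n_second, n_first)
    plan = []
    for i in range(n_first):
        plan.append(FIRST_RES)
        plan.extend([SECOND_RES] * (per_slot + (1 if i < extra else 0)))
    return plan

# Example: one first image and three second images per unit time.
print(build_resolution_plan(1, 3))
# [(1280, 960), (640, 480), (640, 480), (640, 480)]
```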
 Further, in the present embodiment, when each camera 11 has a resolution setting function for capturing the vehicle surroundings at a predetermined resolution, the control device 14 can cause the vehicle surroundings to be captured at the first resolution or the second resolution in accordance with the composition condition of the monitoring information transmission command, and can generate a monitoring image in which the first images and the second images are mixed under the predetermined composition condition.
 In the present embodiment, the first images of the first resolution and the second images of the second resolution can be generated by the image processing device 12, which has a function of processing the images captured by the cameras 11 into images of a predetermined resolution. The control device 14 causes the image processing device 12 to generate a monitoring image in which first images of the first resolution and second images of a second resolution lower than the first resolution are mixed under the predetermined composition condition, in accordance with the composition condition of the monitoring information transmission command.
 Then, in accordance with the monitoring information transmission command, the control device 14 transmits to the central monitoring device 20 monitoring information including the monitoring image conforming to the composition condition, the time information, and the position information. The monitoring information may include the moving speed of the moving body V as necessary. If the monitoring information transmission command includes a storage command, the monitoring information such as the monitoring image, the position information, and the time information is stored in the memory of the monitoring terminal device 10.
 Returning to step ST6, if a monitoring information transmission command has not been acquired from the central monitoring device 20, the process proceeds to step ST10, where it is determined whether or not the passenger car V is located in a predefined priority monitoring area. If the passenger car V is located in a predefined priority monitoring area, monitoring information including a monitoring image is transmitted. In this example, monitoring information including a monitoring image is transmitted without changing the communication rate, on the grounds that a detailed monitoring image is preferable within the priority monitoring area; however, the processing of steps ST7 and ST8 described above may also be performed between step ST10 and step ST12. On the other hand, if no monitoring information transmission command has been acquired and the vehicle is not in a priority monitoring area, the process proceeds to step ST11, where monitoring information not including a monitoring image, that is, time information and position information, is transmitted to the central monitoring device 20.
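 A minimal sketch of this per-routine decision (steps ST6 to ST11), with an assumed command structure and assumed helper names that are not part of the patent, could look as follows.

```python
def handle_routine(command, in_priority_area, capture, send):
    """One routine of the terminal-side decision after the report button check.

    command: monitoring information transmission command as a dict, or None.
    capture(composition): returns a monitoring image (composition may be None).
    send(**fields): transmits monitoring information to the central device.
    """
    if command is not None:                        # ST6: command received
        composition = command.get("composition")   # ST7: mixed-resolution request?
        image = capture(composition)               # ST8/ST9: generate monitoring image
        send(image=image, position=True, time=True)
    elif in_priority_area:                         # ST10: inside priority monitoring area
        image = capture(None)                      # full first-resolution monitoring image
        send(image=image, position=True, time=True)
    else:                                          # ST11: position and time only
        send(image=None, position=True, time=True)
```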
 Next, the processing on the central monitoring device 20 side will be described based on FIGS. 19A to 19C.
 In step ST11 of FIG. 19A, position information and time information are acquired from all the passenger cars V and accumulated in the database 26. In the database 26, the monitoring information including the monitoring image, the position information, and the time information acquired from each passenger car V (monitoring terminal device 10) is stored in association with the position information. That is, when position information is specified, the corresponding series of monitoring information can be retrieved. This monitoring information can also include a moving body ID (monitoring terminal device ID) for identifying the monitoring terminal device 10. The moving body ID may be the address of the communication device 13 of the monitoring terminal device 10.
 In step ST12, based on the position information acquired in step ST11, the position of each passenger car V is displayed by superimposing a dot on the map information of the map database shown on the display 24, as illustrated in the upper left of FIG. 1. By looking at this map information, it is possible to see at a glance where the moving bodies V carrying the monitoring terminal devices 10 are traveling. In other words, the moving body V carrying the monitoring terminal device 10 located at the position to be monitored can be identified. Since the position information of the passenger cars V is acquired and transmitted at the predetermined timing of each routine in FIG. 18, the observer can grasp the current position of each passenger car V in a timely manner.
 In step ST13, it is determined whether or not abnormality information reported from the monitoring terminal device 10 of a passenger car V, that is, a report that a security-related abnormality such as an accident or a crime has occurred, has been received. This abnormality information is output when an occupant of the passenger car V presses the report button 16 of the monitoring terminal device 10.
 If there is abnormality information, the passenger car V from which the abnormality information was output is identified in step ST14, the monitoring image and the time information are received from the monitoring terminal device 10 of that passenger car, and the monitoring image is displayed on the display 24. In addition, as shown in the upper left of FIG. 1, that passenger car is highlighted on the map information, for example by changing its color so that it can be distinguished from the other passenger cars. As a result, the position where the abnormality occurred can be confirmed on the map information, and the content of the abnormality can be grasped on the display 24.
 In the next step ST15, passenger cars V traveling in the vicinity (within a predetermined distance) of the passenger car V that output the abnormality information are detected, and a transmission command for monitoring information including a monitoring image and time information is output to those passenger cars V. Since monitoring information can thereby be acquired from the passenger cars V traveling in the vicinity of the passenger car V that output the abnormality information, the content of the abnormality can be grasped in detail from a plurality of monitoring images in addition to the monitoring image from the passenger car V that output the abnormality information. The monitoring information transmission command can include a composition condition, which is described later.
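 The following is a minimal sketch of how such nearby vehicles could be selected, assuming (this is not specified in the patent) that the stored positions are latitude/longitude pairs in degrees and that the predetermined distance is given in metres.

```python
import math

def distance_m(p, q):
    """Approximate great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_vehicles(reporter_pos, latest_positions, radius_m=500.0):
    """latest_positions: {vehicle_id: (lat, lon)}, e.g. the latest entries in the database 26."""
    return [vid for vid, pos in latest_positions.items()
            if distance_m(reporter_pos, pos) <= radius_m]
```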
 In step ST16, the position information of the passenger car V that output the abnormality information is transmitted to emergency vehicles such as police cars, ambulances, and fire engines. In this case, the monitoring image may be attached and transmitted in order to report the content of the abnormality. As a result, the emergency vehicle can be dispatched before a report arrives from the scene, enabling a quick response to the accident or crime.
 In step ST17, all the position information, monitoring images, and time information received from the monitoring terminal devices 10 are recorded on a recording medium. These records are used to resolve accidents and crimes after they have occurred. If there is no abnormality information in step ST13, the process proceeds to step ST21 of FIG. 19B without performing the processing of steps ST14 to ST17.
 In step ST21, it is determined whether or not there is an image information transmission command from an emergency vehicle such as a police car, an ambulance, or a fire engine. If an image transmission command has been input, the process proceeds to step ST22. In step ST22, it is determined whether or not a passenger car V exists in the area specified by the image transmission command; if one exists, the process proceeds to step ST23. Then, in step ST23, a monitoring information transmission command is output to the passenger cars V existing in the area specified by the image transmission command. Image information from those passenger cars V can thereby also be acquired in step ST11 of FIG. 19A in the next routine, and this information can be transferred to the emergency vehicle or used to understand the intent of the transmission command from the emergency vehicle. If the conditions of steps ST21 and ST22 are not met, the process proceeds to step ST24 without performing the processing of steps ST21 to ST23.
 In step ST24, it is determined whether or not a passenger car V exists in the vicinity of a suspicious location, such as a preset crime-prone area. If one exists, the process proceeds to step ST25, where a transmission command for monitoring information including a monitoring image is output to that passenger car V. Suspicious locations are streets, neighborhoods, and the like with poor public safety. This makes it possible to strengthen the monitoring of streets and neighborhoods that are suspicious locations, and crime prevention can be expected. If no passenger car V exists in the vicinity of a suspicious location, the process proceeds to step ST26 without performing the processing of step ST25.
 In step ST26, it is determined whether or not a passenger car V exists in the vicinity of a priority monitoring position from which a priority monitoring target whose details should be monitored can be imaged. If a passenger car V exists in the vicinity of the priority monitoring position, the process proceeds to step ST27, where a priority monitoring command requesting transmission of monitoring information including a monitoring image in which the priority monitoring target is enlarged is output to that passenger car V. This makes it possible to monitor the priority monitoring target in detail and to effectively detect suspicious objects that may cause incidents or accidents at the identified priority monitoring target, so that crime prevention can be expected. If no passenger car V exists in the vicinity of the priority monitoring position, the process proceeds to step ST28 without performing the processing of step ST27.
 In step ST28, based on the position information received from each passenger car V, it is determined whether or not, within a predetermined area requiring monitoring (not limited to suspicious locations and priority monitoring areas), there is a route on which no passenger car V has traveled within a certain period of time. If such a route exists, it is monitored whether or not a passenger car V is traveling on that route. If a passenger car V has most recently traveled onto that route, the process proceeds to step ST29, where a transmission command for monitoring information including a monitoring image is output to that passenger car V. This makes it possible to automatically acquire monitoring images of routes that lie outside the suspicious locations and priority monitoring areas and on which passenger car traffic is light. If there is no route satisfying the condition of step ST28, the process returns to step ST11 of FIG. 19A without performing the processing of step ST29.
 The above-described monitoring information transmission command can include a command requesting transmission of the monitoring image, the position information, the time information, and the moving speed, and can further include a command setting the resolution of the images applied when the monitoring image is created.
 Next, the process proceeds to the processing of FIG. 19C. FIG. 19C is a flowchart showing the procedure of the processing relating to the generation and transmission of a monitoring information transmission command including a predetermined composition condition. The generation and transmission processing of the monitoring information transmission command of the present embodiment will be described below based on FIG. 19C and FIGS. 20 to 23. FIGS. 20 to 23 are diagrams for explaining the transmission state of the monitoring image based on each composition condition.
 First, in step ST30 of FIG. 19C, the central monitoring device 20 records the monitoring image, position information, time information, and moving speed acquired from each passenger car V on a recording medium in association with the identifier of each moving body V.
 Step ST31 and the subsequent steps are processing relating to the generation and transmission of the next monitoring information transmission command based on the information acquired up to the previous routine. In step ST31, the central monitoring device 20 places the monitoring images of the acquired monitoring information in a state in which they can be presented to the observer on the touch-panel display 24 having an input function. In step ST32, the central monitoring device 20 receives from the observer information identifying the moving body V located at the position to be monitored. This operation is, for example, clicking the dot corresponding to the moving body V displayed on the map information shown in the upper left of FIG. 1. Subsequently, in step ST33, the central monitoring device 20 receives the input of the gaze direction that the observer wants to watch. The gaze direction is input by touching, tapping, or slide-touching the screen of the touch-panel display 24. When the observer touches any point on the monitoring image, the direction of the touched point can be specified as the gaze direction. When the observer wants to pay attention to the left side of the moving body V, an input is made to select, within the monitoring image K, the captured image GSP2 in which the left-side camera 11b images the area SP2. Specifically, in the monitoring image K shown in FIG. 11, the observer can input the right direction of the moving body V as the gaze direction by touching the point PRa in the area of the touch-panel display 24 where the captured image GSP4 is presented.
 Returning to the flowchart of FIG. 19C, in step ST33, the central monitoring device 20 calculates the communication speed of the communication (exchange of information) with the monitoring terminal device 10 mounted on the identified vehicle V. The communication speed in the present embodiment is the amount of data per unit time of the information received by the central monitoring device 20 from the monitoring terminal device 10 side. As described above, the communication speed can be determined from the frame rate of the monitoring image processing in the central monitoring device 20 or from the time required by the monitoring terminal device 10 from transmitting information to confirming completion of the transmission.
 In step ST34, the central monitoring device 20 determines whether or not the communication speed is equal to or lower than a predetermined threshold (including whether or not the frame rate is equal to or lower than a predetermined threshold, and whether or not the time required from transmission to completion of transmission is equal to or longer than a predetermined threshold). If the central monitoring device 20 determines in step ST34 that the communication speed is equal to or lower than the predetermined threshold (the frame rate is equal to or lower than the predetermined threshold, or the time until completion of information transmission is equal to or longer than the predetermined value), the process proceeds to step ST35. In step ST35, generation of a composition condition for a monitoring image including images of different resolutions is started. The resolution can be specified for each frame. The composition condition in the present embodiment includes mixing, under a predetermined composition condition, first images of a relatively high first resolution and second images of a second resolution lower than the first resolution. The composition condition in the present embodiment defines that at least first images and second images of different resolutions are included at a predetermined ratio, but it can also define that images of further different resolutions, from a third resolution and a fourth resolution up to an n-th resolution, are included at predetermined ratios.
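 The following is a minimal sketch, with assumed threshold values and assumed names, of the two criteria the text mentions for judging whether the link is degraded: the frame rate of received monitoring images and the time from sending information to confirming completion of the transmission.

```python
def frame_rate(received_timestamps):
    """Frames per second over the observed window of received monitoring images."""
    if len(received_timestamps) < 2:
        return 0.0
    span = received_timestamps[-1] - received_timestamps[0]
    return (len(received_timestamps) - 1) / span if span > 0 else 0.0

def speed_degraded(received_timestamps, send_to_ack_seconds,
                   min_fps=5.0, max_ack=2.0):
    """True when either criterion indicates a degraded link (thresholds assumed)."""
    return frame_rate(received_timestamps) <= min_fps or send_to_ack_seconds >= max_ack
```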
 Although not particularly limited, the composition condition of the present embodiment defines a combination in which second images of the second resolution, which is lower than the first resolution, are combined with first images of the first resolution, the first resolution being the resolution of the images transmitted in the normal state in which the communication speed is not equal to or lower than the predetermined threshold. By causing the monitoring terminal device 10 to generate monitoring images according to this composition condition, the amount of information transmitted and received can be reduced or maintained.
 The composition condition includes the transmission frequency of the first images and the second images transmitted as the monitoring image, the ratio of first images to second images included in the monitoring images transmitted per unit time, the transmission order of the first images and the second images, and the transmission interval (cycle) of the first and second images transmitted sequentially according to that transmission order.
 FIG. 20 is a diagram for explaining the transmission state of the monitoring image based on the first composition condition. As shown in FIG. 20, in the normal-time processing (1), in which the communication speed between the central monitoring device 20 and the monitoring terminal device 10 has not decreased, the monitoring terminal device 10 transmits first images of the relatively high first resolution to the central monitoring device 20 at the timings t0 to t3 set at regular intervals. In contrast, in the processing (2) and (3), in which the communication speed between the central monitoring device 20 and the monitoring terminal device 10 has fallen below the predetermined threshold, the monitoring terminal device 10 alternately transmits, in accordance with the composition condition, first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at the timings t0 to t3 set at regular intervals. That is, the composition conditions shown in (2) and (3) of this example define that, after a first image of the first resolution is transmitted, one or more second images of the second resolution are transmitted once a predetermined interval has elapsed. Under the composition condition shown in (2) of this example, the number of second images transmitted at t1 and t3 is two (640×480 pixels × 2), so the amount of information communicated can be reduced compared with case (1), in which only one first image (1280×960 pixels × 1) is transmitted. Under the composition condition shown in (3) of this example, the number of second images transmitted at t1 and t3 is four (640×480 pixels × 4), so the amount of information involved in communication is the same as in case (1), in which only one first image (1280×960 pixels × 1) is transmitted; however, whereas two images are transmitted between t0 and t1 in case (1), as many as five images can be transmitted between t0 and t1 in case (3). In other words, many monitoring images can be provided to the observer while the amount of information is maintained.
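 As a small check of the comparison made for FIG. 20, using raw pixel count as a proxy for the data amount (compression is ignored here):

```python
first = 1280 * 960          # one first-resolution image: 1,228,800 pixels
second = 640 * 480          # one second-resolution image:   307,200 pixels

print(second * 2)           # condition (2): 614,400  -> half the data of one first image
print(second * 4 == first)  # condition (3): True     -> same data, but 4 frames instead of 1
```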
 In this way, when the communication speed has decreased, the monitoring terminal device 10 mounted on each moving body V is caused to transmit to the central monitoring device 20 a monitoring image composed by mixing high-resolution first images and low-resolution second images, so that an increase in the amount of information involved in communication can be suppressed while the transmission frequency of the information is maintained. That is, when the communication speed has decreased, the central monitoring device 20 controls the monitoring terminal device 10 so that, instead of acquiring a single detailed image with a large amount of communication data, it acquires many (a plurality of) less detailed images each with a small amount of communication data, so that information can be collected in real time without aggravating the congestion of the communication line that causes the decrease in communication speed.
 The central monitoring device 20 of the present embodiment can also generate a monitoring information transmission command for each monitoring direction relative to the moving body V. Specifically, the central monitoring device 20 can include, in the monitoring information transmission command, information specifying the monitoring direction relative to the moving body (front, rear, right side, left side), and can associate a resolution and a transmission frequency with each monitoring direction. The central monitoring device 20 can hold in advance information associating the information specifying the monitoring directions relative to the moving body V with the cameras 11a to 11d installed at predetermined positions on the moving body V to image the respective monitoring directions, and can include information identifying the cameras 11a to 11d in the monitoring information transmission command. Of course, the information associating the monitoring directions relative to the moving body V with the cameras 11a to 11d that image the respective monitoring directions may instead be stored on the moving body V side, and the monitoring terminal device 10 may identify the cameras 11a to 11d targeted by the monitoring information transmission command based on the monitoring direction included in that command. The central monitoring device 20 of the present embodiment can thus create and execute a monitoring information transmission command for each direction around the moving body V.
 For example, when the direction the observer wants to monitor is determined in advance, specifically when the traveling direction (front) of the moving body V is to be monitored, the camera 11a installed on the front portion of the moving body V can be specified, and the first images and the second images can be generated based on the image captured by this camera 11a. This makes it possible to reduce the amount of information involved in the communication between the central monitoring device 20 and the monitoring terminal device 10 by adjusting the resolution, while still acquiring a monitoring image that matches the observer's intent.
 FIG. 21 is a diagram for explaining the transmission state of the monitoring image based on the second composition condition. As shown in (1) of FIG. 21, the same composition condition can be defined for all directions (front: Fr, rear: Rr, right: Right, left: Left) or for all the cameras 11a to 11d. That is, in the example of (1), the monitoring terminal device 10 alternately transmits, for all directions (or the imaging directions of all the cameras 11a to 11d), first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at the timings t0 to t3 set at the constant interval f0. In contrast, in the present embodiment, as shown in (2) of FIG. 21, a different composition condition can be defined for each direction (front: Fr, rear: Rr, right: Right, left: Left) or for each of the cameras 11a to 11d. That is, in the example of (2), the monitoring terminal device 10 transmits, for each direction (or for the imaging direction of each of the cameras 11a to 11d), one first image of the relatively high first resolution followed by three second images at the timings t0 to t3 set at the constant interval f1. At this time, as shown in the example of (2), the transmission timings of the first images for the respective directions (or the imaging directions of the cameras 11a to 11d) can be shifted so that they do not overlap.
 Under the composition condition shown in (1) of FIG. 21, after four first images are transmitted at the timing t0, no image can be transmitted until, for example, the timing t1, that is, until the transmission interval f0 required to transmit the four first images has elapsed. In contrast, under the composition condition shown in (2) of the same figure, one first image and three second images are transmitted at the timing t0, and another first image and three second images can then be transmitted at the timing t0.5, after a transmission time f1 shorter than the time from t0 to t1 has elapsed. In this way, by mixing the first images and the second images under a predetermined composition condition and maintaining or reducing the amount of information involved in transmission, the image transmission interval is shortened, so that information can be collected in real time.
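 A minimal sketch of this staggered per-direction scheduling, with an assumed direction-to-camera layout and names that are not from the patent, could look as follows: each direction sends one high-resolution frame followed by three low-resolution frames, and the high-resolution slot is shifted per direction so that the large frames never coincide.

```python
DIRECTIONS = ["front", "rear", "right", "left"]

def staggered_schedule(n_slots=8):
    """Return {direction: list of 'HIGH'/'LOW' per transmission slot}."""
    schedule = {}
    for offset, direction in enumerate(DIRECTIONS):
        schedule[direction] = ["HIGH" if (slot - offset) % 4 == 0 else "LOW"
                               for slot in range(n_slots)]
    return schedule

for direction, plan in staggered_schedule().items():
    print(direction, plan)
# front ['HIGH', 'LOW', 'LOW', 'LOW', 'HIGH', ...]
# rear  ['LOW', 'HIGH', 'LOW', 'LOW', ...]   -> the HIGH slots of the four directions never overlap
```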
 Here, returning to step ST32, the process also proceeds to step ST40, which is performed in parallel. In step ST40, it is determined whether or not the observer has input, on the central monitoring device 20 side, a gaze direction that the observer wants to watch among the surroundings of the moving body V on which the monitoring terminal device 10 is mounted. If a gaze direction has been specified by the observer, the process proceeds to step ST36.
 In step ST36, when a gaze direction has been input, the central monitoring device 20 generates a monitoring information transmission command including a composition condition corresponding to the gaze direction. Since the gaze direction is the direction the observer wants to observe particularly carefully, it is preferable to provide the observer with relatively high-resolution images of the gaze direction even when the communication speed has decreased. For this reason, the central monitoring device 20 of the present embodiment generates a composition condition corresponding to the gaze direction. Specifically, when a specific gaze direction that the observer wants to watch among the surroundings of the moving body V is input, the central monitoring device 20 generates a composition condition under which first images of the relatively high first resolution are generated based on the image captured by the camera 11 that images the gaze direction, second images of the relatively low second resolution are generated based on the images captured by the cameras 11 that image directions other than the gaze direction, and these first and second images of different resolutions are mixed at a predetermined ratio. This composition condition is included in the monitoring information transmission command transmitted to the monitoring terminal device 10.
 FIG. 22 is a diagram for explaining the transmission state of the monitoring image based on the third composition condition. As shown in (1) of FIG. 22, the same composition condition can be defined for all directions (front: Fr, rear: Rr, right: Right, left: Left) or for all the cameras 11a to 11d. That is, in the example shown in (1) of FIG. 22, the monitoring terminal device 10 alternately transmits, for all directions (or the imaging directions of all the cameras 11a to 11d), first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at the timings t0 to t3 set at the constant interval f0. In contrast, in the present embodiment, as shown in (2) of FIG. 22, for the gaze direction designated by the observer (for example, front: Fr), the monitoring terminal device 10 generates a first image of the first resolution based on the image captured at timing P1 by the camera 11a, which images the front, that is, the gaze direction, generates second images of the second resolution based on the images captured at timing P1 by the cameras 11b, 11c, and 11d, which image the rear, right side, and left side, that is, directions other than the gaze direction, and transmits the monitoring image in which these are mixed to the central monitoring device 20 at the timing t0. Subsequently, it generates a first image of the first resolution based on the image captured at timing P2 by the camera 11a, which images the front, that is, the gaze direction, generates another first image of the first resolution based on the image captured at timing P2 by the camera 11c, which images the rear, which is not the gaze direction, further generates second images of the second resolution based on the images captured at timing P2 by the cameras 11b and 11d, which image the right side and the left side, that is, directions other than the gaze direction, and transmits the monitoring image in which these are mixed to the central monitoring device 20 at the timing t0.5.
 In this way, by generating first images of the relatively high first resolution for the gaze direction and second images of the relatively low second resolution for the other directions, the amount of information involved in the communication between the central monitoring device 20 and the monitoring terminal device 10 can be reduced by weighting the resolution of the gaze direction against that of the other directions, while monitoring images of all directions are still acquired evenly.
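 A minimal sketch of this selection, assuming an illustrative association between directions and the cameras 11a to 11d and illustrative image sizes (neither taken from the patent), could look as follows.

```python
CAMERA_BY_DIRECTION = {      # assumed association with cameras 11a to 11d
    "front": "11a", "left": "11b", "rear": "11c", "right": "11d",
}

def per_camera_resolution(gaze_direction,
                          first_res=(1280, 960), second_res=(640, 480)):
    """Return {camera_id: resolution} for one monitoring image: the camera covering
    the gaze direction uses the high first resolution, the others the low second."""
    return {cam: (first_res if direction == gaze_direction else second_res)
            for direction, cam in CAMERA_BY_DIRECTION.items()}

print(per_camera_resolution("front"))
# {'11a': (1280, 960), '11b': (640, 480), '11c': (640, 480), '11d': (640, 480)}
```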
 After the processing of steps ST40 and ST36, in step ST41, the central monitoring device 20 further acquires, via the telecommunication network 30, the vehicle speed detected by the vehicle speed sensor 16 of the passenger car V identified by the observer. Then, in step ST42, the central monitoring device 20 determines whether or not the vehicle speed is equal to or higher than a predetermined threshold. If the vehicle speed is equal to or higher than the predetermined threshold, the process proceeds to step ST37.
 In step ST37, the central monitoring device 20 generates a composition condition that increases the proportion of second images of the relatively low second resolution. When the vehicle speed is equal to or higher than the predetermined threshold, that is, when the moving body V is traveling fast, the video around the vehicle also changes greatly. Since the video around the vehicle changes from moment to moment, it is preferable to provide the observer with a large number of images at short intervals when the vehicle speed is high, even if the communication speed has decreased. For this reason, the central monitoring device 20 of the present embodiment generates a composition condition corresponding to the vehicle speed. Specifically, when the vehicle speed of the moving body V is equal to or higher than the predetermined threshold, the central monitoring device 20 increases the proportion of second images included in the composition condition. This composition condition is included in the monitoring information transmission command transmitted to the monitoring terminal device 10.
 FIG. 23 is a diagram for explaining the transmission state of the monitoring image based on the fourth composition condition. As shown in FIG. 23, in the processing (1), in which the vehicle speed is lower than the predetermined threshold, the monitoring terminal device 10 alternately transmits first images of the relatively high first resolution and second images of the relatively low second resolution to the central monitoring device 20 at the timings t0 to t3 set at regular intervals, so that they are included at the same proportion. In contrast, in the processing (2), in which the vehicle speed is equal to or higher than the predetermined threshold, the monitoring terminal device 10 transmits, in accordance with the composition condition in which the proportion of second images has been increased, a first image of the first resolution at the timing t0, then second images of the second resolution twice in succession at the timings t1 and t2, and then a first image of the first resolution at the timing t3. In (1), the ratio of first images to second images is 1:1, whereas in (2) it is 1:2.
 In this way, when the vehicle speed is equal to or higher than the predetermined threshold, the proportion of second images of the relatively low second resolution included in the monitoring image is increased; by weighting the proportion of second images differently between the case in which the vehicle speed is equal to or higher than the predetermined threshold and the case in which it is lower, monitoring images of the vehicle surroundings, which change greatly with the moving speed, can be provided in real time while the amount of information involved in the communication between the central monitoring device 20 and the monitoring terminal device 10 is reduced.
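 A minimal sketch of this speed-dependent adjustment, with an assumed speed threshold and the 1:1 and 1:2 ratios of FIG. 23 used for illustration:

```python
def composition_ratio(speed_kmh, speed_threshold=60.0):
    """Return (n_first, n_second) per cycle of the composition condition."""
    return (1, 2) if speed_kmh >= speed_threshold else (1, 1)

print(composition_ratio(40.0))   # (1, 1) -> e.g. first at t0, second at t1, first at t2, ...
print(composition_ratio(80.0))   # (1, 2) -> e.g. first at t0, second at t1 and t2, first at t3
```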
 In the subsequent step ST38, the central monitoring device 20 generates a monitoring information transmission command including the composition conditions generated in steps ST35 to ST37. Finally, in step ST39, the monitoring information transmission command including the composition conditions is transmitted to the vehicle previously identified in step ST32. This monitoring information transmission command is a command for controlling the next generation of monitoring information.
 If it is determined in step ST34 that the communication speed is not equal to or lower than the predetermined threshold (the frame rate exceeds the predetermined threshold, or the time required from the start of transmission by the terminal to the completion of transmission is less than the predetermined threshold), it suffices to have the monitoring terminal device 10 continue processing based on the previous monitoring information transmission command, so the processing ends without performing step ST35 and the subsequent steps, and the process returns to step ST30. If the observer does not specify a gaze direction in step ST40, the processing from step ST41 onward is performed. Furthermore, if the vehicle speed is lower than the predetermined threshold in step ST42, the process proceeds to step ST38.
 In step ST39, the monitoring information transmission command is transmitted to the identified vehicle V. The monitoring terminal device 10 that has acquired the monitoring information transmission command generates the first images and the second images in order to obtain a monitoring image conforming to the composition condition. As described above, first images and second images of different resolutions can be acquired either by having the image processing device 12 perform processing that changes the resolution of the captured images, or by changing the resolution of the cameras 11 to acquire captured images of the desired resolution.
 When the image processing device 12 processes a captured image taken by the camera 11 into an image of a predetermined resolution, it first performs a capture process, then a resizing process, then a compression process, and finally a transmission process. FIG. 24 shows the temporal load of each process when the generation of the first and second images is executed by the image processing device 12 alone, and when it is distributed between the camera 11 and the image processing device 12. In FIG. 24, the temporal load of each process is represented by a rectangular block.
 As shown in (1) of FIG. 24, when the generation of the first and second images is executed by the image processing device 12 alone, processing one captured image takes time tu1 and processing four captured images takes time tq1. On the other hand, as shown in (2) of FIG. 24, when the generation of the first and second images is distributed between the camera 11 and the image processing device 12, processing one captured image takes time tu2 and processing four captured images takes time tq2. The time tu1 required for the image processing device 12 alone to process one captured image and generate a first or second image does not differ greatly from the time tu2 required when that processing is distributed between the camera 11 and the image processing device 12. However, because the camera 11 and the image processing device 12 can process the four captured images in parallel, the time tq2 required to process four captured images and generate the first or second images when the work is distributed between the camera 11 and the image processing device 12 is shorter than the time tq1 required when the image processing device 12 alone processes the four captured images. In this way, by having the camera 11 perform the resolution change (compression), the processing time for generating the first or second images is shortened, so the time required to provide the monitoring information including the monitoring image to the central monitoring device 20 can also be shortened. As a result, the monitoring image can be provided to the monitor in real time.
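 The timing argument of FIG. 24 can be imitated with a small two-stage pipeline. The toy below is entirely illustrative: the stage costs, names and frame count are invented, but it shows why overlapping the camera-side resolution change with the processor-side compression and transmission makes the four-frame time tq2 shorter than the serial time tq1.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def camera_stage(frame_id: int) -> int:
    time.sleep(0.05)   # capture + in-camera resolution change (assumed cost)
    return frame_id

def processor_stage(frame_id: int) -> int:
    time.sleep(0.05)   # compression + transmission (assumed cost)
    return frame_id

def serial(num_frames: int = 4) -> float:
    start = time.perf_counter()
    for i in range(num_frames):
        camera_stage(i)
        processor_stage(i)
    return time.perf_counter() - start

def pipelined(num_frames: int = 4) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = None
        for i in range(num_frames):
            camera_stage(i)                            # stage 1 runs in this thread
            if pending is not None:
                pending.result()                       # finish the previous frame's stage 2
            pending = pool.submit(processor_stage, i)  # stage 2 overlaps the next capture
        pending.result()
    return time.perf_counter() - start

print(f"serial tq1    ~ {serial():.2f} s")     # ~0.40 s with these toy costs
print(f"pipelined tq2 ~ {pipelined():.2f} s")  # ~0.25 s: the two stages overlap
```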
 As described above, the monitoring system of the present embodiment provides the following effects.
(1) When the communication speed has dropped, the monitoring system 1 of this embodiment of the present invention causes the monitoring terminal device 10 mounted on each moving body to transmit to the central monitoring device 20 a monitoring image composed by mixing first information of a relatively high first resolution and second information of a relatively low second resolution, so that an increase in the amount of communication data can be suppressed while the transmission frequency of the information is maintained. In other words, when the communication speed has dropped, the central monitoring device 20 controls the monitoring terminal device 10 so that, instead of acquiring detailed information with a large communication data volume once, it can acquire coarse information with a small communication data volume multiple times; information can therefore be collected in real time without aggravating the congestion of the communication line that causes the drop in communication speed.
(2) The central monitoring device 20 of the monitoring system 1 of this embodiment of the present invention generates a monitoring information transmission command for each monitoring direction referenced to the moving body V. For example, when the monitor intends to monitor the traveling direction (forward direction) of the moving body V, the camera 11a installed at the front of the moving body V can be identified and the first and second images can be generated on the basis of the image captured by this camera 11a. This makes it possible to reduce the amount of information required for communication between the central monitoring device 20 and the monitoring terminal device 10 by adjusting the resolution, while still acquiring a monitoring image that matches the monitor's intention.
(3) When a specific gaze direction that the monitor wants to observe in the surroundings of the moving body V is input, the central monitoring device 20 of the monitoring system 1 of this embodiment of the present invention generates a composition condition that produces a first image of the relatively high first resolution from the image captured by the camera 11 that images the gaze direction and a second image of the relatively low second resolution from the images captured by the cameras 11 that image the other directions. Monitoring images covering all directions are therefore obtained evenly, while weighting the resolution of the gaze direction against that of the other directions reduces the amount of information required for communication between the central monitoring device 20 and the monitoring terminal device 10 (a minimal per-camera sketch of this assignment is given after this list of effects).
(4) When the vehicle speed of the moving body V is at or above the predetermined threshold, the central monitoring device 20 of the monitoring system 1 of this embodiment of the present invention increases the proportion of the relatively low second-resolution second images included in the monitoring image. By weighting the proportion of second images differently between the case where the vehicle speed is at or above the predetermined threshold and the case where it is below it, the amount of information required for communication between the central monitoring device 20 and the monitoring terminal device 10 is reduced, while a monitoring image of the vehicle surroundings, which changes greatly with travel speed, can be provided in real time.
(5) The monitoring terminal device 10 of the monitoring system 1 of this embodiment of the present invention can cause each camera 11, which has a resolution setting function for imaging the vehicle surroundings at a predetermined resolution, to image the vehicle surroundings at the first or second resolution in accordance with the composition condition of the monitoring information transmission command, and can generate a monitoring image in which the first and second images are mixed under the predetermined composition condition. Because the camera 11 and the image processing device 12 can process the four captured images in parallel, the processing time for generating the first or second images is shortened, and the time required to provide the monitoring information including the monitoring image to the central monitoring device 20 can also be shortened. As a result, the monitoring image can be provided to the monitor in real time.
(6) The monitoring terminal device 10 of the monitoring system 1 of this embodiment of the present invention can have the image processing device 12 generate the first image of the first resolution and the second image of the second resolution, and can therefore provide the operations and effects of (1) to (5) above.
(7) When the monitoring information transmission command acquired from the central monitoring device 20 does not include a predetermined composition condition, the monitoring terminal device 10 of the monitoring system 1 of this embodiment of the present invention generates a monitoring image consisting of first images of the first resolution and transmits it to the central monitoring device 20. Thus, when the communication speed is higher than the predetermined threshold, a monitoring image of relatively high resolution is transmitted, and the monitor can monitor the town on the basis of the detailed monitoring image.
(8) When the monitoring method of this embodiment of the present invention is used, the same operations as those of the monitoring system 1 are performed and the same effects can be obtained.
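 As a minimal sketch of the per-camera assignment of effect (3) above (the camera directions, their order, and the function name are assumptions, not taken from the patent):

```python
CAMERA_DIRECTIONS = ["front", "right", "rear", "left"]  # e.g. cameras 11a-11d (assumed order)

def composition_for_gaze(gaze_direction: str) -> dict:
    """Assign the first (high) resolution to the gaze direction and the second (low) to the rest."""
    return {d: ("first" if d == gaze_direction else "second") for d in CAMERA_DIRECTIONS}

print(composition_for_gaze("front"))
# {'front': 'first', 'right': 'second', 'rear': 'second', 'left': 'second'}
```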
 Note that all of the embodiments described above are provided to facilitate understanding of the present invention and are not intended to limit it. Accordingly, the elements disclosed in the above embodiments are intended to cover all design modifications and equivalents falling within the technical scope of the present invention.
 In this specification, the monitoring system 1 including the monitoring terminal device 10 and the central monitoring device 20 is described as one aspect of a monitoring system including a monitoring terminal device and a central monitoring device according to the present invention, but the present invention is not limited to this.
 Also, in this specification, the central monitoring device 20 including the control device 21 having a communication speed calculation function, a command generation function, and a command transmission function, the image processing device 22, the communication device 23, the display 24, and the input device 25 is described as one aspect of a central monitoring device including communication speed calculation means, command generation means, and command transmission means according to the present invention, but the present invention is not limited to this.
 Also, in this specification, the monitoring terminal device 10 including the control device 14 having a monitoring image generation function and a monitoring information transmission function, the cameras 11a to 11d, the image processing device 12, the communication device 13, the position detection device 15, and the vehicle speed sensor 16 is described as one aspect of a monitoring terminal device including monitoring image generation means and monitoring information transmission means according to the present invention, but the present invention is not limited to this.
 In the embodiment described above, the position information of the passenger car V and the monitoring images from the cameras 11a to 11d are acquired, but they may also be acquired in combination with monitoring images from the fixed cameras 11f installed around the town, as shown in FIG. 1. As the passenger cars V from which position information and monitoring images are acquired, it is desirable to use taxis V1 and buses that travel in predetermined areas as shown in FIG. 1, but private cars V2 and emergency vehicles V3 may also be used.
DESCRIPTION OF SYMBOLS
1 ... Monitoring system
10 ... Monitoring terminal device
11, 11a to 11d ... Camera
11f ... Fixed street camera
12 ... Image processing device
13 ... Communication device
14 ... Control device
15 ... Position detection device
16 ... Vehicle speed sensor
20 ... Central monitoring device
21 ... Central control device
22 ... Image processing device
23 ... Communication device
24 ... Display
25 ... Input device
30 ... Telecommunications network
V, V1, V2, V3 ... Moving body
M ... Projection model
S, Sa, Sb, Sc, Sd ... Projection plane
R1 to R8 ... Viewpoint
K ... Monitoring image

Claims (8)

  1.  A monitoring system comprising a central monitoring device and monitoring terminal devices each mounted on one of a plurality of moving bodies and capable of exchanging information with the central monitoring device via wireless communication, wherein
     the central monitoring device comprises:
     communication speed calculation means for calculating a communication speed at which information is exchanged with the monitoring terminal device;
     command generation means for generating, when the calculated communication speed is equal to or lower than a predetermined threshold, a monitoring information transmission command that causes a monitoring image to be generated in which a first image of a relatively high first resolution and a second image of a second resolution lower than the first resolution are mixed in the transmitted information under a predetermined composition condition, and that causes monitoring information including the generated monitoring image to be transmitted to the central monitoring device; and
     command transmission means for transmitting the generated monitoring information transmission command to a selected monitoring terminal device, and
     the monitoring terminal device comprises:
     cameras mounted on each moving body and imaging different directions around the moving body;
     monitoring image generation means for generating the monitoring image in accordance with the predetermined composition condition included in the monitoring information transmission command acquired from the central monitoring device; and
     monitoring information transmission means for transmitting the monitoring information including the generated monitoring image to the central monitoring device in accordance with the monitoring information transmission command.
  2.  The monitoring system according to claim 1, wherein the command generation means of the central monitoring device generates the monitoring information transmission command for each monitoring direction referenced to the moving body.
  3.  The monitoring system according to claim 1, wherein, when a specific gaze direction that a monitor wants to observe in the surroundings of the moving body is input, the command generation means of the central monitoring device generates a monitoring information transmission command including a command to generate a monitoring image in which the first image of the first resolution generated on the basis of the image captured by the camera imaging the gaze direction and the second image of the second resolution generated on the basis of the images captured by the cameras imaging directions other than the gaze direction are mixed under a predetermined composition condition.
  4.  The monitoring system according to any one of claims 1 to 3, wherein the monitoring information includes a vehicle speed of a vehicle on which the monitoring terminal device is mounted, and
     the command generation means of the central monitoring device generates, when the vehicle speed included in the monitoring information is equal to or higher than a predetermined threshold, a monitoring information transmission command including a command to increase the proportion of the second information included in the information.
  5.  The monitoring system according to any one of claims 1 to 4, wherein each camera of the monitoring terminal device has a resolution setting function for imaging the vehicle surroundings at a predetermined resolution, and
     the monitoring image generation means, in accordance with the monitoring information transmission command, causes the camera to capture images at the first resolution included in the composition condition of the monitoring information transmission command and at the second resolution included in the composition condition, and generates the monitoring image in which the first image of the first resolution and the second image of the second resolution are mixed under the predetermined composition condition in accordance with the composition condition of the monitoring information transmission command.
  6.  The monitoring system according to any one of claims 1 to 4, wherein the monitoring image generation means of the monitoring terminal device has an image processing function for processing the captured image taken by the camera into an image of a predetermined resolution, and
     the monitoring image generation means, in accordance with the monitoring information transmission command, generates the monitoring image in which the first image of the first resolution included in the composition condition of the monitoring information transmission command and the second image of the second resolution lower than the first resolution are mixed under the predetermined composition condition.
  7.  The monitoring system according to any one of claims 1 to 6, wherein, when the monitoring information transmission command acquired from the central monitoring device does not include the predetermined composition condition, the monitoring image generation means of the monitoring terminal device generates a monitoring image of first images of the relatively high first resolution, and
     the monitoring information transmission means transmits monitoring information including the generated monitoring image to the central monitoring device at a predetermined period.
  8.  A monitoring method comprising:
     a step of generating, when a communication speed at which a central monitoring device and a monitoring terminal device mounted on a moving body exchange information is equal to or lower than a predetermined threshold, a monitoring information transmission command that causes a monitoring image to be generated in which a first image of a relatively high first resolution and a second image of a second resolution lower than the first resolution are mixed in the transmitted information under a predetermined composition condition, and that causes monitoring information including the generated monitoring image to be transmitted to the central monitoring device;
     a step of transmitting the generated monitoring information transmission command to a selected monitoring terminal device; and
     a step of acquiring, via the wireless communication, the monitoring information including the monitoring image generated by the monitoring terminal device on the basis of captured images taken by cameras provided on the moving body, in accordance with the predetermined composition condition included in the monitoring information transmission command acquired via the wireless communication.
PCT/JP2013/053277 2012-04-24 2013-02-12 Monitoring system and monitoring method WO2013161345A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012098545 2012-04-24
JP2012-098545 2012-04-24

Publications (1)

Publication Number Publication Date
WO2013161345A1 true WO2013161345A1 (en) 2013-10-31

Family

ID=49482687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/053277 WO2013161345A1 (en) 2012-04-24 2013-02-12 Monitoring system and monitoring method

Country Status (1)

Country Link
WO (1) WO2013161345A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011041311A (en) * 2003-11-18 2011-02-24 Intergraph Software Technologies Co Digital video surveillance
JP2007036615A (en) * 2005-07-26 2007-02-08 Matsushita Electric Ind Co Ltd Camera controller and monitoring system
JP2007214769A (en) * 2006-02-08 2007-08-23 Nissan Motor Co Ltd Video processor for vehicle, circumference monitor system for vehicle, and video processing method
WO2008035745A1 (en) * 2006-09-20 2008-03-27 Panasonic Corporation Monitor system, camera and video image coding method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10659991B2 (en) 2016-12-06 2020-05-19 Nissan North America, Inc. Bandwidth constrained image processing for autonomous vehicles
JP2020514850A (en) * 2016-12-06 2020-05-21 ニッサン ノース アメリカ,インク Bandwidth constrained image processing for autonomous vehicles
CN108206928A (en) * 2016-12-16 2018-06-26 无锡市蜂鸣屏蔽设备科技有限公司 A kind of shield door monitoring method
CN107333107A (en) * 2017-07-21 2017-11-07 广东美的制冷设备有限公司 Monitor image pickup method, device and its equipment

Similar Documents

Publication Publication Date Title
JP5786963B2 (en) Monitoring system
JP5648746B2 (en) Vehicle monitoring device, vehicle monitoring system, terminal device, and vehicle monitoring method
CN102387344B (en) Imaging device, imaging system and formation method
JP5811190B2 (en) Monitoring system
US20160159281A1 (en) Vehicle and control method thereof
WO2016194039A1 (en) Information presentation system
JP5064201B2 (en) Image display system and camera output control method
JP4048292B2 (en) Traffic information display system, captured image transmission device, in-vehicle information display device and storage medium
WO2013111494A1 (en) Monitoring system
WO2013161345A1 (en) Monitoring system and monitoring method
JP6260174B2 (en) Surveillance image presentation system
WO2013111491A1 (en) Monitoring system
WO2013111492A1 (en) Monitoring system
JP5790788B2 (en) Monitoring system
WO2013111479A1 (en) Monitoring system
JP5796638B2 (en) Monitoring system
WO2013125301A1 (en) Surveillance system
WO2013111493A1 (en) Monitoring system
CN216331763U (en) Intelligent automobile electronic rearview mirror equipment of integrated panorama function and BSD function
CN103139539A (en) Panoramic monitoring system
JP5812105B2 (en) Monitoring system
WO2013129000A1 (en) Monitoring system
JP4696825B2 (en) Blind spot image display device for vehicles
WO2013136894A1 (en) Monitoring system and monitoring method
JP4407246B2 (en) Vehicle periphery monitoring system and vehicle periphery monitoring method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13781098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13781098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP