WO2013125301A1 - Surveillance system - Google Patents

Surveillance system

Info

Publication number
WO2013125301A1
WO2013125301A1 (PCT/JP2013/051753)
Authority
WO
WIPO (PCT)
Prior art keywords
monitoring
information
image
terminal device
tracking target
Prior art date
Application number
PCT/JP2013/051753
Other languages
French (fr)
Japanese (ja)
Inventor
Akihiko Kozai
Teruhisa Takano
Masashi Yasuhara
Original Assignee
Nissan Motor Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co., Ltd.
Publication of WO2013125301A1 publication Critical patent/WO2013125301A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/03Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for supply of electrical power to vehicle subsystems or for
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q9/00Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2209/00Arrangements in telecontrol or telemetry systems
    • H04Q2209/10Arrangements in telecontrol or telemetry systems using a centralized architecture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2209/00Arrangements in telecontrol or telemetry systems
    • H04Q2209/80Arrangements in the sub-station, i.e. sensing device
    • H04Q2209/82Arrangements in the sub-station, i.e. sensing device where the sensing device takes the initiative of sending data
    • H04Q2209/823Arrangements in the sub-station, i.e. sensing device where the sensing device takes the initiative of sending data where the data is sent when the measured values exceed a threshold, e.g. sending an alarm

Definitions

  • the present invention relates to a monitoring system.
  • This application claims priority from Japanese Patent Application No. 2012-39328, filed on Feb. 24, 2012. For the designated states that permit incorporation by reference, the contents of that application are incorporated into the present application by reference and form part of its description.
  • A security device is known that detects the occurrence of abnormalities by installing a plurality of security camera devices in shopping streets, at store entrances, at home entrances, and on other streets, and monitoring the surrounding images captured by those security camera devices (Patent Document 1).
  • However, because the imaging area of a fixed security camera device is fixed, when a movable tracking target such as a person or a vehicle moves out of that imaging area, the tracking target can no longer be monitored continuously.
  • An object of the present invention is to provide a monitoring system capable of continuously monitoring a randomly moving tracking target by using cameras mounted on moving bodies.
  • The present invention achieves the above object by selecting, based on a monitoring point where the tracking target is predicted to exist, the monitoring terminal device of a mobile body capable of imaging the monitoring area, and outputting to the selected monitoring terminal device an image transmission command for transmitting monitoring information including image information.
  • According to the present invention, an image of the tracking target can be acquired from a monitoring terminal device mounted on a mobile body capable of imaging the monitoring area based on the monitoring point where the tracking target is predicted to exist, so the target can be monitored continuously. As a result, the central monitoring device can continuously monitor a randomly moving tracking target by using monitoring terminal devices mounted on randomly moving mobile bodies.
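The select-and-command mechanism summarized above can be sketched as follows. This is only an illustrative reading of the text; the class and method names (`CentralMonitor`, `select_terminals`, `SEND_IMAGE`) and the area radius are assumptions, not terminology from the patent.

```python
import math

class CentralMonitor:
    """Minimal sketch of the central monitoring device's selection loop."""

    def __init__(self, terminals, radius_m=500.0):
        self.terminals = terminals    # list of (terminal_id, (x, y)) positions in metres
        self.radius_m = radius_m      # monitoring-area radius around the monitoring point

    def select_terminals(self, monitoring_point):
        """Select terminals whose vehicles lie inside the monitoring area."""
        px, py = monitoring_point
        return [tid for tid, (x, y) in self.terminals
                if math.hypot(x - px, y - py) <= self.radius_m]

    def issue_image_commands(self, monitoring_point):
        """Build an image transmission command for each selected terminal."""
        return [{"terminal": tid, "command": "SEND_IMAGE", "point": monitoring_point}
                for tid in self.select_terminals(monitoring_point)]

center = CentralMonitor([("V1", (100.0, 50.0)), ("V2", (2000.0, 0.0))])
cmds = center.issue_image_commands((0.0, 0.0))   # only V1 is within 500 m
```

Only the terminal near the monitoring point receives a command, which matches the idea of collecting image information exactly where the target is predicted to be.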
  • FIG. 1 is a schematic diagram showing a monitoring system according to one embodiment of the present invention. FIG. 2 is a block diagram showing the monitoring system of FIG. 1. A further figure is a perspective view showing the arrangement of the on-vehicle cameras.
  • The embodiment shown below applies the surveillance system according to the present invention to a surveillance system 1 that centrally monitors security in a town from an authority such as a police station or fire department. That is, position information, surrounding image information, and time information are acquired from each of a plurality of moving bodies at a predetermined timing and transmitted via wireless communication to a central monitoring device installed at the authority, which displays the position information on map information and displays the image information and time information on a display as needed. Accordingly, as shown in FIG. 1, the monitoring system 1 of this example includes monitoring terminal devices 10 for acquiring monitoring information such as position information and image information, and a central monitoring device 20 that acquires and processes the monitoring information via the telecommunication network 30.
  • FIG. 2 is a block diagram showing a specific configuration of the monitoring terminal device 10 and the central monitoring device 20. The monitoring system of the present embodiment continuously acquires monitoring information regarding a movable specific tracking target.
  • The monitoring terminal device 10 is a terminal device mounted on each of a plurality of mobile bodies V. It has a position detection function for detecting the position information of its mobile body V; an image generation function that captures the surroundings of the mobile body with cameras to generate image information; a time detection function; an information acquisition control function that acquires the position information, image information, and time information at a predetermined timing; a monitoring information generation function for generating monitoring information including the position information and/or image information; a communication function for outputting the position information, image information, and time information to the central monitoring device 20 and acquiring commands from the central monitoring device 20; and a function of reporting the occurrence of an abnormality.
  • The monitoring information generation function can also generate monitoring information including sighting information, such as having witnessed a person fleeing from the site of an abnormality, or having witnessed the tracking target indicated by the central monitoring device 20. This sighting information is used when setting the surveillance area.
  • The monitoring terminal device 10 can exchange information with the vehicle controller 17, which centrally controls the vehicle speed sensor 18, the navigation device 19, and other in-vehicle electronic devices.
  • The monitoring terminal device 10 can transmit the vehicle speed information acquired from the vehicle speed sensor 18 via the vehicle controller 17 to the central monitoring device 20 as part of the monitoring information.
  • The time information may be omitted, since it is mainly used for post-hoc analysis of events.
  • The mobile body V on which the monitoring terminal device 10 is mounted is not particularly limited as long as it travels through the target monitoring area, and includes mobile bodies such as passenger cars, two-wheeled vehicles, industrial vehicles, and trams.
  • Taxis and route buses V1, private passenger cars V2, and emergency passenger cars V3 are included; taxis and route buses V1, which constantly travel throughout a predetermined area along varied routes, are particularly preferable.
  • FIG. 1 illustrates a taxi V1, a private passenger car V2, and an emergency passenger car V3 such as a police car, fire engine, or ambulance; these are generically referred to as mobile bodies V or passenger cars V.
  • Each mobile body V is equipped with a plurality of on-vehicle cameras 11a to 11e (hereinafter collectively referred to as cameras 11), an image processing device 12, a communication device 13, an on-vehicle control device 14, a position detection device 15, and a notification button 16.
  • The camera 11 is configured of a CCD camera or the like, captures images of the periphery of the mobile body V, and outputs an imaging signal to the image processing device 12.
  • The image processing device 12 reads the imaging signal from the camera 11 and performs image processing to generate image information. Details of this image processing will be described later.
  • The position detection device 15 includes a GPS device and its correction device, detects the current position of the mobile body V, and outputs it to the on-vehicle control device 14.
  • The notification button 16 is a manual input button installed in the vehicle compartment, used by a driver or passenger to input information (abnormality information) reporting an abnormality upon discovering an incident (an incident related to security, such as an accident, fire, or crime). This information can include the position information of the mobile body V that reported the abnormality.
  • The on-vehicle control device 14 includes a CPU, a ROM, and a RAM. When the notification button 16 is pressed, it controls the image processing device 12, the communication device 13, and the position detection device 15 so that the image information, the position information of the mobile body V detected by the position detection device 15, and the time information from a clock built into the CPU are output to the central monitoring device 20 via the communication device 13 and the telecommunication network 30.
  • The on-vehicle control device 14 also acquires commands requesting information, such as an image transmission command, from the central monitoring device 20 via the telecommunication network 30 and the communication device 13, and controls the image processing device 12, the communication device 13, and the position detection device 15 accordingly.
  • Under this control, the communication device 13 outputs monitoring information, including the image information generated by the image processing device 12, the position information of the mobile body V detected by the position detection device 15, and the time information from the clock built into the CPU, to the central monitoring device 20 via the telecommunication network 30 and, where applicable, to the monitoring terminal devices 10 mounted on other mobile bodies V.
  • The in-vehicle control device 14 can store monitoring information, including image information, position information, time information, and the like, for at least a predetermined time.
  • The in-vehicle control device 14 extracts the subjects included in the image information, further extracts the features of each subject, and stores the subject information and feature information in association with each piece of image information.
  • When the on-vehicle control device 14 acquires the features of a tracking target from the central monitoring device 20, it can search the stored monitoring information for image information containing a subject having those features. When such image information is found, the in-vehicle control device 14 can include report information indicating the presence of the tracking target in the monitoring information and output it.
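The bounded storage and feature search just described can be illustrated with a small sketch. The class name, the representation of features as string sets, and the retention window are all hypothetical choices for illustration; the patent does not specify a data structure.

```python
from collections import deque

class OnVehicleStore:
    """Sketch of the in-vehicle controller's bounded monitoring-information store.

    Each record holds a timestamp plus the features extracted from the image's
    subjects; records older than `retention_s` are discarded as new ones arrive.
    """

    def __init__(self, retention_s=600.0):
        self.retention_s = retention_s
        self.records = deque()    # (timestamp, feature set) pairs, oldest first

    def add(self, timestamp, features):
        self.records.append((timestamp, features))
        # Drop records that have aged out of the retention window.
        while self.records and timestamp - self.records[0][0] > self.retention_s:
            self.records.popleft()

    def search(self, target_features):
        """Return timestamps of records whose features include all target features."""
        wanted = set(target_features)
        return [t for t, feats in self.records if wanted <= set(feats)]

store = OnVehicleStore(retention_s=600.0)
store.add(0.0, {"red clothes", "male"})
store.add(700.0, {"red clothes", "tall", "male"})   # first record has now expired
hits = store.search({"red clothes", "male"})        # matches only the second record
```

A hit here would correspond to attaching report information to the outgoing monitoring information.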
  • When the subject is a human, the in-vehicle control device 14 extracts features for measuring facial similarity, particularly of the human head and face, and stores the subject information and feature information in association with each piece of image information.
  • Whether or not the subject is a human can be determined from its size, shape, movement of the limbs, and so on.
  • The head can be identified from the subject's position (upper part), color, shape, and so on.
  • The face can be extracted from color and from the arrangement of feature parts such as the eyes, nose, mouth, and eyebrows.
  • As facial features, the size and positional relationship of each feature part such as the eyes, nose, mouth, and eyebrows can be extracted.
  • When the in-vehicle control device 14 acquires the image information or facial features of the face to be tracked from the central monitoring device 20, it can search the stored monitoring information for image information in which the similarity to the tracked face is a predetermined value or more.
  • When image information including a face whose similarity is equal to or greater than the predetermined value is found, notification information indicating the presence of the tracking target can be included in the monitoring information and output.
  • The similarity of faces can be determined quantitatively based on the contour of the face and the positional relationships of the eyes, eyebrows, nose, and mouth constituting the face.
  • The method of determining facial similarity is not particularly limited, and techniques known at the time of filing can be applied.
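One toy way to turn the positional relationships of facial parts into a quantitative score is to compare normalized landmark coordinates. This is only an illustration of such a positional comparison under assumed conventions (landmarks normalized to the face bounding box); the patent deliberately leaves the similarity method open.

```python
import math

def face_similarity(landmarks_a, landmarks_b):
    """Toy similarity score from facial landmark positions.

    Each argument maps part names (eyes, nose, mouth, ...) to (x, y)
    coordinates normalised to the face bounding box.  The score is 1.0 for
    identical layouts and falls toward 0 as the layouts diverge.
    """
    keys = sorted(set(landmarks_a) & set(landmarks_b))
    if not keys:
        return 0.0
    mean_dist = sum(math.dist(landmarks_a[k], landmarks_b[k]) for k in keys) / len(keys)
    return 1.0 / (1.0 + mean_dist)   # mean landmark displacement -> similarity

face = {"left_eye": (0.3, 0.4), "right_eye": (0.7, 0.4),
        "nose": (0.5, 0.6), "mouth": (0.5, 0.8)}
other = {"left_eye": (0.2, 0.3), "right_eye": (0.8, 0.4),
         "nose": (0.5, 0.7), "mouth": (0.5, 0.9)}
same_score = face_similarity(face, face)     # identical layout -> 1.0
other_score = face_similarity(face, other)   # shifted layout -> lower score
```

A terminal would then compare such a score against the predetermined threshold before reporting a sighting.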
  • The features of the face to be tracked may be determined on the side of the central monitoring device 20 or on the side of the monitoring terminal device 10. When the central monitoring device 20 obtains the facial features, they are included in an image transmission command and sent to the monitoring terminal device 10; otherwise, the central monitoring device 20 sends image information of the face to the monitoring terminal device 10.
  • The communication device 13 is communication means capable of wireless communication and exchanges information with the communication device 23 of the central monitoring device 20 via the telecommunication network 30.
  • When the telecommunication network 30 is a commercial telephone network, general-purpose mobile telephone communication devices can be used; when the telecommunication network 30 is a dedicated telecommunication network of the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used.
  • Instead of the telecommunication network 30, a wireless LAN, WiFi (registered trademark), WiMAX (registered trademark), Bluetooth (registered trademark), a dedicated wireless channel, or the like can be used.
  • The central monitoring device 20 has an information acquisition function for acquiring the position information and image information output from the monitoring terminal devices 10 described above; a storage function for at least temporarily storing the acquired monitoring information in the database 26 in association with the position information; and a display control function for displaying map information from the map database, displaying the received position information on the map information, and displaying the received image information on the display 24.
  • The central monitoring device 20 also has a monitoring point setting function that, referring to the monitoring information in the database 26, specifies the tracking target to be tracked by the monitoring person and sets a monitoring point where the tracking target is predicted to exist; a selection function for selecting the monitoring terminal devices 10 of the passenger cars V belonging to the monitoring area based on the set monitoring point; and a command output function for outputting to the selected monitoring terminal devices 10 an image transmission command for transmitting monitoring information including at least image information.
  • The monitoring point setting function of the present embodiment can also set the monitoring area based on the monitoring information acquired from each passenger car V.
  • The monitoring point setting function may set the monitoring area directly from the position information in the monitoring information, without first setting a monitoring point. That is, the monitoring area may be defined in association with the position information of each piece of monitoring information, or in association with the monitoring point obtained from the monitoring information.
  • The monitoring point setting function of the present embodiment repeats the setting of the monitoring point at a predetermined cycle, setting a monitoring point or monitoring area that follows the moved position of the tracking target.
  • The selection function of this embodiment selects passenger cars V capable of imaging the monitoring area, which is sequentially updated with reference to new monitoring information or monitoring points.
  • Specifically, passenger cars V belonging to (that is, present in) the monitoring area are selected.
  • The selection function of the present embodiment selects passenger cars V capable of imaging the monitoring point, based on the positional relationship (distance and direction) between the monitoring point and the current position of each passenger car V, and on the traveling direction (imaging direction) of each passenger car V.
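The distance-and-direction test just described can be sketched as a simple geometric check. The maximum range and field-of-view half-angle are illustrative assumptions; the patent only states that distance, direction, and traveling direction are considered.

```python
import math

def can_image(point, vehicle_pos, heading_deg, max_range_m=200.0, half_fov_deg=60.0):
    """Decide whether a vehicle's forward camera can image a monitoring point,
    from the distance to the point and the angle between the vehicle's travel
    direction (taken as the imaging direction) and the bearing to the point.
    """
    dx, dy = point[0] - vehicle_pos[0], point[1] - vehicle_pos[1]
    if math.hypot(dx, dy) > max_range_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180]
    return off_axis <= half_fov_deg

# Vehicle at the origin heading east (0 degrees):
front = can_image((100.0, 0.0), (0.0, 0.0), 0.0)    # point ahead -> imageable
behind = can_image((-100.0, 0.0), (0.0, 0.0), 0.0)  # point behind -> not imageable
```

With multiple cameras per vehicle (as in this embodiment), the check would be repeated per camera with its own mounting direction.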
  • In this way, each monitoring area is imaged by the monitoring terminal devices 10 mounted on a plurality of different passenger cars V capable of imaging it, and monitoring information on the tracking target is collected cooperatively, so the movement of the tracking target can be captured continuously.
  • The central control device 21 includes a CPU, a ROM, and a RAM, and controls the image processing device 22, the communication device 23, and the display 24 to receive the position information, image information, and time information transmitted from the monitoring terminal devices 10, perform image processing as necessary, and display the results on the display 24.
  • The image processing device 22 has a map database, displays map information from the map database on the display 24, and superimposes on it the position information detected by the position detection devices 15 of the monitoring terminal devices 10. It also performs image processing to display, on the display 24, the image information captured by the on-vehicle cameras 11 of the monitoring terminal devices 10 and processed by the image processing devices 12.
  • The display 24 can be configured by, for example, one liquid crystal display device of a size capable of showing two window screens on one screen, or two liquid crystal display devices each showing one window screen. One window screen displays the position information of each mobile body V superimposed on the map information (see FIG. 1), while the other displays the image information captured by the on-vehicle cameras 11.
  • The input device 25 includes a keyboard or a mouse and is used to input information acquisition commands to be output to a desired mobile body V, and to input processing commands for the various information shown on the display 24.
  • The monitoring point serving as the reference of the monitoring area can also be input by the monitoring person via the input device 25.
  • The observer can designate a monitoring point by clicking (selecting) the icon of a point superimposed on the map information, and a monitoring area can be set based on this monitoring point.
  • The monitoring point in the present embodiment is a point at which the tracking target is predicted to exist.
  • The method of setting this monitoring point is not limited to the above; a monitoring point where the tracking target is predicted to exist may also be set based on reports from the monitoring terminal devices 10.
  • For example, the central control device 21 acquires monitoring information, output from a monitoring terminal device 10, that includes sighting information reporting that the tracking target was witnessed, and can set the monitoring point based on the position information of the monitoring terminal device 10 that output the sighting information.
  • The central control device 21 can also sequentially acquire monitoring information including image information at a predetermined cycle, calculate the movement direction of the tracking target specified by the observer from the image information capturing the surroundings of the mobile body in time series, and set the monitoring point on the side of the calculated movement direction. From image information obtained by imaging the surroundings of the mobile body at a predetermined cycle, it is possible to determine in which direction the tracking target has moved (fled), so the monitoring point can be set accurately.
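The direction-and-speed based prediction can be illustrated with a constant-velocity extrapolation over timed sightings. This is a simple sketch of the idea under an assumed constant-velocity model, not the patent's specific procedure.

```python
def predict_monitoring_point(sightings, lead_time_s):
    """Extrapolate the tracking target's position from timed sightings.

    `sightings` is a chronological list of (t, x, y) tuples; the velocity
    between the last two sightings is projected `lead_time_s` seconds ahead
    to give the predicted monitoring point.
    """
    (t0, x0, y0), (t1, x1, y1) = sightings[-2], sightings[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * lead_time_s, y1 + vy * lead_time_s)

# Seen at (0, 0), then 10 s later at (50, 0): moving east at 5 m/s, so
# 20 s further on the monitoring point is placed at (150, 0).
point = predict_monitoring_point([(0.0, 0.0, 0.0), (10.0, 50.0, 0.0)], 20.0)
```

Repeating this at each monitoring cycle yields the periodically updated monitoring point described in the text.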
  • Likewise, the central control device 21 can acquire monitoring information including image information over time at a predetermined cycle, calculate the movement speed of the specified tracking target from the image information capturing the surroundings of the mobile body in time series, and set the monitoring point according to the calculated movement speed.
  • From such image information, it is possible to determine the movement speed of the tracking target, for example whether it is escaping on foot or by vehicle, and thus to accurately set a monitoring point where the tracking target is predicted to exist at the next monitoring timing.
  • This method of setting the monitoring point in consideration of the movement speed can be used together with the method described above that considers the movement direction.
  • The monitoring point can be set more accurately by considering both the movement direction and the movement speed.
  • When the first set monitoring point is on a route with a restricted movement direction, such as a highway, a bridge, or a one-way road, the monitoring point can also be set considering only the movement speed.
  • The road information used to determine whether the movement direction is restricted may be input by the supervisor, or road information included in the map information may be read out in advance.
  • A monitoring timing can be set a predetermined time after the point at which the monitoring information capturing the tracking target was generated or acquired by the monitoring terminal device 10, and a monitoring point where the tracking target is predicted to exist at that monitoring timing can then be set.
  • The central control device 21 sets a predetermined monitoring area based on the set monitoring point and selects the monitoring terminal devices 10 of mobile bodies V capable of imaging that monitoring area.
  • The method of setting the monitoring area is not particularly limited; for example, an area within a predetermined distance of the monitoring point can be set as the monitoring area.
  • As described above, the monitoring point can be set in consideration of the speed of the tracking target, and the speed of the tracking target may likewise be considered when setting the monitoring area. Specifically, when the movement speed of the tracking target is high, the monitoring area can be set wide, and when it is low, the monitoring area can be set narrow.
  • The speed of the tracking target may also be defined as the speed relative to the passenger car V on which the monitoring terminal device 10 is mounted.
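The speed-dependent widening of the monitoring area can be captured in one line. The base radius, time horizon, and cap below are illustrative assumptions; the patent only states that a faster target warrants a wider area.

```python
def monitoring_radius(speed_mps, base_m=100.0, horizon_s=30.0, cap_m=2000.0):
    """Widen the monitoring area with the target's speed: a faster target can
    reach farther in the time remaining until the next monitoring timing.
    """
    return min(base_m + speed_mps * horizon_s, cap_m)

on_foot = monitoring_radius(1.5)    # target escaping on foot -> narrow area
by_car = monitoring_radius(20.0)    # target escaping by car -> wide area
```

The cap keeps the area bounded so that the number of selected vehicles stays manageable.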
  • The central control device 21 outputs an image transmission command to the monitoring terminal devices 10 of the selected mobile bodies V, instructing them to transmit monitoring information including image information.
  • In response, the on-vehicle control device 14 of each monitoring terminal device 10 sends out monitoring information including image information of the monitoring area.
  • Because the monitoring system 1 according to the present embodiment requests monitoring information including image information from the monitoring terminal devices 10 capable of imaging the monitoring area based on the monitoring point where the tracking target is predicted to exist, it can effectively collect image information of the monitoring area where the randomly moving tracking target is predicted to be. That is, wherever the tracking target is, a movable monitoring terminal device 10 can capture it.
  • The central control device 21 can also send out an image transmission command that conveys the features of the tracking target.
  • For this purpose, the central control device 21 has a feature extraction function for extracting the features of the tracking target from image information acquired in the past.
  • The features of the tracking target include its color, size, movement speed, number, posture, and the like.
  • When the object to be tracked is a human, the color of the clothes, height, means of escape (on foot or by car), number of people, hairstyle, gender, and the like can serve as the features to be tracked.
  • The central control device 21 outputs, to the selected monitoring terminal devices 10, an image transmission command for transmitting monitoring information on subjects having the extracted features.
  • This allows the occupants of the mobile bodies V and the monitoring terminal devices 10 to recognize the specific tracking target, improving monitoring accuracy.
  • For example, when the feature description "male, 190 cm tall, in red clothes" is sent to the mobile bodies V capable of imaging the monitoring area (for example, the mobile bodies V present in the monitoring area), the occupants of those mobile bodies V can be expected to pay closer attention to their surroundings. As a result, sighting information on the tracking target can be collected and the tracking target is less likely to be lost.
  • The in-vehicle control device 14 of a monitoring terminal device 10 that has received the image transmission command refers to the feature-analysis data for image information captured in the past and searches for subjects having the features included in the command.
  • When image information of a subject having the features of the tracking target is found, the on-vehicle control device 14 includes notification information indicating the presence of the tracking target in the monitoring information and outputs it, so the location of the randomly moving tracking target can be identified quickly and its image information collected effectively. As a result, even if the tracking target has been lost from sight, its trail can be reliably traced by searching for subjects whose features match those of the tracking target.
  • The central control device 21 can also send out an image transmission command conveying the features of a face. Specifically, the central control device 21 has a feature extraction function for extracting the features of the tracking target from image information acquired in the past, and a feature evaluation function for determining, based on those features, whether the tracking target is a human. Whether a subject is human can be determined using methods known at the time of filing, based on features such as the size, shape, and limb movements of the subject included in the image information.
  • When the tracking target is determined to be human, the central control device 21 outputs to the monitoring terminal devices 10 of the selected mobile bodies V an image transmission command for transmitting monitoring information, including the image information of the tracking target's face acquired from past image information. Known feature extraction techniques can be used to identify the face portion of a human tracking target.
  • The in-vehicle control device 14 of a monitoring terminal device 10 that has received this image transmission command refers to the data produced by feature analysis for similarity determination on the face images included in previously captured image information, and searches for subjects whose features match the face in the image information included in the command.
  • When image information of a subject similar to the face of the tracking target (with a similarity equal to or greater than the predetermined value) is found, the in-vehicle control device 14 includes notification information in the monitoring information and outputs it, so the position of the randomly moving tracking target can be found quickly and its image information collected effectively.
  • When image information whose similarity to the tracking target's face is equal to or greater than the predetermined value is found, the monitoring terminal device 10 can also output the retrieved image information, that is, the image information of the face to be tracked, to monitoring terminal devices 10 mounted on other mobile bodies V via wireless inter-vehicle communication. As a result, information on the tracking target person can be shared between the monitoring terminal device 10 capable of imaging the monitoring area and the monitoring terminal devices 10 located in its vicinity, strengthening the search for the tracking target.
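The inter-vehicle sharing step can be sketched as a threshold-gated broadcast. The message format, threshold value, and identifiers below are hypothetical; the patent only states that the retrieved face image information is forwarded to nearby terminals over inter-vehicle communication.

```python
def share_with_neighbors(similarity, face_image_id, neighbor_ids, threshold=0.9):
    """When a stored face matches the tracking target (similarity at or above
    the threshold), forward a reference to its image to nearby terminals.
    """
    if similarity < threshold:
        return []
    return [{"to": nid, "face_image": face_image_id} for nid in neighbor_ids]

msgs = share_with_neighbors(0.95, "face-042", ["V2", "V3"])  # match -> two messages
none = share_with_neighbors(0.50, "face-042", ["V2", "V3"])  # no match -> nothing sent
```

Gating on the threshold keeps low-confidence matches from flooding the inter-vehicle channel.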
  • The monitoring terminal devices 10 described above may also include a fixed type of monitoring terminal device 10, mounted at a predetermined position and having an image generation function for capturing images and generating image information.
  • This allows existing fixed cameras to be used effectively and image information from different viewpoints to be acquired.
  • The position information of a fixed monitoring terminal device 10 can be stored in advance.
  • the communication device 23 is a communication means capable of wireless communication, and exchanges information with the communication device 13 of the monitoring terminal device 10 via the telecommunication network 30.
  • When the telecommunication network 30 is a commercial telephone network, general-purpose mobile telephone communication devices can be used, and when the telecommunication network 30 is a dedicated telecommunication network of the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used.
  • the cameras 11a to 11e are configured using imaging devices such as CCDs, and the four on-vehicle cameras 11a to 11d are respectively installed at different positions outside the passenger car V, and respectively capture four directions around the vehicle.
  • One in-vehicle camera 11e is installed in the passenger compartment of the passenger car V and captures an image of the passenger compartment.
  • The camera 11 of the present embodiment is provided with a zoom function for imaging a subject in an enlarged manner, and can arbitrarily change the focal length or the imaging magnification in accordance with a control command.
  • The on-vehicle camera 11a, installed at a predetermined position at the front of the passenger car V such as the front grille, captures objects and the road surface existing in the space SP1 in front of the passenger car V (front view).
  • The on-vehicle camera 11e is installed, for example, on the ceiling of the passenger compartment and, as shown in the figure, captures an image of the indoor area SP5, for example for crime prevention or for reporting a crime.
  • FIG. 4 is a view of the arrangement of the on-vehicle cameras 11a to 11e viewed from above the passenger car V.
  • The on-vehicle camera 11a for capturing the area SP1, the on-vehicle camera 11b for capturing the area SP2, the on-vehicle camera 11c for capturing the area SP3, and the on-vehicle camera 11d for capturing the area SP4 are installed counterclockwise (left turn) or clockwise (right turn) along the outer periphery VE of the body of the passenger car V.
  • That is, when viewed counterclockwise, the on-vehicle camera 11b is installed to the left of the on-vehicle camera 11a, the on-vehicle camera 11c is installed to the left of the on-vehicle camera 11b, the on-vehicle camera 11d is installed to the left of the on-vehicle camera 11c, and the on-vehicle camera 11a is installed to the left of the on-vehicle camera 11d.
  • Conversely, when viewed clockwise, the on-vehicle camera 11d is installed to the right of the on-vehicle camera 11a, the on-vehicle camera 11c is installed to the right of the on-vehicle camera 11d, the on-vehicle camera 11b is installed to the right of the on-vehicle camera 11c, and the on-vehicle camera 11a is installed to the right of the on-vehicle camera 11b.
  • FIG. 5A shows an example of an image GSP1 obtained by imaging the area SP1 by the front on-vehicle camera 11a
  • FIG. 5B shows an example of an image GSP2 obtained by imaging the area SP2 by the left-side on-vehicle camera 11b
  • FIG. 5C shows an example of an image GSP3 obtained by imaging the area SP3 by the rear on-vehicle camera 11c, and FIG. 5D shows an example of an image GSP4 obtained by imaging the area SP4 by the right-side on-vehicle camera 11d.
  • FIG. 5E shows an example of an image obtained by imaging the indoor area SP5 by the in-vehicle camera 11e.
  • In the present embodiment, the size of each image is 480 pixels high by 640 pixels wide. The image size is not particularly limited as long as it can be reproduced by a general terminal device.
  • the number and position of the on-vehicle cameras 11 can be appropriately determined according to the size, shape, detection area setting method, and the like of the passenger car V.
  • the plurality of on-vehicle cameras 11 described above are provided with identifiers according to their positions, and the on-vehicle control device 14 can identify each of the on-vehicle cameras 11 based on the respective identifiers. Further, the on-vehicle control device 14 can transmit an imaging instruction and other instructions to the specific on-vehicle camera 11 by attaching an identifier to the instruction signal.
  • The in-vehicle control device 14 controls the image processing device 12 to acquire the imaging signals captured by the on-vehicle cameras 11, and the image processing device 12 processes the imaging signal from each on-vehicle camera 11 and converts it into the image information shown in FIGS. 5A to 5E. Then, the on-vehicle control device 14 generates a monitoring image based on the four pieces of image information shown in FIGS. 5A to 5D (image generation function), associates with this monitoring image the mapping information for projecting it onto the projection plane set on the side surface of a columnar projection model (mapping information addition function), and outputs it to the central monitoring device 20.
  • the image generation function and the mapping information addition function will be described in detail.
  • The processing of generating the monitoring image based on the four pieces of image information obtained by imaging the surroundings of the passenger car V and associating the mapping information with it can be executed by the monitoring terminal device 10, as in this example, or by the central monitoring device 20.
  • In the latter case, the four pieces of image information obtained by imaging the periphery of the passenger car V are transmitted as they are from the monitoring terminal device 10 to the central monitoring device 20, and the image processing device 22 and the central control device 21 of the central monitoring device 20 may generate the monitoring image, associate the mapping information, and perform the projection conversion.
  • the on-vehicle control device 14 of the monitoring terminal device 10 controls the image processing device 12 to obtain imaging signals of the on-vehicle cameras 11a to 11e, respectively.
  • One monitoring image is generated such that the image information of the on-vehicle cameras 11a to 11d installed in the counterclockwise direction is arranged in the installation order of the on-vehicle cameras 11a to 11d.
  • Since the four on-vehicle cameras 11a to 11d are installed counterclockwise along the outer periphery VE of the body of the passenger car V in the order of the cameras 11a, 11b, 11c, and 11d, the on-vehicle control device 14 connects the four images captured by the on-vehicle cameras 11a to 11d horizontally in the order of installation (11a → 11b → 11c → 11d) to generate a single monitoring image. In the monitoring image of the present embodiment, the images are arranged such that the ground contact surface (road surface) of the passenger car V is at the bottom, and the images are connected to each other along their sides in the height (vertical) direction.
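The image generation function described above, in which the four captured images are joined side by side in camera-installation order, can be sketched as follows. The use of NumPy arrays and the function name are illustrative assumptions; only the concatenation order comes from the embodiment.

```python
import numpy as np

def generate_monitoring_image(gsp1, gsp2, gsp3, gsp4):
    """Concatenate the four camera images horizontally in the
    installation order 11a -> 11b -> 11c -> 11d (road surface at the
    bottom of each image), yielding one monitoring image K."""
    images = [gsp1, gsp2, gsp3, gsp4]
    # All images must share the same height to be joined side by side.
    assert len({img.shape[0] for img in images}) == 1
    return np.hstack(images)
```

With four 480 x 640 images, the resulting monitoring image K is 480 pixels high and 2560 pixels wide.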
  • FIG. 6 is a diagram showing an example of the monitoring image K.
  • In the monitoring image K of this embodiment, the captured image GSP1 obtained by imaging the area SP1 by the front on-vehicle camera 11a, the captured image GSP2 obtained by imaging the area SP2 by the left-side on-vehicle camera 11b, the captured image GSP3 obtained by imaging the area SP3 by the rear on-vehicle camera 11c, and the captured image GSP4 obtained by imaging the area SP4 by the right-side on-vehicle camera 11d are arranged in this order along the direction P from left to right in the drawing.
  • These four images are arranged as a series of images.
  • By displaying the monitoring image K generated in this way on the display 24, in order from the left end to the right, with the image portion corresponding to the road surface (the ground contact surface of the vehicle) at the bottom, the observer can view the surroundings of the vehicle V as if looking around it counterclockwise.
  • When one monitoring image K is generated, four images whose imaging timings by the on-vehicle cameras 11a to 11d are substantially simultaneous are used.
  • As a result, the information included in the monitoring image K is synchronized, so the situation around the vehicle at a given timing can be represented accurately.
  • The monitoring images K, each generated from captured images whose imaging timings are substantially simultaneous, may also be stored over time, and a moving-image monitoring image K containing a plurality of monitoring images K per predetermined unit time may be generated. By generating the moving-image monitoring image K on the basis of images with the same imaging timing, changes in the situation around the vehicle can be expressed accurately.
  • Since the on-vehicle control device 14 of the present embodiment generates one monitoring image K from a plurality of images, moving images of different imaging directions can be reproduced simultaneously regardless of the functions of the central monitoring device 20. That is, by reproducing the monitoring image K continuously (moving-image reproduction), the four images included in the monitoring image K are reproduced simultaneously and continuously, and changes in the state of areas in different directions can be monitored on one screen.
  • the monitoring terminal device 10 of the present embodiment generates a monitoring image K by compressing the data amount of the image so that the number of pixels of the monitoring image K becomes substantially the same as the number of pixels of the images of the onboard cameras 11a to 11d.
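The data-amount compression described above can be sketched as a simple decimation: since four images of equal size are joined, halving both dimensions returns the pixel count of the monitoring image K to that of a single camera image. The nearest-neighbour approach and the function name are illustrative assumptions; a practical system would low-pass filter before subsampling.

```python
import numpy as np

def compress_monitoring_image(k):
    """Subsample the monitoring image K so its pixel count is roughly
    that of a single camera image: four images are concatenated, so
    halving both dimensions reduces 4 * W * H pixels to W * H pixels.
    Nearest-neighbour decimation via array slicing."""
    return k[::2, ::2]
```

For a 480 x 2560 monitoring image, the result is 240 x 1280, the same pixel count as one 480 x 640 camera image.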
  • the in-vehicle control device 14 can also add a line figure indicating the boundaries of the arranged images to the monitoring image K.
  • The on-vehicle control device 14 places rectangular partition images Bb, Bc, Bd, Ba, and Ba' between the respective images as line figures indicating the boundaries of the arranged images.
  • the partition image functions as a frame of each captured image.
  • Since distortion tends to be large near the boundaries of the captured images, arranging the partition images at these boundaries can hide the regions with large distortion or suggest to the viewer that the distortion there is large.
  • The on-vehicle control device 14 can also generate the monitoring image K after correcting the distortion that occurs when the four images are projected onto the projection plane set on the side surface of the projection model described later.
  • The peripheral region of a captured image is prone to distortion, and particularly with an on-vehicle camera 11 using a wide-angle lens, the distortion of the captured image tends to be large. It is therefore desirable to correct the distortion of the captured image using a predefined image conversion algorithm and correction amount.
  • The on-vehicle control device 14 can also read from the ROM information on the same projection model as the projection model onto which the monitoring image K is projected in the central monitoring device 20, project the captured images onto the projection plane of this projection model, and correct in advance the distortion that occurs on the projection plane.
  • The image conversion algorithm and the correction amount can be appropriately defined according to the characteristics of the on-vehicle cameras 11 and the shape of the projection model. By correcting in advance the distortion that occurs when the monitoring image K is projected onto the projection plane of the projection model in this way, a highly visible monitoring image K with little distortion can be provided. In addition, correcting the distortion in advance reduces the positional deviation between the images arranged side by side.
  • The on-vehicle control device 14 performs a process of associating with the monitoring image K the mapping information for projecting the generated monitoring image K onto the projection plane set on the side surface of a columnar projection model M whose bottom surface is the ground contact surface of the passenger car V.
  • the mapping information is information for making the central monitoring device 20 that has received the monitoring image K easily recognize the projection reference position.
  • FIG. 8 is a view showing an example of a projection model M according to this embodiment
  • FIG. 9 is a schematic cross-sectional view along the xy plane of the projection model M shown in FIG.
  • the projection model M of the present embodiment is a regular octagonal prism whose base is a regular octagon and has a height along the vertical direction (z-axis direction in the figures).
  • The shape of the projection model M is not particularly limited as long as it is a columnar body having side surfaces adjacent to one another along the boundary of the bottom surface; it may be a cylinder, a prismatic body such as a triangular prism, a quadrangular prism, or a hexagonal prism, or an antiprism whose bottom is a polygon and whose sides are triangles.
  • the bottom face of the projection model M of this embodiment is parallel to the ground plane of the passenger car V.
  • Projection surfaces Sa, Sb, Sc, and Sd (hereinafter collectively referred to as projection surface S), onto which an image of the surroundings of the passenger car V grounded on the bottom surface of the projection model M is projected, are set on the side surface of the projection model M.
  • The projection surface S can also be constituted by a portion of the projection surface Sa and a portion of the projection surface Sb, a portion of the projection surface Sb and a portion of the projection surface Sc, a portion of the projection surface Sc and a portion of the projection surface Sd, or a portion of the projection surface Sd and a portion of the projection surface Sa.
  • The monitoring image K is projected onto the projection surface S as a video of the surroundings of the passenger car V viewed from viewpoints R (R1 to R8, hereinafter collectively referred to as viewpoint R) above the projection model M and surrounding the passenger car V.
  • the in-vehicle control device 14 associates the reference coordinates of the captured image disposed at the right end or the left end with the monitoring image K as mapping information.
  • Specifically, the on-vehicle control device 14 attaches to the monitoring image K, as mapping information (reference coordinates) indicating the start position or the end position of the monitoring image K when projected onto the projection model M, the coordinates A (x, y) of the upper-left vertex of the captured image GSP1 arranged at the right end and the coordinates B (x, y) of the upper-right vertex of the captured image GSP2 arranged at the left end.
  • The reference coordinates of the captured image indicating the start position or the end position are not particularly limited, and may be, for example, the lower-left vertex of the captured image arranged at the left end or the lower-right vertex of the captured image arranged at the right end.
  • the mapping information may be attached to each pixel of the image data of the monitoring image K, or may be managed as a separate file from the monitoring image K.
  • By associating with the monitoring image K, as mapping information, the information indicating the start position or the end position of the monitoring image K, that is, the reference coordinates used as a reference in the projection processing, the central monitoring device 20 that has received the monitoring image K can easily recognize the reference position at the time of projection processing, so the monitoring image K, in which the images are arranged in the arrangement order of the on-vehicle cameras 11a to 11d, can be projected sequentially and easily onto the projection plane S on the side surface of the projection model M.
  • That is, as shown in the figure, the captured image GSP1 of the area ahead of the vehicle is projected onto the projection surface Sa located in the imaging direction of the on-vehicle camera 11a, the captured image GSP2 of the left side of the vehicle is projected onto the projection surface Sb located in the imaging direction of the on-vehicle camera 11b, the captured image GSP3 of the area behind the vehicle is projected onto the projection surface Sc located in the imaging direction of the on-vehicle camera 11c, and the captured image GSP4 of the right side of the vehicle is projected onto the projection surface Sd located in the imaging direction of the on-vehicle camera 11d.
  • Thus, the monitoring image K projected onto the projection model M shows an image as if looking around the surroundings of the passenger car V. That is, since the monitoring image K, which contains the four images arranged in a row in the horizontal direction according to the installation order of the on-vehicle cameras 11a to 11d, is projected onto the side surfaces of the columnar projection model M, which are likewise arranged in the horizontal direction, the image of the surroundings of the passenger car V is reproduced on the projection plane S of the projection model M while maintaining its positional relationship.
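Under the assumption that the four captured images have equal width and are arranged in camera-installation order, the correspondence between a pixel column of the monitoring image K and the projection surface Sa to Sd that receives it can be sketched as follows; the helper name is hypothetical.

```python
def face_for_column(x, image_width, faces=("Sa", "Sb", "Sc", "Sd")):
    """Map a pixel column x of the monitoring image K to the projection
    surface that receives it, assuming the four captured images have
    equal width and follow the installation order 11a -> 11b -> 11c -> 11d."""
    return faces[(x // image_width) % len(faces)]
```

With 640-pixel-wide images, columns 0 to 639 fall on Sa, columns 640 to 1279 on Sb, and so on, which is why the horizontal arrangement of the monitoring image K is reproduced around the column.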
  • The on-vehicle control device 14 can store the correspondence between each coordinate value of the monitoring image K and the coordinate values of each projection surface S of the projection model M as mapping information and attach it to the monitoring image K; alternatively, this correspondence may be stored in advance in the central monitoring device 20.
  • the positions of the viewpoint R and the projection plane S shown in FIGS. 8 and 9 are merely examples, and can be set arbitrarily.
  • the viewpoint R can be changed by the operation of the operator.
  • The relationship between the viewpoint R and the projection position of the monitoring image K is defined in advance, and when the position of the viewpoint R is changed, the monitoring image K as viewed from the newly set viewpoint R can be projected onto the projection plane S (Sa to Sd) by executing a predetermined coordinate conversion. A known method can be used for this viewpoint conversion processing.
  • The on-vehicle control device 14 of the present embodiment generates the monitoring image K based on the image information captured at a predetermined timing, associates the mapping information, the reference coordinates, and the information on the line figures (partition images) with the monitoring image K, and stores them temporally in accordance with the imaging timing.
  • The on-vehicle control device 14 may store the monitoring image K as one moving-image file containing a plurality of monitoring images K per predetermined unit time, or may store the monitoring image K in a form that can be transferred and reproduced by a streaming method.
  • the communication device 23 of the central monitoring device 20 receives the monitoring image K transmitted from the monitoring terminal device 10 and the mapping information associated with the monitoring image K. Further, the image information taken by the in-vehicle camera 11e in the room is separately received.
  • As described above, in this monitoring image K, the images of the four on-vehicle cameras 11a to 11d installed at different positions on the body of the passenger car V are arranged according to the installation order of the cameras (the clockwise or counterclockwise order along the outer periphery of the body of the passenger car V), and mapping information for projecting the monitoring image K onto the projection surface S of the octagonal prism projection model M is associated with the monitoring image K.
  • the communication device 23 transmits the acquired monitoring image K and the mapping information to the image processing device 22.
  • The image processing device 22 reads out the projection model M stored in advance and, based on the mapping information, generates a display image by projecting the monitoring image K onto the projection surfaces Sa to Sd set on the side surface of the octagonal prism projection model M whose bottom surface is the ground contact surface of the passenger car V shown in FIGS. 8 and 9. Specifically, each pixel of the received monitoring image K is projected onto each pixel of the projection surfaces Sa to Sd according to the mapping information. Furthermore, when projecting the monitoring image K onto the projection model M, the image processing device 22 recognizes the start point of the monitoring image K (the right end or the left end of the monitoring image K) based on the reference coordinates received together with the monitoring image K.
  • The projection processing is performed so that this start point coincides with the start point defined in advance on the projection model M (the right end or the left end of the projection surface S). Furthermore, when projecting the monitoring image K onto the projection model M, the image processing device 22 arranges line figures (partition images) indicating the boundaries of the images on the projection model M.
  • the partition image may be attached to the projection model M in advance, or may be attached to the monitoring image K after the projection processing.
  • the display 24 displays the monitoring image K projected on the projection plane S of the projection model M.
  • FIG. 10 shows an example of a display image of the monitoring image K.
  • Using the input device 25, such as a mouse or a keyboard, or the display 24 serving as a touch-panel input device 25, the supervisor can freely set and change the viewpoint. Since the correspondence between the viewpoint position and the projection surface S is defined in advance in the image processing device 22 or the display 24 described above, the monitoring image K corresponding to the changed viewpoint can be displayed on the display 24 based on this correspondence.
  • FIG. 11 is a flowchart showing the operation of the monitoring terminal device 10
  • FIGS. 12A and 12B are flowcharts showing the operation of the central monitoring device 20
  • FIG. 13 is a diagram showing an example of database information.
  • In the monitoring terminal device 10, surrounding images and an indoor image are acquired from the on-vehicle cameras 11 at predetermined time intervals (one routine shown in the same drawing), and the image processing device 12 converts them into image information (step ST1). Furthermore, the current position information of the passenger car V on which the monitoring terminal device 10 is mounted is detected by the position detection device 15 provided with a GPS (step ST2). The position detection device 15 can also be configured as part of the navigation device 19.
  • In step ST3, it is determined whether the report button 16 for reporting an abnormality has been pressed. If the report button 16 has been pressed, the process proceeds to step ST4, where the image information obtained in step ST1, the position information obtained in step ST2, and the current time information are associated with one another, and these are transmitted as monitoring information to the central monitoring device 20 via the communication device 13 and the telecommunication network 30, together with abnormality information indicating that an abnormality has occurred.
  • the image information and the position information are acquired in the first steps ST1 and ST2, but the image information and the position information may be acquired at the timing between the steps ST3 and ST4.
  • If the report button 16 has not been pressed in step ST3, the process proceeds to step ST5 and communicates with the central monitoring device 20 to acquire a control command.
  • In step ST6, the monitoring terminal device 10 determines whether an image transmission command has been acquired from the central monitoring device 20. If an image transmission command has been acquired, the process proceeds to step ST7, and monitoring information including the image information, the position information, and the time information is transmitted to the central monitoring device 20.
  • When the image transmission command includes features of the tracking target and image information of a subject having those features is found, the monitoring terminal device 10 can transmit monitoring information including notification information reporting that the tracking target exists. Similarly, when the image transmission command includes image information of the face to be tracked and image information of a subject whose face is similar to this face (the similarity is equal to or greater than a predetermined value) is found, the monitoring terminal device 10 may transmit monitoring information including notification information reporting that the tracking target exists. In addition, when a storage command is included in the image transmission command, the image information, the position information, and the time information are stored.
  • Even if no image transmission command has been acquired from the central monitoring device 20 in step ST6, if the passenger car V exists in a key monitoring area defined in advance (step ST8), the process proceeds to step ST10 and monitoring information including the image information is transmitted.
  • Otherwise, in step ST9, monitoring information not including the image information, that is, the time information and the position information, is transmitted to the central monitoring device 20.
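The decision flow of steps ST3 to ST10 above can be sketched as follows; the return strings merely label which items the transmitted monitoring information would contain in each branch, and the function name and signature are illustrative assumptions.

```python
def decide_transmission(report_pressed, image_command_received, in_key_area):
    """Sketch of one routine of the terminal flow in FIG. 11:
    decide what monitoring information to transmit this cycle."""
    if report_pressed:                  # ST3 -> ST4: abnormality reported
        return "image + position + time + abnormality"
    if image_command_received:          # ST6 -> ST7: command from center
        return "image + position + time"
    if in_key_area:                     # ST8 -> ST10: key monitoring area
        return "image + position + time"
    return "position + time"            # ST9: no image this cycle
```

Image information is thus transmitted only on a report, a command, or presence in a key monitoring area, which keeps the routine's transmission volume low in the ordinary case.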
  • FIG. 13 is a diagram showing an example of information stored in the database 26.
  • In the database 26, the monitoring information acquired from the passenger cars V (monitoring terminal devices 10), including the image information, the position information, the time information, the features of the tracking target, and the features of the tracking target's face, is stored in association with the position information.
  • this monitoring information can include a mobile unit ID (monitoring terminal device ID) for identifying the monitoring terminal device 10.
  • the mobile unit ID may be the address of the communication device 13 of the monitoring terminal device 10.
  • step ST12 the passenger car V is displayed on the map information of the map database displayed on the display 24 as shown in the upper left of FIG. 1 based on the position information acquired in step ST11.
  • the position information of the passenger car V is acquired and transmitted at a predetermined timing for each routine in FIG. 11, so that the supervisor can grasp the current position of the passenger car V in a timely manner.
  • In step ST13, it is determined whether abnormality information reported from the monitoring terminal device 10 of a passenger car V, that is, a report that an abnormality related to security such as an accident or a crime has occurred, has been received, or whether sighting information reported from the monitoring terminal device 10 of a passenger car V, that is, a report that the tracking target has been sighted, has been received.
  • the abnormality information or sighting information is output when the passenger of the passenger car V presses the notification button 16 of the monitoring terminal device 10.
  • If there is abnormality information or sighting information, the passenger car V that output the abnormality information is identified in step ST14, image information and time information are received from the monitoring terminal device 10 of that passenger car, and the image information is displayed on the display 24. Furthermore, as shown in the upper left of FIG. 1, highlighting such as changing the color is performed so that this passenger car, displayed on the map information, can be distinguished from the other passenger cars. Thereby, the position where the abnormality occurred can be visually recognized on the map information.
  • Steps ST13 to ST20 are an example of the processing in the case where abnormality information or sighting information is reported, in which the position of the passenger car V that reported the abnormality information or sighting information is selected as the monitoring point. However, even when no abnormality information or sighting information is reported, if the observer arbitrarily designates a place to be monitored (monitoring point), the processing from step ST13 to step ST20 can be executed in the same way. In this case, the place designated by the supervisor becomes the monitoring point.
  • In step ST15, the central monitoring device 20 sets a monitoring point where the tracking target to be monitored is predicted to exist, based on the position of the passenger car V that output the abnormality information or on the point where the sighting information was reported.
  • The supervisor can also set the monitoring point arbitrarily.
  • the central monitoring device 20 selects, based on the monitoring point, another vehicle capable of imaging the monitoring area set on the basis of the monitoring point, that is, the monitoring terminal device 10.
  • For example, the central monitoring device 20 selects other vehicles, that is, monitoring terminal devices 10, that are present in the monitoring area within a predetermined distance from the monitoring point.
  • the central monitoring device 20 selects another vehicle capable of imaging the monitoring point, that is, the monitoring terminal device 10.
  • Alternatively, the central monitoring device 20 selects other vehicles existing within a predetermined distance from the monitoring point and in a predetermined direction with respect to the monitoring point (other vehicles whose imaging direction is directed toward the monitoring point).
  • The monitoring area may be a circular area at an equal distance from the monitoring point, a band-shaped area of a predetermined distance along the road including the monitoring point, or, when a right or left turn at an intersection is considered, a fan-shaped area of a predetermined distance and a predetermined central angle.
  • Likewise, the monitoring area may be a circular area at an equal distance from the position indicated by the position information included in the monitoring information, a band-shaped area of a predetermined distance along the road including the monitoring point, or, when a right or left turn at an intersection is considered, a fan-shaped area of a predetermined distance and a predetermined central angle.
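Membership tests for the circular and fan-shaped monitoring areas described above can be sketched as follows, assuming planar coordinates; the function names, the heading convention, and the half-angle parameter are illustrative assumptions.

```python
import math

def in_circular_area(p, center, radius):
    """Circular monitoring area: within `radius` of the monitoring point."""
    return math.dist(p, center) <= radius

def in_fan_area(p, center, radius, heading, half_angle):
    """Fan-shaped monitoring area: within `radius` of the monitoring point
    and within `half_angle` radians of the heading direction (e.g. the
    road taken after a right or left turn at an intersection)."""
    if math.dist(p, center) > radius:
        return False
    angle = math.atan2(p[1] - center[1], p[0] - center[0])
    # Wrap the angular difference into (-pi, pi] before comparing.
    diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle
```

A band-shaped area along a road would additionally require road geometry and is omitted here.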
  • the central monitoring device 20 can also select the monitoring terminal device 10 of another vehicle approaching the monitoring area that can enter the monitoring area within a predetermined time. This is because other vehicles approaching the monitored area can image the monitored area after a predetermined time has elapsed, even if not currently present in the monitored area.
  • In consideration of the possibility that the tracking target may be lost and that the selection of the monitoring point may lag behind the movement of the tracking target, not only passenger cars V approaching the monitoring area but also passenger cars V moving away from the monitoring area are targeted for selection.
  • A passenger car V moving away from the monitoring area may have imaged the monitoring area in the past.
  • In this case, the central monitoring device 20 preferably transmits to the passenger cars V moving away from the monitoring area an image transmission command specifying the imaging time of the image information.
  • When the monitoring area is set some time after the sighting information was reported or after the incident occurred, transmitting an image transmission command specifying imaging times around the report of the sighting information or the occurrence of the incident makes it possible to retroactively collect image information from before and after the report or the incident.
The method of selecting a passenger car V capable of imaging the surveillance area is not particularly limited. For example, first, the road on which the current position (Y) of the passenger car that reported the abnormality, the passenger car that sent the sighting information, or a passenger car arbitrarily selected by the supervisor exists is identified, and the passenger cars V traveling on that road are extracted with reference to the database 26. Then the position (X), moving speed (V), and traveling direction of each extracted passenger car V are specified. The moving speed and traveling direction of a passenger car V may be obtained from the temporal change of its position information, or from the moving speed included in the monitoring information.
The central monitoring device 20 selects a passenger car V whose traveling direction approaches the monitoring point and whose (Y − X) / V is less than a predetermined value. If (Y − X) / V is too small, the passenger car immediately passes the monitoring point, so a lower limit may also be set. In the same step, the central monitoring device 20 transmits an image transmission command to the selected monitoring terminal device 10. Since the monitoring point, and the monitoring area set based on it, are updated at a predetermined cycle and change from moment to moment as the tracking target moves, a passenger car capable of imaging the monitoring area can be selected even if the tracking target moves randomly.
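The (Y − X) / V selection described above can be sketched as follows. The data layout and names are illustrative assumptions, with Y and X taken as distances along the identified road and `approaching` flagging a traveling direction toward the monitoring point:

```python
def select_vehicles(monitoring_point_y, vehicles, max_eta, min_eta=0.0):
    """Select passenger cars whose estimated time to reach the monitoring
    point, (Y - X) / V, lies between the lower and upper limits.

    `vehicles` is a list of (vehicle_id, position_x, speed_v, approaching)
    tuples.
    """
    selected = []
    for vid, x, v, approaching in vehicles:
        if not approaching or v <= 0:
            continue  # moving away or stopped: cannot reach the point in time
        eta = (monitoring_point_y - x) / v
        if min_eta <= eta < max_eta:
            selected.append(vid)
    return selected
```

Re-running this selection at each update cycle, as the text describes, keeps the selected set consistent with the moving monitoring point.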
The image transmission command can include information specifying the imaging direction. The central monitoring device 20 calculates the imaging direction based on the positional relationship between the monitoring point and the monitoring area. The imaging direction may be expressed as an azimuth or, if the mounting positions of the on-vehicle cameras 11 are known, by the identification information of the appropriate on-vehicle camera 11. In this way, video of the surveillance point can be reliably acquired by the cameras 11 of the passenger cars V in the surveillance area, and since only the necessary image information is transmitted, the amount of transmitted data can be reduced.
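A minimal sketch of how an imaging direction might be expressed, either as an azimuth or as an on-vehicle camera identifier. The north-referenced azimuth convention, the four 90° camera sectors, and all names here are assumptions for illustration, not the patent's method:

```python
import math

def imaging_azimuth(vehicle_pos, monitoring_point):
    """Azimuth from the vehicle to the monitoring point, in degrees
    clockwise from north (0 = north, 90 = east)."""
    dx = monitoring_point[0] - vehicle_pos[0]  # east offset
    dy = monitoring_point[1] - vehicle_pos[1]  # north offset
    return math.degrees(math.atan2(dx, dy)) % 360

def camera_for_azimuth(vehicle_heading_deg, azimuth_deg):
    """Map a target azimuth to a camera identifier, assuming four cameras
    covering 90-degree sectors around a known vehicle heading."""
    relative = (azimuth_deg - vehicle_heading_deg) % 360
    sectors = ["front", "right", "rear", "left"]
    return sectors[int((relative + 45) % 360 // 90)]
```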
When the monitoring terminal device 10 is provided with the navigation device, monitoring information including image information can also be transmitted automatically to the central monitoring device 20 at the timing when the own vehicle enters the monitoring area, determined from the monitoring point transmitted by the central monitoring device 20 and the current position.
The position information of the passenger car V that output the abnormality information is transmitted to an emergency vehicle such as a police car, an ambulance, or a fire engine. Image information may be attached and transmitted in order to convey the content of the abnormality.
In step ST18, all position information, image information, and time information received from the monitoring terminal devices 10 are recorded on a recording medium. This record is used to resolve an accident or crime after the fact. If there is no abnormality information in step ST13, the process proceeds to step ST21 without performing steps ST14 to ST18.
In step ST19, it is determined whether the tracking monitoring state of the tracking target has been released; if it has, the processing from step ST21 is performed. If it has not been released, then, in order to continue monitoring the monitoring point, when the passenger car V selected in the previous step ST16 has passed the monitoring area or can no longer image it, the process returns to step ST16 and a passenger car V newly capable of imaging the monitoring area is selected. The method of selecting this passenger car V is not particularly limited; the same method as in step ST16 can be used.
Since passenger cars V capable of imaging the monitoring area are selected one after another, even if the tracking target moves randomly, and even if the camera-equipped passenger cars V themselves move randomly, it is possible to continuously track and image a specific tracking target and obtain monitoring information on it.
In step ST21, it is determined whether there is an instruction to transmit image information from an emergency vehicle such as a police car, an ambulance, or a fire engine. If an image transmission instruction has been input, the process proceeds to step ST22, where it is determined whether a passenger car V exists in the area specified by the image transmission instruction. If one exists, the process proceeds to step ST23, where a transmission instruction for image information is output to the passenger cars V present in the specified area. The image information from those passenger cars V can thereby be acquired in step ST11 of FIG. 12A. If no image transmission instruction has been input, the process proceeds to step ST24 without performing the processes of steps ST22 and ST23.
In step ST24, it is determined whether a passenger car V exists in the vicinity of a preset suspicious area such as a crime-prone spot; if one exists, the process proceeds to step ST25 and an image transmission command is output to that passenger car V. Suspicious areas are streets and neighborhoods where public safety is poor; surveillance of such places can thus be strengthened, which can be expected to prevent crime in advance. If no such passenger car V exists, the process proceeds to step ST26 without performing the process of step ST25.
In step ST26, it is determined whether a passenger car V exists in the vicinity of a priority monitoring position from which a priority monitoring target, whose details should be monitored, can be imaged. If such a passenger car V exists, a priority monitoring command is output in step ST27 requesting that passenger car V to transmit image information in which the priority monitoring target is enlarged. This makes it possible to monitor the priority monitoring target in detail and to effectively detect a suspicious object that could cause an incident or accident at the specified target, so crime prevention can be expected. If no passenger car V exists in the vicinity of the priority monitoring position, the process proceeds to step ST28 without performing the process of step ST27.
In step ST28, based on the position information received from each passenger car V, it is determined whether there is a route, within a predetermined area requiring monitoring (not limited to the suspicious areas and priority monitoring areas), that no passenger car V has traveled within a predetermined time; if there is such a route, it is monitored whether a passenger car V is traveling on it. If a passenger car V has most recently traveled the route, the process proceeds to step ST29 and an image transmission instruction is output to that passenger car V. As a result, image information can be automatically acquired even for routes, other than suspicious areas and priority monitoring areas, where passenger-car traffic is light. If there is no route satisfying the condition of step ST28, the process returns to step ST11 of FIG. 12A without performing the process of step ST29.
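The route-coverage check of step ST28 could be sketched as follows, assuming the central device maintains a map from route IDs to the time each route was last traversed, updated from the received position reports (an assumed data structure, not from the specification):

```python
import time

def stale_routes(last_traversal, max_age_s, now=None):
    """Return routes in the monitored area that no passenger car has
    traveled within `max_age_s` seconds, given a {route_id: last_seen_ts}
    map maintained from the position reports."""
    now = time.time() if now is None else now
    return sorted(r for r, ts in last_traversal.items() if now - ts > max_age_s)
```

A vehicle currently traveling one of the returned routes would then be sent the image transmission instruction of step ST29.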
As described above, the monitoring system 1 of this example selects a monitoring terminal device 10 of a mobile body capable of imaging the monitoring area referenced to the monitoring point where the tracking target is predicted to exist, and outputs to the selected monitoring terminal device 10 an image transmission command for transmitting monitoring information including image information. Since an image of the tracking target can thereby be acquired from a monitoring terminal device 10 mounted on a mobile body capable of imaging that monitoring area, the moving tracking target can be monitored continuously. By selecting the monitoring terminal devices 10 present in the monitoring area referenced to the monitoring point as the "monitoring terminal devices 10 capable of capturing an image of the monitoring area", captured images of the tracking target can be collected efficiently.
Further, since the monitoring system 1 uses monitoring terminal devices 10 mounted on mobile bodies V that move randomly, together with the cameras 11 mounted on those mobile bodies V, it can continuously monitor a tracking target that itself moves randomly.
Since the passenger car V whose occupant witnessed the tracking target is located in the vicinity of the monitoring point, there is a high possibility that its monitoring terminal device 10 can image the monitoring area. Therefore, when monitoring information including sighting information reporting that the tracking target has been witnessed is acquired from a monitoring terminal device 10, the monitoring point of the tracking target is set based on the position information of the monitoring terminal device 10 that output the sighting information.
The movement direction of the tracking target is calculated based on image information acquired over time, and the monitoring point is set on the calculated movement-direction side. A monitoring point where the tracking target is predicted to exist, and a monitoring area based on it, can thereby be set appropriately.
The moving speed of the tracking target is calculated based on image information acquired over time, and a monitoring point according to the calculated moving speed is set. Even when the tracking target escapes on foot or escapes by vehicle, the monitoring point and the monitoring area can be set appropriately.
When the tracking target is detected, notification information reporting its existence is included in the monitoring information and output, so the position of the randomly moving tracking target can be identified quickly and image information can be collected effectively. As a result, whether or not the tracking target has been lost from view, its movements can be traced accurately by searching for subjects whose features match those of the tracking target.
When the tracking target is a human, an image transmission command conveying the features of the face is sent out, so the occupant of the mobile body V and the monitoring terminal device 10 can recognize the face of the specific tracking target, and the monitoring accuracy can be improved.
In the monitoring system 1 of this example, when image information of a subject whose face is similar to the face of the tracking target (with a similarity equal to or greater than a predetermined value) is retrieved from image information captured in the past, notification information reporting the existence of the tracking target is included in the monitoring information and output. The position of the randomly moving tracking target can therefore be identified quickly, and image information can be collected effectively. As a result, whether or not the tracking target has been lost from view, its movements can be traced accurately by searching for subjects whose features match those of the tracking target.
Since the image information of the face of the tracking target is output to the monitoring terminal devices 10 mounted on other mobile bodies V via wireless communication (inter-vehicle communication), information on the person to be tracked can be shared between a monitoring terminal device 10 capable of imaging the monitoring area and the monitoring terminal devices 10 located in its vicinity, so the search for the tracking target can be strengthened.
In addition to the monitoring terminal devices 10 mounted on the mobile bodies V, fixed monitoring terminal devices 10 that are attached at predetermined positions and have an image generation function for imaging their surroundings may also be included, so existing fixed cameras can be used effectively.
The monitoring system 1 of this example selects a monitoring terminal device 10 of a mobile body capable of imaging the monitoring point where the tracking target is predicted to be present, and outputs to the selected monitoring terminal device 10 an image transmission command for transmitting monitoring information including image information; an image of the tracking target can thus be acquired from a monitoring terminal device 10 mounted on a mobile body capable of imaging that monitoring point, and the tracking target can be monitored continuously. Likewise, the monitoring system 1 of this example selects a monitoring terminal device 10 of a mobile body capable of imaging the monitoring area where the tracking target is predicted to be present, and outputs to the selected monitoring terminal device 10 an image transmission command for transmitting monitoring information including image information; an image of the tracking target can thus be acquired from a monitoring terminal device 10 mounted on a mobile body capable of imaging that monitoring area, and the tracking target can be monitored continuously.
The monitoring method of this example provides the same operations and effects as the monitoring system comprising the monitoring terminal devices 10 and the central monitoring device 20.
In the embodiment described above, the position information of the passenger cars V and the image information from the on-vehicle cameras 11a to 11e are acquired, but image information from the fixed cameras 11f installed in the town shown in FIG. 1 may also be acquired in combination.
As the passenger cars V that acquire position information and image information, it is desirable to use, as shown in FIG. 1, taxis V1 and route buses that travel within a predetermined territory, but private passenger cars V2 and emergency vehicles V3 may also be used.
In the embodiment described above, five on-vehicle cameras are mounted on the passenger car V, and a 360° image of the surroundings is acquired as image information using the four on-vehicle cameras 11a to 11d; the indoor camera 11e may be omitted. Further, particularly in an environment where image information can be acquired from many passenger cars V, such as an area with heavy traffic, the four on-vehicle cameras 11a to 11d may be reduced to three or fewer.
In the embodiment described above, the passenger car V corresponds to the moving body according to the present invention; the position detection device 15 corresponds to the position detection means according to the present invention; the on-vehicle camera 11 and the image processing device 12 correspond to the image generation means according to the present invention; the in-vehicle control device 14 corresponds to the image search means, the storage means, and the output means according to the present invention; the CPU of the in-vehicle control device 14 corresponds to the time detection means according to the present invention; the notification button 16 corresponds to the notification means according to the present invention; and the communication device 13 corresponds to the command receiving means and the information output means according to the present invention. Further, the central control unit 21 corresponds to the monitoring point setting means, the selection means, and the feature extraction means according to the present invention; the database 26 corresponds to the database according to the present invention; the communication device 23 and the input device 25 correspond to the information acquisition means and the abnormality information reception means according to the present invention; and the display 24 corresponds to the display means according to the present invention.

Abstract

A surveillance system (1) comprises surveillance terminal devices (10) and a central surveillance device (20) capable of communicating via a telecommunication network (30). The central surveillance device (20) acquires surveillance information, including at least position information, output from the surveillance terminal devices (10); sets a surveillance point where the tracking target being tracked by a surveillant is predicted to be; selects the surveillance terminal device (10) of a mobile body approaching and/or moving away from a prescribed surveillance area referenced to the set surveillance point; and outputs an image transmission command for transmitting surveillance information, including image information, to the selected surveillance terminal device (10).

Description

Monitoring system
The present invention relates to a monitoring system.

This application claims priority based on Japanese Patent Application No. 2012-39328 filed on February 24, 2012. For the designated states where incorporation by reference of documents is permitted, the contents described in that application are incorporated into the present application by reference and form part of the description of the present application.
A security device is known that detects the occurrence of an abnormality by installing a plurality of security camera devices in shopping streets, at store entrances, at home front doors, and elsewhere in town, and monitoring the surrounding images captured by those security camera devices (Patent Document 1).
JP 2011-215767 A
However, since the imaging area of a fixed security camera device is fixed, there is a problem that when a movable tracking target, such as a person or a vehicle, moves out of the imaging area of the fixed security camera device, the tracking target can no longer be monitored continuously.
An object of the present invention is to provide a monitoring system that can continuously monitor a randomly moving tracking target by using cameras mounted on mobile bodies.
The present invention achieves the above object by selecting a monitoring terminal device of a mobile body capable of imaging a monitoring area referenced to a monitoring point where the tracking target is predicted to exist, and outputting to the selected monitoring terminal device an image transmission command for transmitting monitoring information including image information.
According to the present invention, an image of the tracking target can be acquired from a monitoring terminal device mounted on a mobile body capable of imaging a monitoring area referenced to a monitoring point where the tracking target is predicted to exist, so the moving tracking target can be monitored continuously. As a result, the central monitoring device can continuously monitor a randomly moving tracking target by using monitoring terminal devices mounted on randomly moving mobile bodies.
A schematic diagram showing a monitoring system according to one embodiment of the present invention.
A block diagram showing the monitoring system of FIG. 1.
A perspective view showing the arrangement of the on-vehicle cameras in the monitoring system of FIG. 1 and their imaging ranges.
A plan view showing the arrangement of the on-vehicle cameras in the monitoring system of FIG. 1 and their imaging ranges.
A diagram showing an example of an image captured by the front on-vehicle camera.
A diagram showing an example of an image captured by the right-side on-vehicle camera.
A diagram showing an example of an image captured by the rear on-vehicle camera.
A diagram showing an example of an image captured by the left-side on-vehicle camera.
A diagram showing an example of an image captured by the indoor on-vehicle camera.
A diagram showing an example of a monitoring image generated based on a plurality of images.
A diagram for explaining the distortion correction processing of a monitoring image.
A schematic diagram showing an example of a projection model.
A schematic cross-sectional view along the xy plane of the projection model shown in FIG. 8.
A diagram showing an example of an image displayed on the display of the central monitoring device.
A flowchart showing the main control contents on the monitoring terminal device side of the monitoring system of FIG. 1.
A flowchart (part 1) showing the main control contents on the central monitoring device side of the monitoring system of FIG. 1.
A flowchart (part 2) showing the main control contents on the central monitoring device side of the monitoring system of FIG. 1.
A diagram showing an example of information in the database.
In the embodiment described below, the surveillance system according to the present invention is embodied as a surveillance system 1 in which authorities such as a police station or a fire department centrally monitor security in a town. That is, position information of each of a plurality of mobile bodies, image information of the surroundings of each mobile body, and time information are acquired at predetermined timings; the position information, image information, and time information are transmitted via wireless communication to a central monitoring device installed at the authority; the position information is displayed on map information; and the image information and time information are displayed on a display as needed. As shown in FIG. 1, the monitoring system 1 of this example therefore comprises monitoring terminal devices 10 that acquire monitoring information such as position information and image information, and a central monitoring device 20 that acquires and processes the monitoring information via a telecommunication network 30.
FIG. 2 is a block diagram showing specific configurations of the monitoring terminal device 10 and the central monitoring device 20. The monitoring system of the present embodiment continuously acquires monitoring information on a specific, movable tracking target.
The monitoring terminal device 10 is a terminal device mounted on each of a plurality of mobile bodies V, and has: a position detection function for detecting the position information of each of the plurality of mobile bodies V; an image generation function for imaging the surroundings of the mobile body with cameras mounted on each of the plurality of mobile bodies to generate image information; a time detection function; an information acquisition control function for acquiring position information, image information, and time information at predetermined timings; a monitoring information generation function for generating monitoring information including the acquired position information and/or image information; a communication function for outputting the position information, image information, and time information to the central monitoring device 20 and acquiring commands from the central monitoring device 20; and a function for reporting the occurrence of an abnormality. The monitoring information generation function can generate monitoring information including sighting information, such as a report of having witnessed a person fleeing from the site of an abnormality or of having witnessed a tracking target indicated by the central monitoring device 20. This sighting information is used when setting the monitoring area.
For this purpose, the monitoring terminal device 10 comprises a plurality of on-vehicle cameras 11a to 11e, an image processing device 12, a communication device 13, an on-vehicle control device 14, a position detection device 15, and a notification button 16. The monitoring terminal device 10 of the present embodiment can also exchange information with a vehicle controller 17 that centrally controls a vehicle speed sensor 18, a navigation device 19, and other in-vehicle electronic devices. The monitoring terminal device 10 can transmit vehicle speed information, acquired from the vehicle speed sensor 18 via the vehicle controller 17, to the central monitoring device 20 as part of the monitoring information. The time information, which mainly serves the post hoc analysis of events, may be omitted.
The mobile body V on which the monitoring terminal device 10 is mounted is not particularly limited as long as it travels in the target monitoring area, and includes mobile bodies such as passenger cars, two-wheeled vehicles, industrial vehicles, and trams. Passenger cars include commercial vehicles V1, private passenger cars V2, and emergency vehicles V3; among them, taxis and route buses V1, which travel randomly and constantly within a predetermined area, are particularly suitable. FIG. 1 illustrates a taxi V1, a private passenger car V2, and an emergency vehicle V3 such as a police car, a fire engine, or an ambulance; these are collectively referred to as the mobile body V or the passenger car V.
Each mobile body V is equipped with a plurality of on-vehicle cameras 11a to 11e (hereinafter collectively referred to as the cameras 11), an image processing device 12, a communication device 13, an on-vehicle control device 14, a position detection device 15, and a notification button 16. The camera 11 is configured with a CCD camera or the like, images the surroundings of the mobile body V, and outputs the imaging signal to the image processing device 12. The image processing device 12 reads the imaging signal from the camera 11 and processes it into image information. Details of this image processing will be described later.
The position detection device 15 is configured with a GPS device and its correction device or the like, detects the current position of the mobile body V, and outputs it to the on-vehicle control device 14. The notification button 16 is an input button installed in the vehicle compartment; it is a manual button through which the driver or a passenger inputs information (abnormality information) to report an abnormality upon discovering an incident (a security-related event such as an accident, a fire, or a crime). This information can include the position information of the mobile body V reporting the abnormality.
The on-vehicle control device 14 is configured with a CPU, a ROM, and a RAM. When the notification button 16 is pressed, it controls the image processing device 12, the communication device 13, and the position detection device 15, and outputs the image information generated by the image processing device 12, the position information of the mobile body V detected by the position detection device 15, and the time information from the clock built into the CPU to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. When it acquires a command requesting information, such as an image transmission command, from the central monitoring device 20 via the telecommunication network 30 and the communication device 13, it likewise controls the image processing device 12, the communication device 13, and the position detection device 15, and outputs monitoring information including the image information generated by the image processing device 12, the position information of the mobile body V detected by the position detection device 15, and the time information from the clock built into the CPU, via the communication device 13 and the telecommunication network 30, to the central monitoring device 20 and to the monitoring terminal devices 10 mounted on other mobile bodies V. The on-vehicle control device 14 can store the monitoring information, including the image information, position information, and time information, for at least a predetermined time.
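As an illustration only, the monitoring information exchanged here (position, image, time, speed, sighting report) might be modeled as a record such as the following; all field names and types are assumptions, not drawn from the specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MonitoringInfo:
    """One monitoring report from a terminal device to the central device."""
    vehicle_id: str
    position: tuple                    # (latitude, longitude) from the GPS unit
    timestamp: float                   # clock time of acquisition
    image: Optional[bytes] = None      # encoded frame, when requested
    speed_kmh: Optional[float] = None  # from the vehicle speed sensor
    sighting: Optional[str] = None     # sighting report, if any
```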
For the acquired image information, the on-vehicle control device 14 can also extract the subjects included in the image information, further extract the features of those subjects, and store the subject information and feature information in association with each piece of image information. When the on-vehicle control device 14 acquires the features of a tracking target from the central monitoring device 20, it can search the stored monitoring information for image information containing a subject having those features. When image information of a subject having the features of the tracking target is found, the on-vehicle control device 14 can include notification information reporting the existence of the tracking target in the monitoring information and output it.
 Furthermore, for acquired image information, the on-vehicle control device 14 can extract the subjects contained in the image information and further extract their features; when a subject is a person, it can extract features for measuring the similarity of the human head, in particular the human face, and store the subject information and feature information in association with each piece of image information. Whether a subject is a person can be judged from its size, shape, limb movement, and so on; the head can be identified from its position within the subject (the upper part), its color, its shape, and so on; and the face portion of the head can be extracted from its color and the arrangement of characteristic parts such as the eyes, nose, mouth, and eyebrows. As facial features, the size and relative positions of characteristic parts such as the eyes, nose, mouth, and eyebrows can be extracted.
 When the on-vehicle control device 14 acquires image information of the tracking target's face, or its facial features, from the central monitoring device 20, it can search the stored monitoring information for image information whose facial similarity to the tracking target is at or above a predetermined value. When image information containing a face whose similarity is at or above the predetermined value is found, notification information announcing the presence of the tracking target can be included in the monitoring information and output. Incidentally, facial similarity can be judged quantitatively from the contour of the face and the relative positions of the eyes, eyebrows, nose, and mouth. The method of judging facial similarity is not particularly limited, and techniques known at the time of filing can be applied. The facial features of the tracking target may be computed either on the central monitoring device 20 side or on the monitoring terminal device 10 side. When the central monitoring device 20 computes the facial features, it includes the computed features in the image transmission command sent to the monitoring terminal device 10. When the monitoring terminal device 10 computes the facial features, the central monitoring device 20 sends image information of the face to the monitoring terminal device 10.
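The threshold test above can be illustrated with a small sketch. The patent deliberately leaves the similarity method open, so the choices here, reducing a face to a numeric feature vector (for example, relative positions of the eyes, nose, and mouth) and using cosine similarity as the score, are assumptions for illustration only.

```python
# Hypothetical sketch: a stored face is reported when its similarity to the
# tracking target's face vector is at or above a predetermined value.
# The feature vectors and the cosine-similarity score are assumptions.
import math

def similarity(a, b):
    """Cosine similarity between two facial feature vectors (0.0 to 1.0
    for non-negative features)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_similar_faces(stored_faces, target, threshold=0.95):
    """Return indices of stored face vectors at or above the threshold."""
    return [i for i, f in enumerate(stored_faces)
            if similarity(f, target) >= threshold]

faces = [[0.30, 0.52, 0.41],   # close to the target
         [0.90, 0.10, 0.05]]   # clearly different
target = [0.31, 0.50, 0.40]
hits = find_similar_faces(faces, target)
```

A non-empty `hits` corresponds to the case where notification information announcing the presence of the tracking target is added to the monitoring information.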
 The communication device 13 is communication means capable of wireless communication, and exchanges information with the communication device 23 of the central monitoring device 20 via the telecommunication network 30. When the telecommunication network 30 is a commercial telephone network, general-purpose mobile telephone communication devices can be used; when the telecommunication network 30 is a network dedicated to the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used. A wireless LAN, WiFi (registered trademark), WiMAX (registered trademark), Bluetooth (registered trademark), a dedicated wireless channel, or the like may be used in place of the telecommunication network 30.
 The central monitoring device 20 has an information acquisition function that acquires the position information and image information output from the monitoring terminal devices 10 described above; a storage function that stores the acquired monitoring information, at least temporarily, in the database 26 in association with the position information; and a display control function that displays map information from a map database, displays the received position information on that map information, and displays the received image information on the display 24. The central monitoring device 20 also has a monitoring point setting function that refers to the monitoring information in the database 26, identifies the tracking target to be tracked by the observer, and sets a monitoring point where the tracking target is predicted to be present; a selection function that selects the monitoring terminal devices 10 of passenger cars V belonging to a monitoring area defined with reference to the set monitoring point; and a command output function that outputs, to the selected monitoring terminal devices 10, an image transmission command causing them to transmit monitoring information including at least image information.
 The monitoring point setting function of this embodiment can also set a monitoring area based on the monitoring information acquired from each passenger car V. In this case, the monitoring point setting function sets the monitoring area based on the position information in the monitoring information, without first deriving a specific monitoring point from the monitoring information. In other words, the monitoring area may be defined in association with the position information of each piece of monitoring information, or in association with a monitoring point derived from each piece of monitoring information. The monitoring point setting function of this embodiment repeats the setting of the monitoring point at a predetermined cycle, setting a monitoring point or monitoring area that follows the position of the tracking target as it moves.
 The selection function of this embodiment selects passenger cars V capable of imaging the monitoring areas that are set in sequence with reference to new monitoring information or new monitoring points. In this example, passenger cars V belonging to (present within) the monitoring area are selected. The selection function of this embodiment also selects passenger cars V capable of imaging the monitoring point. For example, the selection function selects a passenger car V capable of imaging the monitoring point based on the positional relationship (distance and direction) between the monitoring point and the current position of the passenger car V and on the traveling direction (imaging direction) of the passenger car V.
 As a result, even when a tracking target such as a crime suspect flees, the monitoring areas are imaged by the monitoring terminal devices 10 mounted on multiple different passenger cars V, which cooperate in collecting monitoring information about the tracking target, so the movement of the tracking target can be captured continuously.
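The selection rule based on distance, direction, and traveling direction can be sketched as below. This is an illustrative assumption, not the claimed method: positions are simplified to a flat x/y plane, the imaging direction is taken to equal the vehicle's heading, and the 200 m range and 90-degree field of view are invented constants.

```python
# Hypothetical sketch: a passenger car V is selected when it is within range
# of the monitoring point and its traveling (imaging) direction points
# roughly toward that point. Flat-plane coordinates and the range/FOV
# constants are assumptions for illustration.
import math

def can_image_point(car_pos, car_heading_deg, point, max_dist=200.0, fov_deg=90.0):
    dx, dy = point[0] - car_pos[0], point[1] - car_pos[1]
    if math.hypot(dx, dy) > max_dist:
        return False                      # too far from the monitoring point
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    # Smallest signed angle between heading and bearing, in [0, 180].
    diff = abs((bearing - car_heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0          # point lies within the camera's view

def select_vehicles(cars, point):
    """cars: list of (id, (x, y), heading_deg); return ids able to image the point."""
    return [cid for cid, pos, hdg in cars if can_image_point(pos, hdg, point)]

fleet = [
    ("V1", (0.0, 0.0), 45.0),     # heading toward the monitoring point
    ("V2", (0.0, 0.0), 225.0),    # facing away from it
    ("V3", (500.0, 0.0), 180.0),  # out of range
]
chosen = select_vehicles(fleet, (100.0, 100.0))
```

Here only "V1" is chosen; the central monitoring device would then address the image transmission command to that vehicle's monitoring terminal device 10.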
 The central control device 21 comprises a CPU, ROM, and RAM. It controls the image processing device 22, the communication device 23, and the display 24; receives the position information, image information, and time information transmitted from the monitoring terminal devices 10; applies image processing as necessary; and displays the result on the display 24.
 The image processing device 22 has a map database, displays map information from that map database on the display 24, and superimposes the position information detected by the position detection device 15 of each monitoring terminal device 10 on the map information. It also applies the image processing needed to display, on the display 24, the image information captured by the on-vehicle cameras 11 of the monitoring terminal devices 10 and processed by the image processing devices 12.
 The display 24 can be configured as, for example, a liquid crystal display device large enough to show two window panes on one screen, or as two liquid crystal display devices each showing one of the two window panes. One window pane shows a screen in which the position information of each mobile body V is superimposed on the map information (see FIG. 1), and the other window pane shows the image information corresponding to the video captured by the on-vehicle cameras 11.
 The input device 25 includes a keyboard or a mouse, and is used to enter information acquisition commands to be output to a desired mobile body V and processing commands for the various information displayed on the display 24. In the monitoring terminal device 10 selection process described above, the monitoring point serving as the reference for the monitoring area can also be entered by the observer via the input device 25. Although not particularly limited, the observer may designate a monitoring point by clicking (selecting) the icon of a location superimposed on the map information, and a monitoring area can then be set with that monitoring point as its reference.
 A monitoring point in this embodiment is a location where the tracking target is predicted to be present. The method of setting the monitoring point is not limited to the method described above; a monitoring point where the tracking target is predicted to be present may also be set based on a report from a monitoring terminal device 10. For example, the central control device 21 can acquire monitoring information that includes sighting information, output from a monitoring terminal device 10, reporting that the tracking target has been sighted, and can set a monitoring point where the tracking target is predicted to be present based on the position information of the monitoring terminal device 10 that output the sighting information. This is because the passenger car V carrying the person who sighted the tracking target is located near the monitoring point, so its monitoring terminal device 10 is highly likely to be able to image the monitoring area. With this method, the monitoring point can be set automatically and accurately, without lagging behind the transmission of the sighting information for the tracking target.
 The central control device 21 can also acquire monitoring information including image information over time at a predetermined cycle, calculate the direction of movement of the tracking target specified by the observer based on the time-series image information capturing the surroundings of the mobile bodies, and set a monitoring point on the side of the calculated direction of movement. From image information capturing the surroundings of the mobile bodies at a predetermined cycle, it is possible to judge in which direction the tracking target has moved (fled), so a monitoring point where the tracking target is predicted to be present at the next monitoring timing can be set accurately.
 From the same viewpoint, the central control device 21 can acquire monitoring information including image information over time at a predetermined cycle, calculate the movement speed of the tracking target specified by the observer based on the time-series image information capturing the surroundings of the mobile bodies, and set a monitoring point according to the calculated movement speed. From image information capturing the surroundings of the mobile bodies at a predetermined cycle, the movement speed of the tracking target can be judged, for example whether the tracking target is fleeing on foot or by vehicle, so a monitoring point where the tracking target is predicted to be present at the next monitoring timing can be set accurately.
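The two ideas above, estimating the direction and the speed of movement from time-series observations and projecting them forward to the next monitoring timing, can be sketched together as a simple linear extrapolation. The flat x/y coordinates, the two-sighting estimate, and constant-velocity motion are all simplifying assumptions for illustration.

```python
# Hypothetical sketch: estimate the tracking target's direction and speed
# from two time-stamped sightings and extrapolate its position at the next
# monitoring timing. Positions are flat x/y in metres, times in seconds.

def predict_monitoring_point(p1, t1, p2, t2, t_next):
    """Extrapolate the target's position at t_next from sightings
    (p1, t1) and (p2, t2), assuming constant velocity."""
    dt = t2 - t1
    vx = (p2[0] - p1[0]) / dt   # velocity encodes both direction and speed
    vy = (p2[1] - p1[1]) / dt
    lead = t_next - t2
    return (p2[0] + vx * lead, p2[1] + vy * lead)

# Target seen at (0, 0) at t=0 s and at (20, 0) at t=10 s (2 m/s eastward);
# predicted monitoring point for the monitoring timing at t=25 s:
point = predict_monitoring_point((0.0, 0.0), 0.0, (20.0, 0.0), 10.0, 25.0)
```

In the patent's terms, `point` would become the next monitoring point around which the monitoring area is set.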
 The method of setting a monitoring point based on movement speed can be used together with the method, described above, of setting a monitoring point based on direction of movement. By considering both the direction and the speed of movement, the monitoring point can be set still more accurately. When the initially set monitoring point is on a route where the direction of movement is constrained, such as a vehicle-only road like an expressway, a bridge, or a one-way street, the monitoring point can also be set considering only the movement speed. The road information used to judge whether the direction of movement is constrained may be entered by the observer, or road information included in advance may be read from the map information.
 When the monitoring information includes time information, the next monitoring timing (the timing at which the tracking target will be at the monitoring point) can be set based on that time information. Although not particularly limited, the monitoring terminal device 10 can set a monitoring timing a predetermined time after the time at which the monitoring information capturing the tracking target was generated or acquired, and can set the monitoring point where the tracking target will be present at that monitoring timing.
 The central control device 21 sets a predetermined monitoring area with the set monitoring point as its reference, and selects the monitoring terminal devices 10 of mobile bodies V capable of imaging that monitoring area. The method of setting the monitoring area is not particularly limited; for example, the area within a predetermined distance of the monitoring point can be set as the monitoring area. As described above, the monitoring point can be set in consideration of the speed of the tracking target, and the speed of the tracking target may also be considered when setting the monitoring area. Specifically, the monitoring area can be set wide when the movement speed of the tracking target is high, and narrow when it is low. In this case, the speed of the tracking target may be taken as a speed relative to the passenger car V carrying the monitoring terminal device 10, taking into account the vehicle speed detected by the vehicle speed sensor 18 of that passenger car V.
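The speed-dependent sizing described above, a wide monitoring area for a fast-moving target and a narrow one for a slow target, can be sketched as a circle around the monitoring point whose radius grows with the estimated speed. The base radius, the time horizon, and the circular shape are invented constants for this illustration, not values from the patent.

```python
# Hypothetical sketch: the monitoring area is a circle around the monitoring
# point; its radius covers where the target could plausibly be within a
# fixed time horizon. Constants are assumptions for illustration.

def monitoring_area_radius(speed_mps, base_radius=100.0, horizon_s=60.0):
    """Radius (m) of the monitoring area for a target moving at speed_mps."""
    return base_radius + speed_mps * horizon_s

def in_monitoring_area(pos, point, radius):
    """True when pos lies inside the circular monitoring area."""
    dx, dy = pos[0] - point[0], pos[1] - point[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

r_walk = monitoring_area_radius(1.5)   # target fleeing on foot (~1.5 m/s)
r_car = monitoring_area_radius(15.0)   # target fleeing by car (~15 m/s)
```

Vehicles whose positions satisfy `in_monitoring_area` would then be the candidates for the image transmission command.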
 The central control device 21 outputs, to the monitoring terminal device 10 of each selected mobile body V, an image transmission command causing it to transmit monitoring information including image information. In response, the on-vehicle control device 14 of the monitoring terminal device 10 sends out monitoring information including image information of the monitoring area. As noted earlier, a fixed camera has a limited imaging area, so once the tracking target leaves that area it can no longer be imaged. On the other hand, even when a camera mounted on a mobile body images the tracking target, both the camera and the tracking target move at random, so a particular tracking target cannot be monitored continuously. In contrast, the monitoring system 1 of this embodiment has monitoring information including image information sent by the monitoring terminal devices 10 capable of imaging the monitoring area based on the monitoring point where the tracking target is predicted to be present, so image information of the monitoring area where the randomly moving tracking target is predicted to be present can be collected effectively. In other words, wherever the tracking target is, it can be imaged by the movable monitoring terminal devices 10.
 The central control device 21 can also send out an image transmission command that conveys the features of the tracking target. Specifically, the central control device 21 has a feature extraction function that extracts the features of the tracking target from image information acquired in the past. In this embodiment, the features of the tracking target include the tracking target's color, size, movement speed, number, appearance, and so on. Specifically, when the tracking target is a person, features such as the color of the clothing, height, means of flight (fleeing on foot or by car), number of people, hairstyle, and gender can be used. By including such features in the image transmission command, information about the tracking target can be retrieved from the image information the monitoring terminal device 10 captured in the past. For example, the central control device 21 outputs, to the selected monitoring terminal devices 10, an image transmission command causing them to transmit monitoring information about subjects having the extracted features. This allows the occupants of the mobile bodies V and the monitoring terminal devices 10 to recognize a concrete tracking target, improving monitoring accuracy. For example, when the features "red clothing, 190 cm tall, male" are sent to the mobile bodies V capable of imaging the monitoring area (for example, the mobile bodies V present within the monitoring area), the attention of their occupants is drawn and the occupants can be expected to watch their surroundings more closely. As a result, sighting information about the tracking target can be collected, and losing sight of the tracking target can be prevented.
 The on-vehicle control device 14 of a monitoring terminal device 10 that has received the image transmission command refers to the feature-analysis data for image information captured in the past and searches for subjects having the features included in the image transmission command. When image information of a subject having the features of the tracking target is found, the on-vehicle control device 14 of the monitoring terminal device 10 includes notification information announcing the presence of the tracking target in the monitoring information it outputs, so the location of the randomly moving tracking target can be identified quickly and its image information collected effectively. As a result, whether or not the tracking target has been lost from sight, its footsteps can be traced reliably by searching for subjects sharing its features.
 Furthermore, when the tracking target is a person, the central control device 21 can send out an image transmission command that conveys the features of the person's face. Specifically, the central control device 21 has a feature extraction function that extracts the features of the tracking target from image information acquired in the past, and a feature evaluation function that judges, based on those features, whether the tracking target is a person. Whether a subject is a person can be judged using techniques known at the time of filing, based on features of the subject in the image information such as its size, shape, and limb movement. When the tracking target is judged to be a person, the central control device 21 outputs, to the monitoring terminal devices 10 of the selected mobile bodies V, an image transmission command causing them to transmit monitoring information including the image information of the tracking target's face obtained from the image information. The face portion of a human tracking target can likewise be identified using known feature extraction techniques.
 The on-vehicle control device 14 of a monitoring terminal device 10 that has received the image transmission command refers to the feature-analysis data prepared for similarity judgment on the face images contained in image information captured in the past, and searches for subjects sharing features with the face in the image information included in the image transmission command. When image information of a subject resembling the face of the tracking target (that is, with a similarity at or above a predetermined value) is found, the on-vehicle control device 14 of the monitoring terminal device 10 includes notification information announcing the presence of the tracking target in the monitoring information it outputs, so the location of the randomly moving tracking target can be found quickly and its image information collected effectively. Since the face of the person being tracked can thus be made known, via each monitoring terminal device, to the occupants of other mobile bodies V, the footsteps of the tracking target can be traced accurately by searching for subjects sharing its features, whether or not the tracking target has been lost from sight.
 Furthermore, when image information whose similarity to the image information of the tracking target's face is at or above a predetermined value is found, the monitoring terminal device 10 can output the retrieved image information, that is, the image information of the tracking target's face, to the monitoring terminal devices 10 mounted on other mobile bodies V via wireless communication (vehicle-to-vehicle communication). This allows the monitoring terminal device 10 capable of imaging the monitoring area and the monitoring terminal devices 10 located nearby to share information on the tracking target's appearance, strengthening the search for the tracking target.
 In addition to the monitoring terminal devices 10 mounted on mobile bodies V, the monitoring terminal devices 10 described above can include fixed monitoring terminal devices 10 that are installed at predetermined locations and have an image generation function that images the surroundings and generates image information. This makes effective use of existing fixed cameras and allows image information from different viewpoints to be acquired. The position information of a fixed monitoring terminal device 10 can be stored in advance.
 The communication device 23 is communication means capable of wireless communication, and exchanges information with the communication devices 13 of the monitoring terminal devices 10 via the telecommunication network 30. When the telecommunication network 30 is a commercial telephone network, general-purpose mobile telephone communication devices can be used; when the telecommunication network 30 is a network dedicated to the monitoring system 1 of this example, dedicated communication devices 13 and 23 can be used.
 Next, the mounting positions and imaging ranges of the on-vehicle cameras 11a to 11e will be described. Here, a passenger car V is taken as an example of the mobile body V. The cameras 11a to 11e are built with image sensors such as CCDs. The four on-vehicle cameras 11a to 11d are installed at different positions on the exterior of the passenger car V and each captures one of four directions around the vehicle, while the one on-vehicle camera 11e is installed inside the passenger car V and captures the vehicle interior. The cameras of this embodiment have a zoom function for imaging a subject at magnification, and can change the focal length, or the imaging magnification, arbitrarily according to a control command.
 For example, as shown in FIG. 3, the on-vehicle camera 11a, installed at a predetermined position at the front of the passenger car V such as the front grille, images objects and the road surface within area SP1 in front of the passenger car V and in the space ahead of it (front view). The on-vehicle camera 11b, installed at a predetermined position on the left side of the passenger car V such as the left side mirror, images objects and the road surface within area SP2 on the left of the passenger car V and in the surrounding space (left side view). The on-vehicle camera 11c, installed at a predetermined position at the rear of the passenger car V such as the rear finisher or roof spoiler, images objects and the road surface within area SP3 behind the passenger car V and in the space behind it (rear view). The on-vehicle camera 11d, installed at a predetermined position on the right side of the passenger car V such as the right side mirror, images objects and the road surface within area SP4 on the right of the passenger car V and in the surrounding space (right side view). Although omitted from FIG. 3, the on-vehicle camera 11e is installed in the passenger compartment, for example on the ceiling, and images the interior area SP5 as shown in FIG. 4; it serves for crime prevention and crime reporting, for example against fare evasion or robbery in a taxi.
FIG. 4 is a view of the arrangement of the on-vehicle cameras 11a to 11e seen from above the passenger car V. As shown in the figure, the four cameras — the on-vehicle camera 11a capturing area SP1, the on-vehicle camera 11b capturing area SP2, the on-vehicle camera 11c capturing area SP3, and the on-vehicle camera 11d capturing area SP4 — are installed along the outer periphery VE of the body of the passenger car V in the counterclockwise or clockwise direction. That is, viewed counterclockwise along the outer periphery VE of the body of the passenger car V in the direction of arrow C in the figure, the on-vehicle camera 11b is installed to the left of the on-vehicle camera 11a, the on-vehicle camera 11c to the left of the on-vehicle camera 11b, the on-vehicle camera 11d to the left of the on-vehicle camera 11c, and the on-vehicle camera 11a to the left of the on-vehicle camera 11d. Conversely, viewed clockwise along the outer periphery VE of the body of the passenger car V, opposite to the direction of arrow C in the figure, the on-vehicle camera 11d is installed to the right of the on-vehicle camera 11a, the on-vehicle camera 11c to the right of the on-vehicle camera 11d, the on-vehicle camera 11b to the right of the on-vehicle camera 11c, and the on-vehicle camera 11a to the right of the on-vehicle camera 11b.
FIG. 5A shows an example of an image GSP1 of area SP1 captured by the front on-vehicle camera 11a, FIG. 5B shows an example of an image GSP2 of area SP2 captured by the left-side on-vehicle camera 11b, FIG. 5C shows an example of an image GSP3 of area SP3 captured by the rear on-vehicle camera 11c, FIG. 5D shows an example of an image GSP4 of area SP4 captured by the right-side on-vehicle camera 11d, and FIG. 5E shows an example of an image GSP5 of the interior area SP5 captured by the interior on-vehicle camera 11e. Incidentally, the size of each image is 480 pixels high by 640 pixels wide. The image size is not particularly limited, as long as it is a size that a typical terminal device can play back as video.
The number and positions of the on-vehicle cameras 11 can be determined appropriately according to the size and shape of the passenger car V, the method used to set the detection areas, and so on. Each of the plural on-vehicle cameras 11 described above is assigned an identifier corresponding to its position, and the on-vehicle control device 14 can identify each on-vehicle camera 11 based on these identifiers. Further, by attaching an identifier to a command signal, the on-vehicle control device 14 can transmit an imaging command or other commands to a specific on-vehicle camera 11.
The on-vehicle control device 14 controls the image processing device 12 to acquire the imaging signals captured by the on-vehicle cameras 11, and the image processing device 12 processes the imaging signal from each on-vehicle camera 11 and converts it into the image information shown in FIGS. 5A to 5E. The on-vehicle control device 14 then generates a monitoring image based on the four pieces of image information shown in FIGS. 5A to 5D (image generation function), associates with the monitoring image mapping information for projecting it onto a projection plane set on the side surface of a columnar projection model (mapping information addition function), and outputs the result to the central monitoring device 20. The image generation function and the mapping information addition function are described in detail below.
The processing of generating a monitoring image based on the four pieces of image information capturing the surroundings of the passenger car V and associating mapping information with it may be executed by the monitoring terminal device 10, as in this example, or by the central monitoring device 20. In the latter case, the four pieces of image information capturing the surroundings of the passenger car V are transmitted as they are from the monitoring terminal device 10 to the central monitoring device 20, and the image processing device 22 and the central control device 21 of the central monitoring device 20 generate the monitoring image, associate the mapping information, and perform the projection conversion.
First, the image generation function will be described. The on-vehicle control device 14 of the monitoring terminal device 10 of the present embodiment controls the image processing device 12 to acquire the imaging signals of the on-vehicle cameras 11a to 11e, and generates a single monitoring image in which the image information of the on-vehicle cameras 11a to 11d, installed clockwise or counterclockwise along the outer periphery of the body of the passenger car V, is arranged in the order in which those cameras are installed.
As described above, in the present embodiment the four on-vehicle cameras 11a to 11d are installed counterclockwise along the outer periphery VE of the body of the passenger car V in the order 11a, 11b, 11c, 11d. The on-vehicle control device 14 therefore joins the four images captured by the on-vehicle cameras 11a to 11d horizontally into a single monitoring image, following the installation order of the cameras (11a → 11b → 11c → 11d). In the monitoring image of the present embodiment, each image is arranged so that the ground contact surface (road surface) of the passenger car V forms its lower edge, and the images are connected to one another along their sides in the height (vertical) direction with respect to the road surface.
FIG. 6 shows an example of the monitoring image K. As shown in the figure, the monitoring image K of the present embodiment is composed, along the direction P from the left to the right of the drawing, of the captured image GSP1 of area SP1 taken by the front on-vehicle camera 11a, the captured image GSP2 of area SP2 taken by the left-side on-vehicle camera 11b, the captured image GSP3 of area SP3 taken by the rear on-vehicle camera 11c, and the captured image GSP4 of area SP4 taken by the right-side on-vehicle camera 11d, arranged horizontally in this order so that the four images form one continuous image. By displaying the monitoring image K generated in this way in order from the left end to the right, with the image corresponding to the road surface (the ground contact surface of the vehicle) at the bottom, the observer can view the surroundings of the vehicle V on the display 24 as if looking around the vehicle counterclockwise.
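As a minimal sketch of this image generation function, assuming the camera frames are available as NumPy arrays (the 480 × 640 frame size follows FIGS. 5A to 5D; the function name and dummy data are illustrative, not from the patent):

```python
import numpy as np

def generate_monitoring_image(gsp1, gsp2, gsp3, gsp4):
    """Join the four camera frames side by side in the installation order
    11a -> 11b -> 11c -> 11d (counterclockwise along the body VE)."""
    frames = [gsp1, gsp2, gsp3, gsp4]
    assert all(f.shape == frames[0].shape for f in frames), "frames must match"
    # Horizontal concatenation keeps each frame's vertical orientation,
    # so the ground contact surface stays at the lower edge of every sub-image.
    return np.hstack(frames)

# Four dummy 480x640 grayscale frames standing in for GSP1..GSP4.
frames = [np.full((480, 640), i, dtype=np.uint8) for i in range(4)]
monitoring_image = generate_monitoring_image(*frames)  # shape (480, 2560)
```

Reading the result left to right then reproduces a counterclockwise look around the vehicle, as described above.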
When one monitoring image K is generated, four images acquired with the imaging timings of the on-vehicle cameras 11a to 11d made substantially simultaneous are used. This synchronizes the information contained in the monitoring image K, so that the situation around the vehicle at a given timing can be represented accurately.
Monitoring images K generated from captured images whose imaging timings are substantially simultaneous may also be stored over time to generate a moving-image monitoring image K containing a plurality of monitoring images K per predetermined unit time. By generating the moving-image monitoring image K from images with simultaneous imaging timings, changes in the situation around the vehicle can be represented accurately.
If, instead, the images of each imaging area were stored over time and a moving-image monitoring image K generated separately for each imaging area were transmitted to the central monitoring device 20, the central monitoring device 20 might, depending on its capabilities, be unable to play back multiple videos simultaneously. Such a conventional central monitoring device 20 cannot play back and display multiple videos at the same time, so to play each video the screen must be switched and the videos played one by one. That is, a conventional central monitoring device 20 cannot show videos of multiple directions simultaneously, and thus cannot monitor the entire surroundings of the vehicle on a single screen.
In contrast, the on-vehicle control device 14 of the present embodiment generates one monitoring image K from a plurality of images, so images of different imaging directions can be played back as video simultaneously regardless of the capabilities of the central monitoring device 20. That is, by playing back the monitoring images K continuously (video playback), the four images contained in each monitoring image K are played back simultaneously and continuously, and state changes in areas of different directions can be monitored on a single screen.
The monitoring terminal device 10 of the present embodiment can also generate the monitoring image K by compressing the image data so that the number of pixels of the monitoring image K is substantially the same as the number of pixels of an image of the on-vehicle cameras 11a to 11d. While the size of each image shown in FIGS. 5A to 5E is 480 × 640 pixels, in the present embodiment compression is performed so that the size of the monitoring image K becomes 1280 × 240 pixels, as shown in FIG. 6. As a result, the size of the monitoring image K (1280 × 240 = 307,200 pixels) equals the size of each individual camera image (480 × 640 = 307,200 pixels), so image processing and image playback can be performed regardless of the capabilities of the central monitoring device 20 that receives the monitoring image K.
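A hedged sketch of this compression step, assuming NumPy arrays; naive 2× subsampling is used here purely for illustration (a real implementation would low-pass filter before decimating, and the patent does not specify the resampling method):

```python
import numpy as np

def compress_and_stitch(frames):
    """Reduce each 480x640 frame to 240x320 and stitch horizontally, so the
    monitoring image K is 1280x240 and carries exactly as many pixels as
    one original camera frame."""
    reduced = [f[::2, ::2] for f in frames]   # 480x640 -> 240x320 (subsampling)
    return np.hstack(reduced)                  # -> 240 rows x 1280 columns

frames = [np.zeros((480, 640), dtype=np.uint8) for _ in range(4)]
k = compress_and_stitch(frames)  # shape (240, 1280)
```

Note that `k.size` is 307,200 pixels, matching a single 480 × 640 camera image as stated above.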
Furthermore, the on-vehicle control device 14 of the present embodiment can add line figures indicating the boundaries between the arranged images to the monitoring image K. Taking the monitoring image K of FIG. 6 as an example, the on-vehicle control device 14 can add rectangular partition images Bb, Bc, Bd, Ba, Ba' between the images as line figures indicating their boundaries. By placing partition images at the boundaries of the four images in this way, each of the images with different imaging directions can be recognized separately within the continuous monitoring image K. In other words, each partition image functions as a picture frame for its captured image. Moreover, since image distortion is large near the boundary of each captured image, placing a partition image at the boundary makes it possible to hide the image of the highly distorted region, or to indicate that the distortion there is large.
The on-vehicle control device 14 of the present embodiment can also generate the monitoring image K after correcting the distortion that would occur when the four images are projected onto the projection plane set on the side surface of the projection model described later. The peripheral region of a captured image is prone to distortion, and particularly when the on-vehicle camera 11 uses a wide-angle lens the distortion of the captured image tends to be large; it is therefore desirable to correct the distortion of the captured images using a predefined image conversion algorithm and correction amount.
Although not particularly limited, as shown in FIG. 7 the on-vehicle control device 14 can read from ROM information on the same projection model as the projection model onto which the monitoring image K is projected in the central monitoring device 20, project the captured images onto the projection plane of this model, and correct in advance the distortion that arises on the projection plane. The image conversion algorithm and the correction amount can be defined appropriately according to the characteristics of the on-vehicle cameras 11 and the shape of the projection model. By correcting in advance the distortion that arises when the image K is projected onto the projection plane of the projection model, a highly visible monitoring image K with little distortion can be provided. Correcting the distortion in advance also reduces the positional misalignment between the images arranged side by side.
Next, the mapping information addition function will be described. In the monitoring terminal device 10 of the present embodiment, the on-vehicle control device 14 executes a process of associating with the monitoring image K mapping information for projecting the generated monitoring image K onto the projection plane set on the side surface of a columnar projection model M whose bottom surface is the ground contact surface of the passenger car V. The mapping information is information that allows the central monitoring device 20 receiving the monitoring image K to easily recognize the projection reference position. FIG. 8 shows an example of the projection model M of the present embodiment, and FIG. 9 is a schematic cross-sectional view of the projection model M of FIG. 8 along the xy plane.
As shown in FIGS. 8 and 9, the projection model M of the present embodiment is a regular octagonal prism whose bottom surface is a regular octagon and which has height along the vertical direction (the z-axis direction in the figures). The shape of the projection model M is not particularly limited as long as it is a column having side surfaces adjacent along the boundary of its bottom surface; it may be a cylinder, a prism such as a triangular, quadrangular, or hexagonal prism, or an antiprism whose bottom surface is a polygon and whose side surfaces are triangles.
As shown in the same figures, the bottom surface of the projection model M of the present embodiment is parallel to the ground contact surface of the passenger car V. On the inner side surfaces of the projection model M, projection planes Sa, Sb, Sc, and Sd (hereinafter collectively referred to as projection plane S) are set, onto which video of the surroundings of the passenger car V resting on the bottom surface of the projection model M is projected. The projection plane S can also be composed of a part of projection plane Sa and a part of projection plane Sb, a part of Sb and a part of Sc, a part of Sc and a part of Sd, and a part of Sd and a part of Sa. The monitoring image K is projected onto the projection plane S as video looking down on the passenger car V from viewpoints R (R1 to R8, hereinafter collectively referred to as viewpoint R) above the projection model M surrounding the passenger car V.
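The geometry of the octagonal prism can be sketched as follows (a hypothetical illustration, not from the patent: a unit circumradius is assumed, and the eight side faces of M follow from consecutive vertex pairs of the bottom octagon):

```python
import math

def octagon_vertices(radius=1.0):
    """Vertices of the regular-octagon bottom face of the projection model M,
    centered on the vehicle's ground contact surface (z = 0 plane)."""
    return [(radius * math.cos(2 * math.pi * i / 8),
             radius * math.sin(2 * math.pi * i / 8)) for i in range(8)]

verts = octagon_vertices()
# Each side face of the prism spans verts[i] -> verts[(i + 1) % 8],
# extruded upward along the z axis.
```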
The on-vehicle control device 14 of the present embodiment associates the reference coordinates of the captured image arranged at the right end or the left end with the monitoring image K as mapping information. Taking the monitoring image K of FIG. 6 as an example, the on-vehicle control device 14 attaches to the monitoring image K, as mapping information (reference coordinates) indicating the start position or end position of the monitoring image K when projected onto the projection model M, the coordinates A (x, y) of the upper-left vertex of the captured image GSP1 arranged at the right end and the coordinates B (x, y) of the upper-right vertex of the captured image GSP2 arranged at the left end. The reference coordinates of a captured image indicating the start or end position are not particularly limited, and may instead be the lower-left vertex of the image arranged at the left end of the monitoring image K or the lower-right vertex of the image arranged at the right end. The mapping information may be attached to each pixel of the image data of the monitoring image K, or may be managed as a file separate from the monitoring image K.
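One of the two options mentioned above (managing the mapping information separately from the pixel data) could look like this minimal sketch; every field name and coordinate value here is illustrative, not specified by the patent:

```python
# Hypothetical metadata record transmitted alongside the monitoring image K.
monitoring_info = {
    "image_id": 1,                        # illustrative frame identifier
    "reference_coords": {
        "A": (1279, 0),                   # upper vertex at one end of K (illustrative)
        "B": (0, 0),                      # upper vertex at the other end of K
    },
    "partition_images": ["Bb", "Bc", "Bd", "Ba", "Ba'"],  # boundary line figures
    "timestamp": "2013-01-28T09:00:00",   # illustrative imaging time
}
```

On the receiving side, the projection routine would read `reference_coords` to locate the start point of K on the projection model before mapping pixels to faces.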
By thus associating with the monitoring image K, as mapping information, information indicating its start or end position, that is, the reference coordinates used as the basis for the projection processing, the central monitoring device 20 that receives the monitoring image K can easily recognize the reference position during projection processing, and can therefore project the monitoring image K, in which the images are arranged in the installation order of the on-vehicle cameras 11a to 11d, easily and quickly in sequence onto the projection plane S on the side surfaces of the projection model M. That is, as shown in FIG. 9, the captured image GSP1 of the area ahead of the vehicle can be projected onto the projection plane Sa located in the imaging direction of the on-vehicle camera 11a, the captured image GSP2 of the left side of the vehicle onto the projection plane Sb located in the imaging direction of the on-vehicle camera 11b, the captured image GSP3 of the area behind the vehicle onto the projection plane Sc located in the imaging direction of the on-vehicle camera 11c, and the captured image GSP4 of the right side of the vehicle onto the projection plane Sd located in the imaging direction of the on-vehicle camera 11d.
As a result, the monitoring image K projected onto the projection model M can show video as if looking around the surroundings of the passenger car V. That is, since the monitoring image K, containing four images arranged in a horizontal row according to the installation order of the on-vehicle cameras 11a to 11d, is projected onto side surfaces of the columnar projection model M that are likewise arranged horizontally, the monitoring image K projected onto the projection plane S of the columnar projection model M reproduces the video of the surroundings of the passenger car V while maintaining its positional relationships.
The on-vehicle control device 14 of the present embodiment can store the correspondence between each coordinate value of the monitoring image K and the coordinate values of each projection plane S of the projection model M as mapping information and attach it to the monitoring image K, but this correspondence may instead be stored in advance in the central monitoring device 20.
The positions of the viewpoints R and the projection plane S shown in FIGS. 8 and 9 are examples and can be set arbitrarily. In particular, the viewpoint R can be changed by an operator's operation. The relationship between the viewpoint R and the projection position of the monitoring image K is defined in advance, and when the position of the viewpoint R is changed, the monitoring image K as seen from the newly set viewpoint R can be projected onto the projection planes S (Sa to Sd) by executing a predetermined coordinate conversion. A known method can be used for this viewpoint conversion processing.
As described above, the on-vehicle control device 14 of the present embodiment generates the monitoring image K based on image information captured at a predetermined timing, associates with the monitoring image K the mapping information, the reference coordinates, and the information on the line figures (partition images) indicating the boundaries, and stores them over time according to the imaging timing. Although not particularly limited, the on-vehicle control device 14 may store the monitoring images K as a single video file containing a plurality of monitoring images K per predetermined unit time, or may store the monitoring images K in a form that allows transfer and playback by streaming.
The communication device 23 of the central monitoring device 20 receives the monitoring image K transmitted from the monitoring terminal device 10 and the mapping information associated with it. The image information captured by the interior on-vehicle camera 11e is received separately. In this monitoring image K, as described above, the images of the four on-vehicle cameras 11 installed at different positions on the body of the passenger car V are arranged according to the installation order of the on-vehicle cameras 11a to 11d placed clockwise or counterclockwise along the outer periphery of the body of the vehicle V. Mapping information for projecting the monitoring image K onto the projection plane S of the octagonal prism projection model M is also associated with the monitoring image K. The communication device 23 transmits the acquired monitoring image K and mapping information to the image processing device 22.
The image processing device 22 reads out the projection model M stored in advance and, based on the mapping information, generates a display image in which the monitoring image K is projected onto the projection planes Sa to Sd set on the side surfaces of the octagonal prism projection model M whose bottom surface is the ground contact surface of the passenger car V shown in FIGS. 8 and 9. Specifically, each pixel of the received monitoring image K is projected onto the corresponding pixels of the projection planes Sa to Sd according to the mapping information. When projecting the monitoring image K onto the projection model M, the image processing device 22 recognizes the start point of the monitoring image K (its right or left end) based on the reference coordinates received with the monitoring image K, and performs the projection processing so that this start point coincides with the start point defined in advance on the projection model M (the right or left end of the projection plane S). When projecting the monitoring image K onto the projection model M, the image processing device 22 also arranges on the projection model M the line figures (partition images) indicating the boundaries between the images. The partition images may be attached to the projection model M in advance, or attached to the monitoring image K after the projection processing.
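The central-side face assignment can be sketched as follows, assuming NumPy arrays; splitting K into equal-width quarters keyed by the mapping information is an illustrative simplification of the pixel-level projection described above:

```python
import numpy as np

def assign_to_faces(k, face_names=("Sa", "Sb", "Sc", "Sd")):
    """Split the received monitoring image K back into its four camera
    images, each assigned to the projection plane lying in that camera's
    imaging direction. Assumes the start point of K has already been
    aligned using the reference coordinates."""
    width = k.shape[1] // len(face_names)
    return {name: k[:, i * width:(i + 1) * width]
            for i, name in enumerate(face_names)}

# Dummy 240x1280 monitoring image made of four distinguishable quarters.
k = np.hstack([np.full((240, 320), i, dtype=np.uint8) for i in range(4)])
faces = assign_to_faces(k)  # faces["Sa"] .. faces["Sd"], each 240x320
```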
The display 24 displays the monitoring image K projected onto the projection plane S of the projection model M. FIG. 10 shows an example of a display image of the monitoring image K. By using an input device 25 such as a mouse or keyboard, or by making the display 24 a touch-panel input device 25, the observer can freely set and change the viewpoint. Since the correspondence between viewpoint position and projection plane S is defined in advance in the image processing device 22 or the display 24, the monitoring image K corresponding to the changed viewpoint can be displayed on the display 24 based on this correspondence.
Next, the operation of the monitoring system 1 according to the present embodiment will be described. FIG. 11 is a flowchart showing the operation on the monitoring terminal device 10 side, FIGS. 12A and 12B are flowcharts showing the operation on the central monitoring device 20 side, and FIG. 13 shows an example of information in the database.
As shown in FIG. 11, the monitoring terminal device 10 acquires surrounding video and interior video from the on-vehicle cameras 11 at predetermined time intervals (one routine shown in the figure), and the image processing device 12 converts them into image information (step ST1). The current position information of the passenger car V on which the monitoring terminal device 10 is mounted is detected by the position detection device 15 equipped with GPS (step ST2). The position detection device 15 can also be configured as part of the navigation device 19.
In step ST3, it is determined whether the report button 16 for reporting an abnormality has been pressed. If the report button 16 has been pressed, the process proceeds to step ST4, where the image information acquired in step ST1, the position information acquired in step ST2, and the time information of the CPU are associated with one another and transmitted, together with abnormality information indicating that an abnormality has occurred, as monitoring information to the central monitoring device 20 via the communication device 13 and the telecommunication network 30. The occurrence of a security-related abnormality such as an accident or a crime is thus automatically transmitted to the central monitoring device 20 together with the position information of the passenger car V and the image information of its surroundings, so that surveillance of the town is further strengthened. In this example the image information and the position information are acquired in the initial steps ST1 and ST2, but they may instead be acquired at a timing between steps ST3 and ST4.
Returning to step ST3, if the report button 16 has not been pressed, the process proceeds to step ST5, where the monitoring terminal device 10 communicates with the central monitoring device 20 and acquires a control command.
Next, in step ST6, the monitoring terminal device 10 determines whether an image transmission command has been received from the central monitoring device 20. If so, the process proceeds to step ST7, where monitoring information including image information, position information, and time information is transmitted to the central monitoring device 20. When the image transmission command includes features of the tracking target and image information of a subject having those features is found, the monitoring terminal device 10 can transmit monitoring information that includes report information indicating that the tracking target is present. Likewise, when the image transmission command includes image information of the tracking target's face and image information of a subject whose face resembles it (that is, whose similarity is equal to or greater than a predetermined value) is found, the monitoring terminal device 10 can transmit monitoring information that includes report information indicating that the tracking target is present. Furthermore, when the image transmission command includes a storage command, the image information, position information, and time information are stored.
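The similarity test described here — report the target when a detected face's similarity meets or exceeds a predetermined value — can be sketched as follows. This is only an illustrative fragment: the similarity scores themselves are assumed to come from some face-matching routine not specified in the text, and the threshold value is a made-up figure.

```python
SIMILARITY_THRESHOLD = 0.8  # the "predetermined value"; illustrative figure only

def should_report(similarities, threshold=SIMILARITY_THRESHOLD):
    """Return True if any detected face is similar enough to the tracking
    target's face to warrant including report information in the
    monitoring information sent to the central monitoring device."""
    return any(s >= threshold for s in similarities)
```

A terminal would call this on the similarity scores of faces found in the current frame and, if it returns true, attach the report information to the outgoing monitoring information.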
Returning to step ST6, even when no image transmission command has been received from the central monitoring device 20, if the passenger car V is located in a predefined priority monitoring area (step ST8), the process proceeds to step ST10 and monitoring information including image information is transmitted. Otherwise, when no image transmission command has been received and the vehicle is not in a priority monitoring area, the process proceeds to step ST9, and monitoring information not including image information, that is, time information and position information, is transmitted to the central monitoring device 20.
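The terminal-side decision logic of steps ST3 through ST10 can be sketched as a single routine. This is a minimal illustration under assumed names (`MonitoringInfo`, `run_routine`, and the flag arguments are all hypothetical), not the patented implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MonitoringInfo:
    position: tuple          # from the GPS position detector (ST2)
    time: float              # CPU time information
    image: Optional[bytes]   # in-vehicle camera image (ST1); None when not required
    abnormality: bool = False

def run_routine(image, position, time, report_pressed, image_command, in_priority_area):
    """One routine of FIG. 11: decide what monitoring information to send."""
    if report_pressed:                     # ST3 -> ST4: report button pressed
        return MonitoringInfo(position, time, image, abnormality=True)
    if image_command or in_priority_area:  # ST6/ST8 -> ST7/ST10: include the image
        return MonitoringInfo(position, time, image)
    return MonitoringInfo(position, time, None)  # ST9: position and time only
```

For example, a terminal inside a priority monitoring area transmits its image information even when no image transmission command has arrived, matching the ST8/ST10 branch.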
Next, the operation on the central monitoring device 20 side will be described with reference to FIGS. 12A and 12B. In step ST11 of FIG. 12A, position information and time information are acquired from all the passenger cars V and stored, at least temporarily, in the database 26. FIG. 13 shows an example of the information stored in the database 26. As shown in FIG. 13, the monitoring information acquired from each passenger car V (monitoring terminal device 10), including image information, position information, time information, features of the tracking target, and facial features of the tracking target, is stored in association with the position information. That is, by specifying position information, the corresponding series of monitoring information can be retrieved. The monitoring information may also include a mobile unit ID (monitoring terminal device ID) for identifying the monitoring terminal device 10; this ID may be the address of the communication device 13 of the monitoring terminal device 10.
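The association described for database 26 — monitoring information stored so that specifying a position retrieves the corresponding series of records — can be sketched as below. The class and field names are assumptions for illustration; the actual schema of database 26 is not given in the text.

```python
from collections import defaultdict

class MonitoringDatabase:
    """Minimal stand-in for database 26: monitoring information keyed by
    position so that specifying a position retrieves the associated records."""
    def __init__(self):
        self._by_position = defaultdict(list)

    def store(self, position, record):
        # record may carry image, time, tracking-target features, mobile unit ID, ...
        self._by_position[position].append(record)

    def lookup(self, position):
        # returns every monitoring record associated with this position
        return self._by_position[position]
```

Keying by position mirrors the text's statement that a series of monitoring information can be called up from position information alone.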
In step ST12, based on the position information acquired in step ST11, each passenger car V is displayed on the map information of the map database shown on the display 24, as illustrated in the upper left of FIG. 1. Since the position information of each passenger car V is acquired and transmitted at a predetermined timing in each routine of FIG. 11, the supervisor can grasp the current position of each passenger car V in a timely manner.
In step ST13, it is determined whether abnormality information reported from the monitoring terminal device 10 of a passenger car V, that is, a report that a security-related abnormality such as an accident or crime has occurred, or sighting information reported from the monitoring terminal device 10 of a passenger car V, that is, a report that the tracking target has been sighted, has been received. This abnormality information or sighting information is output when an occupant of the passenger car V presses the report button 16 of the monitoring terminal device 10.
If abnormality information or sighting information has been received, the passenger car V that output it is identified in step ST14, image information and time information are received from that car's monitoring terminal device 10, and the image information is displayed on the display 24. In addition, as shown in the upper left of FIG. 1, the car is highlighted on the map information, for example by changing its color, so that it can be distinguished from the other passenger cars. This allows the position where the abnormality occurred or the sighting was reported to be seen on the map information, while the nature of the abnormality or the sighted tracking target can be checked on the display 24. The processing of steps ST13 to ST20 is described here for the case where abnormality information or sighting information has been reported, in which the position of the reporting passenger car V is selected as the monitoring point; however, the same processing of steps ST13 to ST20 can be executed even when no such report has been made and the supervisor instead designates an arbitrary location to be monitored (a monitoring point). In that case, the location designated by the supervisor becomes the monitoring point.
In step ST15, the central monitoring device 20 sets the position of the passenger car V that output the abnormality information as a monitoring point, that is, a point where the tracking target to be watched is predicted to be present. This is because the tracking target is highly likely to be present at a point where sighting information has been reported, and the person who caused an abnormality, or a person involved in it, is highly likely to be present at a point where abnormality information has been reported. In this example the monitoring point is the position of the passenger car V that reported the abnormality information or sighting information, but the supervisor may also set it arbitrarily.
Further, in step ST16, the central monitoring device 20 selects, with the monitoring point as a reference, other vehicles, that is, monitoring terminal devices 10, capable of imaging the monitoring area set around that monitoring point. As one example, the central monitoring device 20 selects other vehicles (monitoring terminal devices 10) located within a monitoring area that lies within a predetermined distance of the monitoring point, or other vehicles capable of imaging the monitoring point itself, for example vehicles within a predetermined distance of the monitoring point or vehicles in a predetermined direction relative to it (vehicles whose imaging direction faces the monitoring point).
As described above, the monitoring area may be a circular area at a uniform distance from the monitoring point, a band-shaped area extending a predetermined distance along the road containing the monitoring point, or, where a right or left turn is possible at an intersection or the like, a fan-shaped area with a predetermined radius and a predetermined central angle. The central monitoring device 20 can also select the monitoring terminal devices 10 of other vehicles that are approaching the monitoring area and can enter it within a predetermined time: even a vehicle not currently inside the monitoring area will be able to image it after the predetermined time has elapsed, provided it is approaching.
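The circular and fan-shaped monitoring areas described above can be expressed as simple geometric membership tests. The sketch below is illustrative only (planar coordinates, function names assumed); the band-shaped area along a road is omitted because it depends on the road geometry.

```python
import math

def in_circular_area(point, center, radius):
    """Circular monitoring area: within a uniform distance of the monitoring point."""
    return math.dist(point, center) <= radius

def in_fan_area(point, center, radius, heading_deg, half_angle_deg):
    """Fan-shaped monitoring area: within a predetermined radius and central
    angle, e.g. covering a possible right or left turn at an intersection."""
    if math.dist(point, center) > radius:
        return False
    angle = math.degrees(math.atan2(point[1] - center[1], point[0] - center[0]))
    diff = (angle - heading_deg + 180) % 360 - 180  # signed angular difference
    return abs(diff) <= half_angle_deg
```

A vehicle's reported position would be tested against the chosen area shape to decide whether its monitoring terminal device 10 is a selection candidate.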
In the present embodiment, in consideration of the possibility that the tracking target may be lost and that selection of the monitoring point may lag behind the target's movement, not only passenger cars V approaching the monitoring area but also passenger cars V moving away from it are candidates for selection. A passenger car V moving away from the monitoring area may have imaged the monitoring area in the past. For such cars, the central monitoring device 20 preferably sends an image transmission command specifying the imaging time of the image information. In this way, even when the monitoring area is set some time after sighting information was reported or after an incident occurred, image information from before and after the report or the incident can be collected retroactively by issuing an image transmission command specifying imaging times around the time of the report or the incident.
The method of selecting passenger cars V capable of imaging the monitoring area is not particularly limited. For example, first, the road containing the monitoring point (Y), such as the current position of the passenger car that reported the abnormality, the passenger car that transmitted the sighting information, or a passenger car arbitrarily selected by the supervisor, is identified, and the passenger cars V traveling on that road are extracted by referring to the database 26. Then, the position (X), moving speed (V), and traveling direction of each identified passenger car V are determined. The moving speed and traveling direction of a passenger car V may be obtained from the change in its position information over time, or from a moving speed included in the monitoring information. Next, the central monitoring device 20 selects passenger cars V whose traveling direction approaches the monitoring point and for which (Y−X)/V is less than a predetermined value. Since a car with too small a value of (Y−X)/V will pass the monitoring point almost immediately, a lower limit may also be set. In the same step, the central monitoring device 20 then transmits an image transmission command to the selected monitoring terminal devices 10.
Since the monitoring point, and the monitoring area set from it, are updated at a predetermined cycle and change from moment to moment as the tracking target moves, passenger cars capable of imaging the monitoring area can be selected even when the tracking target moves randomly.
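The (Y−X)/V selection heuristic above can be sketched directly, with both the upper bound and the optional lower limit. The function names and the car-tuple layout are assumptions for illustration; positions are treated as one-dimensional distances along the identified road.

```python
def time_to_reach(monitor_pos, car_pos, speed):
    """(Y - X) / V: time for a car at position X moving at speed V toward the
    monitoring point Y to reach it (positions measured along the road)."""
    return (monitor_pos - car_pos) / speed

def select_cars(monitor_pos, cars, t_max, t_min=0.0):
    """Select cars approaching the monitoring point whose time-to-reach is below
    t_max; the lower limit t_min excludes cars that would pass immediately."""
    selected = []
    for car_id, pos, speed, approaching in cars:
        if not approaching or speed <= 0:
            continue
        t = time_to_reach(monitor_pos, pos, speed)
        if t_min <= t < t_max:
            selected.append(car_id)
    return selected
```

For instance, with the monitoring point at 1000 m, a car at 900 m doing 10 m/s has (Y−X)/V = 10 s and is selected for a 30 s bound, while a car 1000 m away (100 s) or one moving away is not.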
The image transmission command can include information specifying the imaging direction. The central monitoring device 20 calculates the imaging direction based on the positional relationship between the monitoring point and the monitoring area. The imaging direction may be expressed as an azimuth or, if the mounting positions of the in-vehicle cameras 11 are known, as the identification information of a particular in-vehicle camera 11. This ensures that video of the monitoring point is reliably captured by the cameras 11 of the passenger cars V in the monitoring area, and since only the necessary image information need be transmitted, the amount of transmitted data can be reduced.
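When the imaging direction is expressed as an azimuth, it can be derived from the two positions involved. The sketch below is a minimal example assuming planar east/north coordinates; the function name is hypothetical.

```python
import math

def imaging_azimuth(car_pos, monitor_pos):
    """Azimuth (degrees clockwise from north) from a vehicle to the monitoring
    point, usable as the imaging-direction field of an image transmission command."""
    dx = monitor_pos[0] - car_pos[0]  # east component
    dy = monitor_pos[1] - car_pos[1]  # north component
    return math.degrees(math.atan2(dx, dy)) % 360
```

A terminal receiving the command could then either orient a camera to this azimuth or, if camera positions are known, map the azimuth to the identifier of the camera that covers it.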
When the monitoring terminal device 10 is equipped with a navigation device, it can also automatically send monitoring information including image information to the central monitoring device 20 at the moment the host vehicle enters the monitoring area, determined from the monitoring point transmitted by the central monitoring device 20 and the vehicle's current position.
In step ST17, the position information of the passenger car V that output the abnormality information is transmitted to emergency vehicles such as police cars, ambulances, and fire engines. In this case, image information may be attached to the transmission to convey the nature of the abnormality. This makes it possible to dispatch emergency vehicles before a report arrives from the scene, enabling a prompt response to accidents and crimes.
In step ST18, all the position information, image information, and time information received from the monitoring terminal devices 10 are recorded on a recording medium. These records are used to resolve accidents and crimes after they have occurred. If there is no abnormality information in step ST13, the process proceeds to step ST21 without performing steps ST14 to ST18.
In step ST19, it is determined whether the tracking-and-monitoring state of the tracking target has been released. If it has, the processing from step ST21 onward is performed. If it has not, monitoring of the monitoring point continues; when the passenger car V selected in the earlier step ST16 can no longer image the monitoring area, for example because it has passed through it, the process returns to step ST16 and a new passenger car V capable of imaging the monitoring area is selected.
The method of selecting a new passenger car V to image the monitoring area is not particularly limited; the same method as in step ST16 can be used.
In this way, as long as the tracking-and-monitoring state of the tracking target continues, monitoring passenger cars V capable of imaging the monitoring area are selected one after another. Even if the tracking target moves randomly, and even if the passenger cars V carrying the cameras 11 themselves move randomly, a specific tracking target can therefore be continuously tracked and imaged, and monitoring information about it can be acquired.
After tracking and monitoring has been released, the processing from step ST21 onward shown in FIG. 12B can be executed. In step ST21, it is determined whether an image transmission request has been received from an emergency vehicle such as a police car, ambulance, or fire engine; if so, the process proceeds to step ST22. In step ST22, it is determined whether any passenger car V is present in the area specified by the request; if so, the process proceeds to step ST23, where an image transmission command is output to the passenger cars V present in that area. The image information from those cars can then be acquired in step ST11 of FIG. 12A in the next routine, forwarded to the emergency vehicle, and used to understand the purpose of the emergency vehicle's request. If the conditions of steps ST21 and ST22 are not met, the process proceeds to step ST24 without performing the processing of steps ST21 to ST23.
In step ST24, it is determined whether any passenger car V is present in the vicinity of a preset suspicious location, such as a high-crime area; if so, the process proceeds to step ST25 and an image transmission command is output to that passenger car V. A suspicious location is a street, district, or other place with poor public safety. This strengthens surveillance of such streets and districts and can be expected to help prevent crime. If no passenger car V is present near a suspicious location, the process proceeds to step ST26 without performing the processing of step ST25.
In step ST26, it is determined whether any passenger car V is present near a priority monitoring position from which a priority monitoring target, whose details should be watched, can be imaged. If so, the process proceeds to step ST27, and a priority monitoring command requesting transmission of magnified image information of the priority monitoring target is output to that passenger car V. This allows the priority monitoring target to be monitored in detail, makes it possible to effectively detect suspicious objects that could cause incidents or accidents at the specified priority monitoring target, and can be expected to help prevent crime. If no passenger car V is present near the priority monitoring position, the process proceeds to step ST28 without performing the processing of step ST27.
In step ST28, based on the position information received from each passenger car V, it is determined whether, within a predetermined region where monitoring is required (not limited to suspicious locations and priority monitoring areas), there is any route on which no passenger car V has traveled within a certain period. If such a route exists, the system watches for a passenger car V traveling on it; when one appears, the process proceeds to step ST29 and an image transmission command is output to that passenger car V. In this way, image information can be acquired automatically for routes outside suspicious locations and priority monitoring areas where passenger car traffic is light. If no route satisfies the condition of step ST28, the process returns to step ST11 of FIG. 12A without performing the processing of step ST29.
As described above, the monitoring system of the present embodiment provides the following effects.
(1) The monitoring system 1 of this example selects the monitoring terminal device 10 of a mobile body capable of imaging a monitoring area defined around the monitoring point where the tracking target is predicted to be present, and outputs to the selected monitoring terminal device 10 an image transmission command for transmitting monitoring information including image information. Since an image of the tracking target can thus be acquired from a monitoring terminal device 10 mounted on a mobile body capable of imaging that monitoring area, a moving tracking target can be monitored continuously.
By selecting the monitoring terminal devices 10 present within the monitoring area defined around the monitoring point as the "monitoring terminal devices 10 capable of imaging the monitoring area," captured images of the tracking target can be collected efficiently.
Because the imaging area of a typical fixed security camera is itself fixed, a movable tracking target such as a person or vehicle that moves out of that imaging area can no longer be monitored. On the other hand, when a camera mounted on a mobile body V is used, both the camera and the tracking target move randomly, so a specific tracking target ordinarily cannot be monitored continuously. By contrast, according to the monitoring system 1 of the present embodiment, even though the monitoring terminal devices 10 use cameras 11 mounted on randomly moving mobile bodies V, a randomly moving tracking target can be monitored continuously.
(2) In the monitoring system 1 of this example, a passenger car V carrying a person who has sighted the tracking target is located near the monitoring point, so its monitoring terminal device 10 is highly likely to be able to image the monitoring area. From this standpoint, when monitoring information including sighting information reporting that the tracking target has been sighted is acquired from a monitoring terminal device 10, the monitoring point where the tracking target is predicted to be present is set based on the position information of that monitoring terminal device 10. The monitoring point, and the monitoring area based on it, can therefore be set appropriately and automatically, without lagging behind the transmission of the sighting information.
(3) In the monitoring system 1 of this example, the movement direction of the tracking target is calculated from image information acquired over time, and the monitoring point is set on that movement direction side. The monitoring point where the tracking target is predicted to be present at the next monitoring timing, and the monitoring area based on it, can therefore be set appropriately.
(4) In the monitoring system 1 of this example, the moving speed of the tracking target is calculated from image information acquired over time, and the monitoring point is set according to that speed. The monitoring point and monitoring area can therefore be set appropriately whether the tracking target flees on foot or by vehicle.
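Effects (3) and (4) describe setting the next monitoring point from a movement direction and speed estimated from images acquired over time. A minimal linear-extrapolation sketch of that idea follows; the function name, the two-observation input, and the lookahead parameter are all assumptions for illustration.

```python
def predict_monitoring_point(p_prev, p_curr, dt, lookahead):
    """Linearly extrapolate the tracking target's next position: estimate
    velocity from two timed observations, then project it ahead by the
    lookahead interval to place the next monitoring point."""
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    return (p_curr[0] + vx * lookahead, p_curr[1] + vy * lookahead)
```

Because the estimated speed scales the projection, a target fleeing by vehicle yields a monitoring point much farther ahead than one fleeing on foot, which is the behavior effect (4) calls for.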
(5) In the monitoring system 1 of this example, an image transmission command conveying the features of the tracking target is sent to the monitoring terminal devices 10, so the occupants of the mobile bodies V and the monitoring terminal devices 10 can recognize the specific tracking target, improving monitoring accuracy.
(6) In the monitoring system 1 of this example, when a monitoring terminal device 10 finds image information of a subject having the features of the tracking target, it outputs monitoring information that includes report information announcing the tracking target's presence. The position of a randomly moving tracking target can therefore be identified quickly and its image information collected effectively. As a result, by searching for subjects that share the tracking target's features, the target's trail can be traced accurately without losing sight of it, or even after it has been lost.
(7) In the monitoring system 1 of this example, when the tracking target is a person, an image transmission command conveying the person's facial features is sent, so the occupants of the mobile bodies V and the monitoring terminal devices 10 can recognize the specific appearance of the person being tracked, improving monitoring accuracy.
(8) In the monitoring system 1 of this example, when image information of a subject resembling the tracking target's face (with a similarity equal to or greater than a predetermined value) is found among previously captured image information, monitoring information that includes report information announcing the tracking target's presence is output. The position of a randomly moving tracking target can therefore be identified quickly and its image information collected effectively. As a result, by searching for subjects that share the tracking target's features, the target's trail can be traced accurately without losing sight of it, or even after it has been lost.
(9) In the monitoring system 1 of this example, when image information whose similarity to the image information of the tracking target's face is equal to or greater than a predetermined value is found, the image information of the tracking target's face is output to the monitoring terminal devices 10 mounted on other mobile bodies V via wireless communication (vehicle-to-vehicle communication). Information about the appearance of the person being tracked can thus be shared between a monitoring terminal device 10 capable of imaging the monitoring area and the monitoring terminal devices 10 located nearby, strengthening the search for the tracking target.
 (10) In the monitoring system 1 of this example, the monitoring terminal devices 10 include, in addition to those mounted on mobile bodies V, fixed monitoring terminal devices 10 that are installed at predetermined positions and have an image generation function for imaging their surroundings and generating image information, so existing fixed cameras can be used effectively.
 (11) The monitoring system 1 of this example selects the monitoring terminal device 10 of a mobile body capable of imaging the monitoring point where the tracking target is predicted to be present, and outputs to the selected monitoring terminal device 10 an image transmission command to transmit monitoring information including image information. Since an image of the tracking target can thereby be acquired from a monitoring terminal device 10 mounted on a mobile body able to image the predicted monitoring point, a moving tracking target can be monitored continuously.
 (12) The monitoring system 1 of this example selects the monitoring terminal device 10 of a mobile body capable of imaging the monitoring area where the tracking target is predicted to be present, and outputs to the selected monitoring terminal device 10 an image transmission command to transmit monitoring information including image information. Since an image of the tracking target can thereby be acquired from a monitoring terminal device 10 mounted on a mobile body able to image the predicted monitoring area, a moving tracking target can be monitored continuously.
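The selection step described in (11) and (12) can be sketched minimally. The patent does not state how "capable of imaging the monitoring area" is decided; modeling the monitoring area as a circle around the monitoring point, the fixed camera range, and all identifiers below are illustrative assumptions.

```python
import math

def select_terminals(monitoring_point, terminals, area_radius_m=50.0):
    """Return IDs of terminals whose last reported position lies inside the
    monitoring area, modeled here as a circle around the monitoring point."""
    px, py = monitoring_point
    return [t["id"] for t in terminals
            if math.hypot(t["x"] - px, t["y"] - py) <= area_radius_m]

terminals = [
    {"id": "taxi-V1", "x": 10.0, "y": 20.0},    # ~22 m from the point
    {"id": "bus-B1", "x": 400.0, "y": -300.0},  # far outside the area
]
selected = select_terminals((0.0, 0.0), terminals)
```

A production system would also account for camera heading and occlusion; the flat-plane distance test here is only the simplest possible criterion.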
 (13) The monitoring method of this example provides the same operations and effects as the monitoring system comprising the monitoring terminal device 10 and the central monitoring device 20.
 In the embodiment described above, the position information of the passenger car V and the image information from the on-board cameras 11a to 11e are acquired, but they may be acquired in combination with image information from the fixed camera 11f installed in the town, shown in FIG. 1. As the passenger car V that acquires the position information and image information, it is preferable to use a taxi V1 or a bus that travels within a predetermined area as shown in FIG. 1, but a private passenger car V2 or an emergency vehicle V3 may also be used.
 In the embodiment described above, five on-board cameras are mounted on the passenger car V, and four of them (11a to 11d) are used to acquire 360° surrounding video as image information; the in-cabin camera 11e may be omitted. Further, particularly in environments where image information can be acquired from many passenger cars V, such as areas with heavy traffic, the four on-board cameras 11a to 11d may be reduced to three or fewer.
 The passenger car V corresponds to the mobile body according to the present invention; the position detection device 15 corresponds to the position detection means according to the present invention; the on-board camera 11 and the image processing device 12 correspond to the image generation means according to the present invention; the on-board control device 14 corresponds to the image search means, storage means, and output means according to the present invention; the CPU of the on-board control device 14 corresponds to the time detection means according to the present invention; the report button 16 corresponds to the command input means according to the present invention; and the communication device 13 corresponds to the command reception means and information output means according to the present invention. The central control device 21 corresponds to the monitoring point setting means, selection means, and feature extraction means; the database 26 corresponds to the database; the communication device 23 and the input device 25 correspond to the information acquisition means, abnormality information reception means, and command output means according to the present invention; and the display 24 corresponds to the display means according to the present invention.
DESCRIPTION OF SYMBOLS
1 … Vehicle monitoring system
 10 … Monitoring terminal device
  11, 11a to 11e … On-board camera
  11f … Fixed camera in town
  12 … Image processing device
  13 … Communication device
  14 … On-board control device
  15 … Position detection device
  16 … Report button
  17 … Vehicle controller
  18 … Vehicle speed sensor
  19 … Navigation device
 20 … Central monitoring device
  21 … Central control device
  22 … Image processing device
  23 … Communication device
  24 … Display
  25 … Input device
 30 … Telecommunication network
V, V1, V2, V3 … Mobile body
M … Projection model
S, Sa, Sb, Sc, Sd … Projection surface
R1 to R8 … Viewpoints

Claims (13)

  1.  A monitoring system comprising a central monitoring device that acquires monitoring information via wireless communication from monitoring terminal devices, each monitoring terminal device comprising position detection means for detecting position information of a respective one of a plurality of mobile bodies and image generation means mounted on the respective mobile body for imaging the surroundings of the mobile body to generate image information,
     wherein the central monitoring device comprises:
     information acquisition means for acquiring monitoring information including at least the position information detected by the position detection means of the monitoring terminal device and time information detected by time detection means;
     monitoring point setting means for setting, based on the acquired monitoring information, a monitoring point where a tracking target specified by an observer is predicted to be present;
     selection means for selecting a monitoring terminal device of a mobile body capable of imaging a predetermined monitoring area based on the set monitoring point; and
     command output means for outputting, to the monitoring terminal device of the selected mobile body, an image transmission command causing it to transmit monitoring information including the image information.
  2.  The monitoring system according to claim 1, wherein the information acquisition means of the central monitoring device acquires the monitoring information including sighting information, input via command input means of the monitoring terminal device and output via information output means, reporting that the tracking target has been sighted, and
     the monitoring point setting means sets the monitoring point where the presence of the tracking target is predicted, based on the position information of the monitoring terminal device from which the sighting information was output.
  3.  The monitoring system according to claim 1 or 2, wherein the monitoring information acquired by the information acquisition means includes the image information generated by the image generation means, and
     the monitoring point setting means calculates a moving direction of the tracking target specified by the observer based on image information acquired over time from the monitoring terminal device, and sets the monitoring point on the side of the calculated moving direction.
  4.  The monitoring system according to any one of claims 1 to 3, wherein the monitoring information acquired by the information acquisition means includes the image information generated by the image generation means, and
     the monitoring point setting means calculates a moving speed of the tracking target specified by the observer based on image information acquired over time from the monitoring terminal device, and sets the monitoring point based on the calculated moving speed.
  5.  The monitoring system according to any one of claims 1 to 4, wherein the central monitoring device further comprises feature extraction means for extracting features of the tracking target from the image information acquired from the monitoring terminal device, and
     the command output means of the central monitoring device outputs, to the monitoring terminal device of the mobile body selected by the selection means, an image transmission command causing it to transmit monitoring information on a subject having the features of the tracking target extracted by the feature extraction means.
  6.  The monitoring system according to claim 5, wherein the image generation means of the monitoring terminal device at least temporarily stores monitoring information including the generated image information in storage means, and
     the monitoring terminal device further comprises:
     image search means for searching the monitoring information stored in the storage means for image information containing a subject having the features of the tracking target acquired from the central monitoring device; and
     output means for, when image information of a subject having the features of the tracking target is found by the image search means, including notification information announcing the presence of the tracking target in the monitoring information and outputting it.
  7.  The monitoring system according to any one of claims 1 to 4, wherein the central monitoring device further comprises:
     feature extraction means for extracting features of the tracking target from the image information acquired from the monitoring terminal device; and
     feature evaluation means for determining whether the tracking target is a human based on the features extracted by the feature extraction means,
     wherein, when the feature evaluation means determines that the tracking target is a human, the command output means of the central monitoring device outputs, to the monitoring terminal device of the mobile body selected by the selection means, an image transmission command causing it to transmit monitoring information including face image information or facial features of the tracking target obtained from the image information.
  8.  The monitoring system according to claim 7, wherein the image generation means of the monitoring terminal device at least temporarily stores monitoring information including the generated image information in storage means, and
     the monitoring terminal device further comprises:
     image search means for searching the monitoring information stored in the storage means for image information whose similarity to the face image information or facial features of the tracking target included in the image transmission command acquired from the central monitoring device is at or above a predetermined value; and
     output means for, when image information whose similarity to the face image information or facial features of the tracking target is at or above the predetermined value is found by the image search means, including notification information announcing the presence of the tracking target in the monitoring information and outputting it.
  9.  The monitoring system according to claim 8, wherein, when image information whose similarity to the face image information or facial features of the tracking target is at or above the predetermined value is found by the image search means, the output means of the monitoring terminal device outputs the retrieved image information via wireless communication to monitoring terminal devices mounted on other mobile bodies.
  10.  The monitoring system according to any one of claims 1 to 9, wherein the monitoring terminal devices include a fixed monitoring terminal device that is mounted at a predetermined position and comprises image generation means for imaging its surroundings to generate image information.
  11.  A monitoring system comprising a central monitoring device that acquires monitoring information via wireless communication from monitoring terminal devices, each monitoring terminal device comprising position detection means for detecting position information of a respective one of a plurality of mobile bodies and image generation means mounted on the respective mobile body for imaging the surroundings of the mobile body to generate image information,
     wherein the central monitoring device comprises:
     information acquisition means for acquiring monitoring information including at least the position information detected by the position detection means of the monitoring terminal device and time information detected by time detection means;
     monitoring point setting means for setting, based on the acquired monitoring information, a monitoring point where a tracking target is predicted to be present;
     selection means for selecting a monitoring terminal device of a mobile body capable of imaging the monitoring point; and
     command output means for outputting, to the monitoring terminal device of the selected mobile body, an image transmission command causing it to transmit monitoring information including the image information.
  12.  A monitoring system comprising a central monitoring device that acquires monitoring information via wireless communication from monitoring terminal devices, each monitoring terminal device comprising position detection means for detecting position information of a respective one of a plurality of mobile bodies and image generation means mounted on the respective mobile body for imaging the surroundings of the mobile body to generate image information,
     wherein the central monitoring device comprises:
     information acquisition means for acquiring monitoring information including at least the position information detected by the position detection means of the monitoring terminal device and time information detected by time detection means;
     monitoring point setting means for setting, based on the acquired monitoring information, a monitoring area where a tracking target is predicted to be present;
     selection means for selecting a monitoring terminal device of a mobile body capable of imaging the monitoring area; and
     command output means for outputting, to the monitoring terminal device of the selected mobile body, an image transmission command causing it to transmit monitoring information including the image information.
  13.  A monitoring method comprising:
     acquiring, from monitoring terminal devices mounted on a plurality of mobile bodies, monitoring information including at least position information and time information of each mobile body;
     setting, based on the acquired monitoring information, a monitoring point where a tracking target specified by an observer is predicted to be present;
     selecting, at the monitoring time, a monitoring terminal device of a mobile body capable of imaging a predetermined monitoring area based on the set monitoring point; and
     outputting, to the selected mobile body, an image transmission command to transmit monitoring information including image information imaging the surroundings of the mobile body.
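Claims 3, 4, and 13 together describe setting the monitoring point from the tracking target's movement direction and speed, computed from observations acquired over time. A minimal sketch under an assumed constant-velocity model (the patent does not specify the prediction algorithm, and all names and values below are illustrative):

```python
def estimate_velocity(sightings):
    """Estimate (vx, vy) in m/s from time-stamped sightings.
    sightings: list of (t_seconds, x_m, y_m), oldest first."""
    (t0, x0, y0), (t1, x1, y1) = sightings[0], sightings[-1]
    dt = t1 - t0
    return (x1 - x0) / dt, (y1 - y0) / dt

def predict_monitoring_point(sightings, lead_time_s):
    """Dead-reckon the target's position lead_time_s seconds after the
    last sighting, assuming constant velocity (claims 3 and 4 only require
    that direction and speed be derived from the time-series images)."""
    vx, vy = estimate_velocity(sightings)
    _, x_last, y_last = sightings[-1]
    return (x_last + vx * lead_time_s, y_last + vy * lead_time_s)

# Target seen moving 20 m east over 10 s -> 2 m/s; predict 5 s ahead.
sightings = [(0.0, 0.0, 0.0), (10.0, 20.0, 0.0)]
point = predict_monitoring_point(sightings, 5.0)   # (30.0, 0.0)
```

The predicted point would then drive terminal selection and the image transmission command in the remaining method steps; richer motion models (e.g. map-constrained prediction) could replace the dead-reckoning step without changing the pipeline.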
PCT/JP2013/051753 2012-02-24 2013-01-28 Surveillance system WO2013125301A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012039328 2012-02-24
JP2012-039328 2012-02-24

Publications (1)

Publication Number Publication Date
WO2013125301A1 true WO2013125301A1 (en) 2013-08-29

Family

ID=49005494

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/051753 WO2013125301A1 (en) 2012-02-24 2013-01-28 Surveillance system

Country Status (1)

Country Link
WO (1) WO2013125301A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001189927A (en) * 1999-12-28 2001-07-10 Tokyo Gas Co Ltd Mobile station, control station and virtual experience system
JP2006209681A (en) * 2005-01-31 2006-08-10 Sumitomo Electric Ind Ltd Vehicle search system, vehicle search apparatus, vehicle search center apparatus and vehicle search method
JP2006350520A (en) * 2005-06-14 2006-12-28 Auto Network Gijutsu Kenkyusho:Kk Peripheral information collection system
JP2008217218A (en) * 2007-03-01 2008-09-18 Denso Corp Accident information acquisition system
JP2009169540A (en) * 2008-01-11 2009-07-30 Toyota Infotechnology Center Co Ltd Monitoring system and security management system
JP2010257249A (en) * 2009-04-24 2010-11-11 Autonetworks Technologies Ltd On-vehicle security device


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11049374B2 (en) 2016-12-22 2021-06-29 Nec Corporation Tracking support apparatus, terminal, tracking support system, tracking support method and program
US11727775B2 (en) 2016-12-22 2023-08-15 Nec Corporation Tracking support apparatus, terminal, tracking support system, tracking support method and program
JP2019091161A (en) * 2017-11-13 2019-06-13 トヨタ自動車株式会社 Rescue system and rescue method, and server and program used for the same

Similar Documents

Publication Publication Date Title
WO2020113660A1 (en) Patrol robot and patrol robot management system
JP5786963B2 (en) Monitoring system
US10572737B2 (en) Methods and system for detecting a threat or other suspicious activity in the vicinity of a person
US10572738B2 (en) Method and system for detecting a threat or other suspicious activity in the vicinity of a person or vehicle
US20190356885A1 (en) Camera System Securable Within a Motor Vehicle
JP6451840B2 (en) Information presentation system
US10572740B2 (en) Method and system for detecting a threat or other suspicious activity in the vicinity of a motor vehicle
JP5890294B2 (en) Video processing system
US10572739B2 (en) Method and system for detecting a threat or other suspicious activity in the vicinity of a stopped emergency vehicle
JP4643860B2 (en) VISUAL SUPPORT DEVICE AND SUPPORT METHOD FOR VEHICLE
US20130021453A1 (en) Autostereoscopic rear-view display system for vehicles
WO2013111494A1 (en) Monitoring system
JP5811190B2 (en) Monitoring system
WO2013125301A1 (en) Surveillance system
WO2013111479A1 (en) Monitoring system
JP5790788B2 (en) Monitoring system
WO2013111491A1 (en) Monitoring system
WO2013111493A1 (en) Monitoring system
WO2013111492A1 (en) Monitoring system
WO2013161345A1 (en) Monitoring system and monitoring method
JP5796638B2 (en) Monitoring system
JP7140043B2 (en) Information processing equipment
JP2011258068A (en) Traffic information provision system
CN114425991A (en) Image processing method, medium, device and image processing system
JP6749470B2 (en) Local safety system and server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13751119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13751119

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP