WO2015001677A1 - Safety assistance system and safety assistance device - Google Patents

Safety assistance system and safety assistance device Download PDF

Info

Publication number
WO2015001677A1
WO2015001677A1 · PCT/JP2013/068553
Authority
WO
WIPO (PCT)
Prior art keywords
wireless communication
information
monitoring
safety support
image
Prior art date
Application number
PCT/JP2013/068553
Other languages
French (fr)
Japanese (ja)
Inventor
嘉郁 川村
隆之 黒澤
大和田 徹
Original Assignee
Renesas Electronics Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renesas Electronics Corporation
Priority to PCT/JP2013/068553
Publication of WO2015001677A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Definitions

  • The present invention relates to a safety support system and a safety support device, for example a safety support system and a safety support device used at an intersection where a curve mirror (convex traffic mirror) is installed.
  • Patent Document 1 discloses a driver support device mounted on an automobile.
  • The driver assistance device instructs its in-vehicle communication terminal to acquire information from the roadside device.
  • The roadside device installed at the T-junction uses radar to detect the distance to another vehicle present in the host vehicle's blind-spot direction, determines the size of that vehicle from a camera image, and transmits this information to the host vehicle.
  • The host vehicle then displays the relative positional relationship between itself and the other vehicle on a head-up display.
  • Patent Document 2 discloses a proximity moving body display system for an automobile.
  • an image processing apparatus connected to a camera installed on a road transmits data such as an imaging time, a position of an approaching mobile body, a type of an approaching mobile body, and an intersection shape to an in-vehicle device.
  • the in-vehicle device receives the data, estimates the position of the moving object at the moving object display time, and performs display on the windshield or display on the monitor screen.
  • Patent Document 3 discloses a vehicle vision support system that allows a driver of a vehicle approaching an intersection to recognize the current state of a blind spot area.
  • the image processing server that processes the image data of the intersection camera group generates an image obtained by synthesizing the image of the blind spot area with the image of the in-vehicle camera based on various information from the intersection camera group and the in-vehicle camera.
  • the image data is transmitted to the in-vehicle device.
  • Patent Document 4 discloses a vehicle road condition confirmation device using a camera installed on a curve mirror.
  • the camera side transmits captured video data to the vehicle, and the vehicle side analyzes the captured video data.
  • Patent Document 5 discloses an intersection situation recognition system that grasps the situation of a road that becomes a blind spot using intersection videos from a plurality of cameras installed at an intersection.
  • An identification number is assigned in advance to each imaging device installed at an intersection; when entering the intersection, the in-vehicle device searches for the identification number of the imaging device corresponding to the required intersection image and uses that number to acquire the intersection video.
  • Patent Document 6 discloses a traffic monitoring system that displays an image of a blind spot on an image display device installed at an intersection. Specifically, the type, size, position, and speed of objects moving on the road are analyzed from the image of a camera installed on a building; a previously recorded image of a similar object matching the analysis result is then extracted, composited with a predetermined background image, and shown on the image display device.
  • Vehicle accidents are classified into rear-end collisions, crossing (encounter) collisions, head-on collisions, right-turn collisions, other vehicle-to-vehicle collisions, and others.
  • The above-mentioned classifications account for 6.7%, 15%, 10%, 5.5%, and 6.1% of fatal accidents, respectively; 6.3%, 28%, 5.4%, 11.2%, and 12.8% of serious-injury accidents; and 35.4%, 26%, 2.1%, 8.2%, and 16.5% of minor-injury accidents.
  • The above-mentioned classifications account for 26% and 10.2% of fatal accidents and 13.7% and 6.9% of serious accidents, respectively.
  • The major locations where such accidents occur are intersections without traffic signals (about 70%, compared with 18% at signalized intersections and 10% on single roads), and about 75% of these are in urban areas.
  • That is, the typical accident location is an unsignalized intersection, for example in a residential area.
  • the main causes of accidents at encounters at intersections include human factors, road environment factors, and weather deterioration factors.
  • Human factors account for about 90% of the causes, and typically include recognition errors (overlooks) and judgment / prediction errors.
  • a recognition error corresponds to, for example, a situation such as “I didn't see a crossing vehicle”, “I saw a curve mirror but didn't notice”, or “I was distracted by others”.
  • A judgment / prediction error is a situation in which the driver recognized the other party but mispredicted its behavior; for example, "This is a priority road, so the other party should stop."
  • Road environment factors correspond to situations with poor visibility (sightlines obstructed by buildings, vegetation, parked vehicles, and the like) or inadequate facilities (a damaged curve mirror, or insufficient signs and traffic safety equipment).
  • Weather deterioration factors correspond, for example, to situations in which visibility has worsened due to rain, fog, or snow.
  • FIGS. 24 to 26 are explanatory diagrams showing an example of the usage situation of a general curve mirror.
  • FIG. 24 shows an example of the scene as viewed from the vehicle driver's viewpoint,
  • FIG. 25 shows an example of the blind-spot area as seen from the vehicle driver, and
  • FIG. 26 shows an example of the image reflected in the curve mirror.
  • Curve mirror information is viewed in a mirror from a distance (that is, it appears small in the mirror) and is left-right reversed when presented to a vehicle driver or pedestrian.
  • Because of this, how the information looks, and the judgments based on it, can differ from one driver or pedestrian looking at the curve mirror to another.
  • the visibility of the curve mirror varies depending on the visual acuity of the vehicle driver or pedestrian.
  • Because the mirror image is left-right reversed, a driver may misjudge the direction of an approaching car, or may pay attention only in the direction shown by the curve mirror and fail to attend to the opposite direction.
  • When pedestrians and bicycles pass each other, the left-right reversal may cause them to misjudge the gap between them.
  • Road signs can likewise become difficult to see due to changes over time (rust, dirt, etc.) and environmental changes (occlusion by trees and buildings, or changes in display angle caused by wind and rain). They are also hard to see at night and may be overlooked depending on the installation location. Furthermore, assumptions (illusions) about them can cause judgment / prediction errors.
  • In order to reduce such crossing accidents, it is conceivable to use techniques such as those disclosed in Patent Documents 1 to 6.
  • However, with a technique that transmits camera image data to the vehicle, the amount of image data to be transmitted is large, so it may be difficult to convey accurate information to the user in real time.
  • In the technique of Patent Document 6, an image display device must be provided at the intersection and a high-performance image processing server may be required, which can make the system expensive.
  • In Patent Document 1 and Patent Document 2, there is no particular consideration of what specific information about the moving body is transmitted to the vehicle; depending on the format, the amount of information grows and it may be difficult to convey accurate information to the user in real time. Furthermore, these techniques presuppose a limited, specific scene: for example, a precondition, established in advance, that a camera images the right side of an intersection and transmits information based on that image to a vehicle entering the intersection from below. In reality, however, cameras may image various areas and vehicles may enter from various directions, so a technique for establishing such preconditions is also required.
  • A safety support system includes a monitoring device that is attached to a curve mirror and includes a monitoring camera, an image processing unit, and a first wireless communication unit, and a user device that is carried by a user and includes a second wireless communication unit and an information processing unit.
  • The image processing unit executes first to third processes.
  • In the first process, N (N is an integer of 1 or more) moving bodies existing within a predetermined image recognition range are detected from the image captured by the monitoring camera.
  • In the second process, the type of each of the N moving bodies is determined, and a first identifier is generated for each, with a value corresponding to the determination result chosen from a plurality of predefined values.
  • In the third process, the distance of each of the N moving bodies from a predetermined reference position is determined based on its coordinates in the captured image, and a second identifier is generated for each, with a value corresponding to the determination result chosen from a plurality of predefined values.
  • The first wireless communication unit transmits a data signal including the N pairs of first and second identifiers generated for the N moving bodies.
  • The second wireless communication unit receives the data signal transmitted from the first wireless communication unit, and the information processing unit determines the presence state of moving bodies from the N pairs of first and second identifiers in the data signal and performs predetermined processing to notify the user of that state.
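The pairs of first and second identifiers above can be made extremely compact. The following sketch illustrates the idea; the concrete ID tables and the one-byte nibble packing are assumptions, since the patent only states that a plurality of values are defined in advance for each identifier.

```python
# Hypothetical ID tables; the actual values and bit layout are not
# specified here and are assumptions for illustration.
TYPE_IDS = {"vehicle": 1, "bicycle": 2, "pedestrian": 3}   # first identifier
DIST_IDS = {"0-10m": 1, "10-20m": 2, "20-30m": 3}          # second identifier

def encode_payload(moving_bodies):
    """Pack N (type, distance-band) pairs into a compact byte string.

    moving_bodies: list of (type_name, distance_band) tuples.
    Layout: one leading count byte, then one byte per moving body with
    the first identifier in the high nibble and the second in the low.
    """
    out = bytearray([len(moving_bodies)])
    for type_name, band in moving_bodies:
        out.append((TYPE_IDS[type_name] << 4) | DIST_IDS[band])
    return bytes(out)

def decode_payload(data):
    """Recover the N (first, second) identifier pairs on the user-device side."""
    n = data[0]
    return [((b >> 4) & 0x0F, b & 0x0F) for b in data[1 : 1 + n]]
```

Under this layout, two moving bodies cost three bytes in total, which illustrates why such a signal can be broadcast in real time where raw video could not.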
  • FIG. 3B is a block diagram showing a schematic configuration example of a user device different from FIG. 3A in the safety support system according to Embodiment 1 of the present invention.
  • FIG. 4A is a plan view showing an example of an ideal installation of curve mirrors, and FIG. 4B is a plan view showing an example of installing the monitoring device (cyber curve mirror) of FIG. 2 on the curve mirrors of FIG. 4A.
  • FIG. 6A is a plan view showing an example of installing the monitoring device (cyber curve mirror) of FIG. 2 on the curve mirrors of FIG. 5, and FIG. 6B is a perspective view corresponding to FIG. 6A. FIG. 7A is a plan view showing an installation example different from FIG. 6A, and FIG. 7B is a perspective view corresponding to FIG. 7A.
  • FIG. 8 is a diagram showing an example of the data format of the radio signal broadcast by the monitoring device of FIG. 2, and further diagrams show examples of its detailed contents.
  • FIG. 9B is a diagram showing an example of display contents based on the image recognition ID of FIG. 9A in the user device of FIG. 3A or 3B, and another diagram shows an example of display contents based on the extended information of FIG. 9D.
  • A schematic diagram illustrates an example of the hierarchical structure of an intersection-unit data format when a plurality of monitoring devices installed at intersections transmit data based on the data format of FIG. 8, and a flowchart shows an example of the detailed processing of the monitoring device of FIG. 2.
  • For the safety support system according to Embodiment 2 of the present invention, a plan view shows an installation example of the monitoring device at a three-way (T-shaped) junction together with an operation example, and another plan view shows an application example of FIG. 18C. Explanatory diagrams show examples of general characteristics of wireless LAN, and a sequence diagram shows an example of the communication procedure between an access point (AP) and a wireless LAN terminal. For the safety support system according to Embodiment 3 of the present invention, a plan view shows an example of application to a road environment in which intersections are arranged.
  • The constituent elements are not necessarily indispensable unless otherwise specified or apparently essential in principle.
  • Likewise, when referring to the shapes, positional relationships, and the like of the components, substantially similar or analogous shapes and the like are included, unless otherwise specified or apparently excluded in principle. The same applies to the numerical values and ranges above.
  • FIG. 1 is an explanatory diagram showing an outline of a safety support system according to an embodiment of the present invention.
  • The safety support system according to the present embodiment provides functions equivalent to those of curve mirrors installed at intersections and T-junctions with poor visibility, by combining surveillance cameras, wireless communication, wireless terminal devices, and AR (Augmented Reality) technology.
  • AR technology is a form of virtual reality that literally augments the real world as perceived by humans by adding, deleting, emphasizing, or attenuating information in the surrounding real environment.
  • The safety support system includes a monitoring device 10 including a monitoring camera, and a user device carried by a user (a vehicle driver, a pedestrian, or the like).
  • The monitoring device 10 determines, by image recognition processing on the image captured by the monitoring camera, the type of each moving body (pedestrian, vehicle, etc.) and its distance from the intersection, adds information on the imaging direction of the monitoring camera, and broadcasts the result using wireless communication.
  • Such a monitoring device 10 is referred to as a "cyber curve mirror" in this specification.
  • The user device receives the information from the monitoring device 10 and notifies the user of the presence of moving bodies (vehicles, pedestrians, etc.) entering from areas with poor visibility or from invisible blind spots, using the image display unit 11, the audio output unit 12, the vibration unit 13, and the like.
  • That is, the safety support system converts the information that would be reflected in the curve mirror (which depends on the approach direction) into the minimum necessary information (the presence or absence of pedestrians, vehicles, etc. entering the intersection, together with their position information) and broadcasts it to the user device side.
  • Since this information is broadcast to unspecified users, broadcasting the captured image as-is would not only reduce real-time performance because of the amount of information, but could also cause problems such as privacy infringement.
  • Therefore, the monitoring device 10 performs image recognition on the captured image, converts the recognition result into a specific icon / symbol ID (code) signal or the like, and transmits that instead.
  • The user device side determines from this specific ID (code) signal whether the moving body is a vehicle, a pedestrian, or the like, and warns the user accordingly: an icon, symbol, or pictogram on the display matching the recognition result, a voice or warning sound from the audio output unit 12, a predetermined vibration pattern of the vibration unit 13, and so on.
  • As a result, vehicle drivers, pedestrians, and others can obtain information on moving bodies (approaching vehicles, bicycles / motorcycles, pedestrians, etc.) present in places with poor visibility or in blind spots without directly viewing the curve mirror, and can obtain it not only visually but also by voice, vibration, or the like.
  • Consequently, recognition errors and judgment / prediction errors (assumptions, illusions, and the like) can be reduced, and crossing accidents at intersections can be prevented.
  • Because the monitoring device 10 performs the image processing and transmits only the result, the user device side can be realized simply by installing dedicated application software on a general-purpose information terminal capable of wireless communication (a navigation system, smartphone, mobile phone, drive recorder, etc.).
  • As a result, an ITS system can be constructed at low cost.
  • Furthermore, the safety support system can provide, as incidental information via the image display unit 11 and the audio output unit 12, road sign information both at the time of entry to an intersection and after entry (going straight, turning left, or turning right).
  • Specifically, the monitoring device 10 broadcasts supplementary information, set in advance, as part of the icon / symbol ID (code) signal described above, and the user device recognizes this supplementary information and notifies the user. This makes it possible to convey road sign information that might otherwise be missed due to inadequate signage, poor visibility, or simple oversight.
  • FIG. 2 is a block diagram illustrating a schematic configuration example of the monitoring device in the safety support system according to the first embodiment of the present invention.
  • the monitoring device 10 shown in FIG. 2 is attached to the curve mirror.
  • the monitoring device 10 includes a sensor unit 20, an image processing / signal generation unit 21, an orientation information generation unit 22, a wireless communication unit 23, an extended information generation unit 24, a processor unit (CPU) 25, and the like. These are connected by a bus 26.
  • the sensor unit 20 includes one or both of the camera sensor 27 and the infrared sensor 28, and may further include an ultrasonic radar 29 or the like.
  • the camera sensor 27 and the infrared sensor 28 are surveillance cameras, and perform imaging in a pre-designated imaging direction.
  • Using the infrared sensor 28 (that is, an infrared image) in combination makes it possible to improve image recognition (detection of moving bodies, determination of their type, etc.) at night and in similar conditions.
  • Using the ultrasonic radar 29 in combination makes it possible to improve the accuracy of distance measurement (that is, determination of the position of a moving body).
  • In the minimum configuration, the sensor unit 20 may include only the camera sensor 27.
  • the image processing / signal generation unit 21 includes an image recognition unit 30 and a distance measurement unit 31.
  • the image recognition unit 30 performs image recognition processing on the captured image from the monitoring camera of the sensor unit 20 using an existing image recognition processing algorithm, detects a moving body, and detects the type (vehicle, pedestrian, bicycle / motorcycles, etc.).
  • The distance measurement unit 31 measures, for each moving body recognized by the image recognition unit 30, the distance between a reference position (for example, the entrance of the intersection) and that moving body, based on its coordinates in the captured image. The measurement accuracy is not particularly limited; any granularity from meters to several tens of meters can be chosen.
  • the ultrasonic radar 29 or the like may be used as described above.
  • Note that the image recognition unit 30 need only detect moving bodies existing within a predetermined distance. Since this distance corresponds to coordinates in the captured image, it suffices for the image recognition unit 30 to detect moving bodies within a predetermined coordinate range on the captured image.
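The coordinate-range check just described can be sketched as below. The region-of-interest bounds and the detection representation are illustrative assumptions; in practice they would be fixed at installation time.

```python
# Illustrative region-of-interest bounds (pixel rows); assumed values,
# not taken from the patent.
IMAGE_RECOGNITION_RANGE = {"y_min": 120, "y_max": 480}

def within_recognition_range(detection, roi=IMAGE_RECOGNITION_RANGE):
    """detection: dict with the bottom-edge pixel row 'y' of a bounding box."""
    return roi["y_min"] <= detection["y"] <= roi["y_max"]

def filter_detections(detections):
    """Keep only moving bodies whose image coordinates fall inside the ROI."""
    return [d for d in detections if within_recognition_range(d)]
```

Because distance maps monotonically to the image row for a fixed camera, this single coordinate test stands in for an explicit distance threshold.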
  • the azimuth information generation unit 22 generates the shooting azimuth of the camera sensor (surveillance camera) 27 using a gyro compass or the like, for example. That is, the camera sensor 27 and the gyrocompass are installed integrally on the monitoring device 10, and the gyrocompass detects the shooting direction of the camera sensor 27.
  • Since the monitoring device 10 is installed in a fixed position, the imaging direction of the camera sensor 27 is also fixed. Therefore a gyrocompass is not strictly necessary: when the monitoring device 10 is installed, a fixed value representing the imaging direction may instead be stored in the direction information storage unit 32. In this case the operations associated with the gyrocompass are unnecessary, so the cost and power consumption of the monitoring device 10 can be reduced.
  • The wireless communication unit 23 generates a data signal in a predetermined data format (described later) from the information about each moving body (its type and distance) obtained by the image processing / signal generation unit 21 and the information from the azimuth information generation unit 22, and broadcasts the data signal as a radio signal.
  • Information from the extended information generation unit 24 may be added to the data signal as necessary.
  • the extended information generation unit 24 includes a traffic information storage unit 34.
  • the traffic information storage unit 34 stores information such as road signs that exist in the imaging direction of the camera sensor 27 in advance.
  • The wireless communication unit 23 includes a wireless interface [1] 33a based on IEEE 802.11a/b/g/n (so-called wireless LAN) and a wireless interface [2] 33b based on IEEE 802.11p (so-called WAVE, Wireless Access in Vehicular Environments).
  • The monitoring device 10 of FIG. 2 captures images at regular intervals with the surveillance camera of the sensor unit 20, passes each captured image through the processing of the image processing / signal generation unit 21, and broadcasts the resulting data signal at regular intervals using the wireless communication unit 23.
  • The entire sequence is controlled by the processor unit (CPU) 25.
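The fixed-interval capture, process, and broadcast sequence can be sketched as one cycle. The four callables are assumptions standing in for the sensor unit 20, the image recognition unit 30, the distance measurement unit 31, and the wireless communication unit 23.

```python
def monitoring_cycle(capture, recognize, measure_distance, broadcast):
    """One fixed-interval cycle of the monitoring device (sketch).

    capture, recognize, measure_distance, and broadcast are placeholders
    for the sensor, image recognition, distance measurement, and wireless
    units; their interfaces here are assumptions for illustration.
    """
    frame = capture()                                # surveillance camera image
    detections = recognize(frame)                    # N moving bodies with type IDs
    records = [(d["type_id"], measure_distance(d))   # (first, second) identifier pairs
               for d in detections]
    broadcast(records)                               # broadcast as the data signal
    return records
```

A scheduler (the processor unit in the patent) would simply invoke this function at the chosen interval.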
  • The image processing / signal generation unit 21 can be realized by software processing on a general-purpose processor or by hardware processing with a dedicated image processing circuit; from the viewpoint of securing real-time performance, hardware processing is preferable.
  • FIG. 3A shows a user device 40 for an automobile, for example.
  • The user device 40 of FIG. 3A includes a wireless communication unit 41, an information processing unit 47, and a user notification unit 45.
  • the information processing unit 47 may include at least one of the mobile phone / smartphone 42, the navigation system 43, and the drive recorder 44.
  • the user notification unit 45 corresponds to the image display unit 11, the audio output unit 12, the vibration unit 13, and the like described in FIG.
  • the wireless communication unit 41 includes a wireless interface [1] 46a based on a so-called wireless LAN and a wireless interface [2] 46b based on a so-called WAVE.
  • The information processing unit 47 includes an azimuth detection unit (that is, a compass function) 49 that detects its own traveling direction; if a GPS function is present, the azimuth detection unit 49 is provided as part of the GPS function.
  • the case where the information processing unit 47 is the navigation system 43 will be described as an example.
  • the navigation system 43 is connected to the wireless communication unit 41 by a dedicated or general-purpose interface (USB or the like).
  • After connecting to the wireless communication unit 41, the navigation system 43 transitions to the urban driving mode or "cyber curve mirror mode", either manually or automatically based on the map information and incidental information it holds (national roads, prefectural roads, etc.). In conjunction with the wireless communication unit 41, it then enters a search mode for establishing a communication link with the monitoring device 10.
  • When the automobile approaches an intersection, the user device 40 mounted on it detects the radio signal transmitted from the cyber curve mirror.
  • The user device 40 starts communication synchronization with the monitoring device 10 based on a predetermined wireless communication standard (WAVE, wireless LAN, etc.) and establishes a link.
  • The user device 40 then receives the information (data) broadcast from the monitoring device 10 arranged at the intersection in the approach direction, and the information processing unit 47 processes the received information and conveys it to the vehicle driver via the desired user interface (user notification unit 45).
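One plausible way for the information processing unit to combine its own heading (from the azimuth detection unit 49) with the broadcast imaging azimuth is a simple angular comparison. The matching rule and the 45-degree tolerance below are assumptions for illustration; the patent only states that the azimuth information is used to select the data relevant to the approach direction.

```python
def is_relevant(own_heading_deg, camera_azimuth_deg, tolerance_deg=45.0):
    """Decide whether a broadcast matters for the current approach direction.

    own_heading_deg comes from the azimuth detection unit and
    camera_azimuth_deg from the broadcast data; the comparison rule and
    tolerance are illustrative assumptions.
    """
    diff = (own_heading_deg - camera_azimuth_deg) % 360.0
    diff = min(diff, 360.0 - diff)   # smallest angular separation
    return diff <= tolerance_deg
```

With such a filter, broadcasts from monitoring devices facing irrelevant directions can be discarded before any notification is produced.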
  • FIG. 3B shows a user device 50 for a pedestrian or the like, for example.
  • The user device 50 of FIG. 3B includes a wireless communication unit 51, an information processing unit 48, and a user notification unit 45.
  • the information processing unit 48 is typically composed of a mobile phone / smartphone 52 including an azimuth detection unit (that is, a compass function) 49 that detects its own traveling direction. If the mobile phone / smartphone 52 has a GPS function, the direction detection unit 49 is provided as a part of the GPS function.
  • the user notification unit 45 corresponds to the image display unit 11, the audio output unit 12, the vibration unit 13, and the like described in FIG.
  • the wireless communication unit 51 includes a wireless interface [1] 46a based on a so-called wireless LAN.
  • the wireless communication unit 51 and the user notification unit 45 can be mounted as one function of the mobile phone / smartphone 52.
  • The information processing unit 48 is not necessarily limited to the mobile phone / smartphone 52; at minimum, it need only have a function for processing the information from the azimuth detection unit 49 and the wireless communication unit 51, and a function for controlling the user notification unit 45 according to the processing result.
  • In a typical use case, a wireless-LAN-capable smartphone or mobile phone is used, with the cyber curve mirror application software installed in advance.
  • For example, children and students commuting to or from school carry such a smartphone or mobile phone and set it to the cyber curve mirror mode.
  • When approaching an intersection, the user device 50 receives the information from the cyber curve mirror via the wireless LAN and notifies or warns the user of the desired information via the user notification unit 45.
  • The notification or warning is given via the display, speaker, vibration function, or the like of the mobile phone or smartphone.
  • In this way, children, students, and others can recognize information on moving bodies through the user device 50.
  • Even users listening to music through headphones can use the safety support system of this embodiment. For example, with ordinary headphones, a warning sound can be inserted into the music playback; with headphones, a head pad, a helmet, or the like equipped with a vibration device, danger can be signaled by vibration, making danger avoidance easy.
  • The user notification unit 45 in FIG. 3A is not limited to display on the screen normally used by a navigation system, smartphone, or the like; a display method linked to a drive recorder device, the car's instrument panel, a head-up display, or the like may also be used.
  • In FIGS. 3A and 3B, one or more vibration devices may be arranged in glasses, a headband, a bicycle or motorcycle helmet, or the like, and connected to the navigation system or smartphone by wire or wirelessly (a Bluetooth device, etc.) so that warnings can be given by vibration. For example, with two vibration devices, either the left or the right device can be vibrated according to the position of the object, or both can be vibrated simultaneously when objects are present on both sides.
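The two-device vibration scheme described above can be sketched as follows; the set-of-sides representation of the detected objects is an assumption for illustration.

```python
def vibration_pattern(object_sides):
    """Choose which of two vibration devices to drive.

    object_sides: a set containing 'left' and/or 'right', indicating on
    which side(s) moving bodies were detected (representation assumed).
    """
    if {"left", "right"} <= object_sides:
        return ("left", "right")   # objects on both sides: vibrate both
    if "left" in object_sides:
        return ("left",)
    if "right" in object_sides:
        return ("right",)
    return ()                      # nothing detected: no vibration
```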
  • FIG. 4A is a plan view showing an example of an ideal installation method of a curve mirror
  • FIG. 4B shows an example of an installation method of the monitoring device (cyber curve mirror) of FIG. 2 corresponding to the curve mirror of FIG. 4A. It is a top view.
  • FIG. 4A it is desirable that the four curve mirrors are installed on the diagonal lines at the intersection.
  • the monitoring device (cyber curve mirror) is provided in each of the four curve mirrors shown in FIG. 4A.
  • The imaging directions of the monitoring cameras of the monitoring devices (cyber curve mirrors) 10a, 10b, 10c, and 10d installed at the four corners (A, B, C, and D) of the intersection are each the same as the direction reflected by the corresponding curve mirror.
  • FIG. 5 is a plan view showing an example of a practical installation method of the curve mirror.
  • In FIG. 5, two-sided optical curve mirrors are installed on columns (poles) at position B on the left side and position A on the right side with respect to the traveling direction of the vehicle driver.
  • FIG. 6A is a plan view showing an example of an installation method of the monitoring device (cyber curve mirror) of FIG. 2 corresponding to the curve mirrors of FIG. 5.
  • monitoring devices (cyber curve mirrors) 10a, 10b, 10c, and 10d are attached to a total of four curve mirrors installed at two locations on a diagonal line, respectively.
  • the imaging direction of the monitoring camera of each monitoring device (cyber curve mirror) is the same as the direction displayed on the corresponding curve mirror. However, in this case, in order to image the required intersection area, it is necessary to change the depression angle in the imaging direction between the two monitoring devices (for example, 10a and 10b) installed at each location.
  • FIG. 6B is a perspective view corresponding to FIG. 6A.
  • FIG. 6B also shows the monitoring devices 10a and 10b (and 10c and 10d), each of which may include a display such as a flat panel display.
  • Here, the depression angle of the monitoring device 10a is θ1, and the depression angle of the monitoring device 10b is θ2 (θ1 ≠ θ2).
  • FIG. 7A is a plan view showing an example of an installation method, different from that of FIG. 6A, of the monitoring devices (cyber curve mirrors) of FIG. 2 corresponding to the curve mirrors of FIG. 5.
  • FIG. 7B is a perspective view corresponding to FIG. 7A.
  • In FIG. 7A, the monitoring device 10c covers the role (imaging region) of the monitoring device 10a in FIG. 6A, and conversely, the monitoring device 10a covers the role (imaging region) of the monitoring device 10c in FIG. 6A.
  • Assume that the four corners of the intersection are, in clockwise order, A (first corner), D (second corner), B (third corner), and C (fourth corner).
  • Of the two monitoring devices installed at A (first corner), one monitoring device 10c is installed so as to image the region containing the intersection entrance between D (second corner) and B (third corner), and the other monitoring device 10d is installed so as to image the region containing the intersection entrance between B (third corner) and C (fourth corner).
  • In this way, each of the monitoring cameras of the monitoring devices 10a, 10b, 10c, and 10d images the area beyond the road entrance on the far side of the intersection.
  • As a result, the two devices at each location can both be installed at the same depression angle θ2.
  • the coordinates of the entrance of the intersection on each captured image of each monitoring device can be kept equal, and the relationship between the distance and the coordinate on each captured image also matches.
  • Furthermore, each monitoring device can be installed uniformly at the same angle (shooting angle, shooting frame pattern direction), and the conditions of the image recognition processing and distance measurement processing of each monitoring device can be made the same.
  • In the case of FIG. 7A, the field of view of the intersection as seen by a vehicle driver approaching from below (the approach direction) is as shown in FIG.
  • For a vehicle approaching from below (the approach direction), information from the monitoring device 10b and the monitoring device 10c is required; for a vehicle approaching from the left, for example, information from the monitoring device 10a and the monitoring device 10d is required.
  • Therefore, the user device (40 or 50 in FIGS. 3A and 3B) must judge exactly which of the plurality of monitoring devices 10a, 10b, 10c, and 10d it should obtain information from by radio signal, according to its own direction of entry.
  • FIG. 8 is a diagram illustrating an example of the data format of a radio signal broadcast by the monitoring apparatus of FIG. 2. FIGS. 9A, 9B, 9C, and 9D are diagrams each showing an example of the detailed contents of FIG. 8.
  • As shown in FIG. 8, the data format includes n (n is an integer of 1 or more) sets of moving body information [1] (60[1]) to moving body information [n] (60[n]), azimuth information 61, and extended information 62.
  • Each moving body information [k] (k is an integer from 1 to n) includes an image recognition ID [k] and distance information [k].
  • Each image recognition ID, each distance information, the azimuth information 61, and the extended information 62 are each 4-bit fields whose meanings are defined, for example, as shown in FIGS. 9A, 9B, 9C, and 9D.
  • Each image recognition ID represents the type of moving object determined by image recognition, as shown in FIG. 9A.
  • Each image recognition ID is generated by the image recognition unit 30 of the monitoring apparatus 10 in FIG. 2.
  • In FIG. 9A, “0000” indicates that there is no moving object (target object / human body), and “0001” indicates that a moving object exists but its type is still being determined.
  • In this system, real-time processing is required; if image recognition takes time (for example, several seconds), there may be situations where no information is passed to the user device side in time. For example, in the case of a vehicle traveling at 20 km/h, the speed is about 5 m/s, so a car 10 m before the intersection enters the intersection in 2 seconds.
  • Therefore, in FIG. 9A, “0000” and “0001” are defined so that information indicating the presence or absence of a moving body can be generated even while the image recognition process is still in progress (that is, while determination of the type of the moving body is incomplete).
  • FIG. 9A also includes the rule that, when the image recognition process is completed, an image recognition ID is selected and generated based on the result: “0010” is selected and generated for a vehicle, “0011” for a bicycle or motorbike, and “0100” for a single pedestrian.
  • the image recognition process is not particularly limited, but is typically performed using a method such as template matching.
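The FIG. 9A code assignments quoted above can be collected into a small lookup table, with the fall-back-to-“0001” rule made explicit. This is an illustrative sketch only; the helper `classify_or_pending` and its match-result labels are hypothetical, not part of the patent:

```python
# Lookup of the FIG. 9A image-recognition-ID codes quoted in the text
# (the remaining 4-bit codes are left undefined here).
IMAGE_RECOGNITION_ID = {
    0b0000: "no moving object",
    0b0001: "moving object present, type under determination",
    0b0010: "vehicle",
    0b0011: "bicycle / motorbike",
    0b0100: "single pedestrian",
}

def classify_or_pending(match_result):
    """Map a (hypothetical) template-matching label to a 4-bit ID, falling
    back to "0001" while the type is still undetermined or unmatched."""
    table = {"vehicle": 0b0010, "bicycle": 0b0011, "pedestrian": 0b0100}
    return table.get(match_result, 0b0001)
```

The fall-back branch reflects the rule described below that “0001” continues to be generated whenever discrimination is incomplete or no registered ID is matched.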
  • In addition, distance information as shown in FIG. 9B is selected and generated from the captured image.
  • The distance information is generated by the distance measuring unit 31 of the monitoring device 10 in FIG. 2.
  • Note that the distance between the moving body and the entrance of the intersection may be detected with high accuracy using an ultrasonic radar or the like, as described in FIG.
  • Here, however, a method for generating distance information simply and at high speed from the coordinates of the moving body in the captured image (frame) of the monitoring camera will be explained with reference to FIGS. 10A and 10B, which are explanatory diagrams illustrating an example of the processing content of the distance measuring unit in the monitoring apparatus of FIG. 2.
  • FIG. 10A shows an example of the detailed positional relationship of each moving body accompanying the image recognition / distance measurement processing of the monitoring devices (cyber curve mirrors) 10b and 10c, taking the intersection of FIG. 7A as an example.
  • FIG. 10B shows an example of an image captured by each monitoring camera of the monitoring devices 10c and 10b in FIG. 10A.
  • a method for selecting and generating distance information of a moving body will be described using a captured image by the monitoring apparatus 10c shown in FIG. 10B as an example.
  • When installing the monitoring device, the administrator or the like determines the position of the “+ marker” in order to set the maximum value of the distance information on the captured image.
  • Here, this “+ marker” is set at the point corresponding to a maximum value of 15 m.
  • The point of the “+ marker” varies depending on the installation angle (depression angle) of the monitoring device (monitoring camera).
  • When the installation angle changes, the reference position on the imaging screen (in this case, the 0 m line corresponding to the entrance of the intersection) changes, and the relationship between relative coordinates from the reference position and the corresponding actual distance also changes.
  • That is, when the depression angle of the surveillance camera is large, the camera looks down on objects from above, and when the depression angle is small, it views them from the side, so the relationship between coordinates on the captured image and actual distance changes accordingly.
  • In this respect, if the installation method of FIGS. 7A and 7B is used, the installation angle (depression angle) of each monitoring device can be made the same.
  • In FIG. 7B, the installation angle (depression angle) θ2 is the same for every device.
  • As a result, the relationship between coordinates on the captured image and actual distance is the same for each device, and the reference position (the 0 m line corresponding to the entrance of the intersection) on the captured image is substantially the same.
  • Strictly speaking, the reference position changes slightly according to the width of each road at the crossroads. For this reason, the coordinates of the reference position (0 m) can be defined in common for all monitoring devices, but in order to increase accuracy, the coordinates of the reference position (0 m) may be determined individually for each monitoring device.
  • Next, the administrator or the like determines the position of the “+ marker” (here, the maximum value is 15 m).
  • If the installation angles (depression angles) of the respective monitoring devices (monitoring cameras) are the same, the relationship between coordinates on the captured image and actual distance is uniquely determined, so the position of the “+ marker” can be determined automatically.
  • In this case, the administrator or the like only needs to input to the monitoring device a distance or the like for setting the “+ marker”.
  • On the other hand, in an installation method such as that of FIG. 6A, the installation angle (depression angle) differs for each monitoring device.
  • In that case, the administrator or the like registers the installation angle (depression angle) and the reference position for each monitoring device, and sets the “+ marker”.
  • From these, the monitoring apparatus can calculate the relationship between coordinates on the captured image and actual distance by a predetermined arithmetic expression.
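As an illustration of such a predetermined arithmetic expression, a simple pinhole-camera ground-plane model relates an image row to an actual distance given the registered depression angle. The camera height, depression angle, and field-of-view values below are hypothetical installation parameters, not values from this document:

```python
import math

# Hedged sketch: ground distance from a pixel row under a pinhole model.
# h_m (camera height), theta_deg (depression angle of the optical axis below
# horizontal), and fov_deg (vertical field of view) are assumed parameters
# of the kind the administrator would register for each monitoring device.
def row_to_distance(row, image_h=480, h_m=5.0, theta_deg=45.0, fov_deg=40.0):
    """Approximate ground distance (m) for pixel row `row` (0 = top of image).
    Rows nearer the bottom of the image map to shorter distances."""
    # angle below horizontal of the viewing ray through this row
    alpha = math.radians(theta_deg + (row - image_h / 2) / image_h * fov_deg)
    return h_m / math.tan(alpha)
```

With a large depression angle the rows compress near the camera (top-down view); with a small one the same pixel offset spans much more ground, which is exactly why the coordinate-to-distance relationship must be recomputed or tabulated per installation angle.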
  • In this case, however, the accuracy of the distance measurement and of the image recognition described above may differ from one monitoring device to another.
  • Therefore, it is desirable to use an installation method as shown in FIGS. 7A and 7B.
  • In some cases, rather than obtaining the relationship between coordinates on the captured image and actual distance by a predetermined arithmetic expression, these relationships may be stored in a table in advance and the distance measured based on this table.
  • In this way, distance information as shown in FIG. 9B can be selected and generated within the range from the reference position (0 m) to the “+ marker” (15 m).
  • Here, distance measurement that does not particularly require high accuracy is performed, and the distance is expressed in six steps: 0 m (“0000”), 1 m (“0001”), 3 m (“0010”), 5 m (“0011”), 10 m (“0100”), and 15 m (“0101”).
  • Even with an optical curve mirror, a user can usually judge distance only with a rough sense of near or far, so it is sufficient to perform distance measurement with a certain degree of accuracy.
  • If the accuracy of distance measurement is made extremely high, the data size (number of bits) of the distance information in FIG. 9B increases, and the resource burden on the monitoring device side that measures the distance and on the user device side that receives the distance information may also increase; therefore, an accuracy on the level of several meters is used in the example of FIG. 9B. Of course, the accuracy of distance measurement, the number of steps, and the maximum value of distance measurement (the position of the “+ marker”) can be changed as necessary.
  • Specifically, when performing the image recognition process, the distance measuring unit 31 illustrated in FIG. 2 determines which of the six steps defined in FIG. 9B the bottom line portion of the detection frame (for example, the template frame) of the moving body (here, an automobile) falls on, and thereby determines the distance information of FIG. 9B.
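The six-step quantization of FIG. 9B can be sketched as follows. The step boundaries (nearest-step rounding) are an assumption, since the text lists only the step values and their codes:

```python
# The six 4-bit distance steps of FIG. 9B: (distance in meters, code).
STEPS = [(0.0, 0b0000), (1.0, 0b0001), (3.0, 0b0010),
         (5.0, 0b0011), (10.0, 0b0100), (15.0, 0b0101)]

def distance_code(distance_m):
    """Return the 4-bit code of the step nearest to distance_m.

    Inputs are clamped to the 0 m ("reference position") to 15 m
    ("+ marker") range; nearest-step rounding is an assumption."""
    d = min(max(distance_m, 0.0), 15.0)
    return min(STEPS, key=lambda step: abs(step[0] - d))[1]
```

In practice the measured quantity would be the image row of the detection frame's bottom line, converted to meters via the per-installation coordinate-to-distance relationship described above, then quantized with a function like this.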
  • The point (coordinates) set by the “+ marker” also serves as the start point for image recognition and distance measurement.
  • That is, when the bottom line portion of the detection frame of a moving body falls within the range from the reference position (0 m) to the position of the “+ marker” (for example, 15 m), the image recognition unit 30 in FIG. 2 starts generating the image recognition ID.
  • the position of “+ marker” can be arbitrarily set as long as it is within the photographed image.
  • the timing for notifying the user apparatus of the presence of the moving body changes depending on the set value. For example, if it is desired to notify in advance at the earliest possible time (when information is to be given to the user with a margin), the position of the “+ marker” may be set far (upper side on the captured image). For example, when an intersection or the like is adjacent, the corner of the adjacent intersection may be set to the position of “+ marker” (several meters to several tens of meters).
  • In this way, for each moving body, the image recognition ID and its distance information are selected and generated.
  • When a moving body enters the predetermined range (here, 0 m to 15 m), the image recognition process is started and “0001” in FIG. 9A is generated as the image recognition ID.
  • This “0001” continues to be generated until the discrimination of the type of the corresponding moving body is completed, or until the moving body leaves the predetermined range (here, 0 m to 15 m) before the discrimination is completed.
  • “0001” also continues to be generated as the image recognition ID when, for example, the moving object cannot be discriminated or no registered image recognition ID is matched. For this reason, even if the monitoring device takes longer than expected for image recognition, or the object cannot be identified for some reason, it can still transmit to the user device the fact that at least some moving object exists, together with its distance information. As a result, it is possible to warn the user of possible danger in real time and to improve the user's safety.
  • As described above, each moving body information 60[k] is information with a small data size.
  • If the captured image itself were transmitted, a communication band of at least several MB/s or more would be required.
  • In contrast, in this system, a captured image of several tens of frames is acquired per second, and moving body information 60[k] is generated and transmitted for each frame.
  • In this case, a communication band of only several hundred B/s to several kB/s is sufficient, so real-time performance can be sufficiently ensured.
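The bandwidth claim can be checked with a little arithmetic: each frame carries n pairs of 4-bit IDs and 4-bit distance codes, plus the 4-bit azimuth and 4-bit extended fields. The frame rate and object count below are illustrative assumptions:

```python
# Rough bandwidth estimate for the FIG. 8 format: n moving bodies per frame,
# each 4 + 4 bits, plus one 4-bit azimuth field and one 4-bit extended field.
def bytes_per_second(n_bodies, frames_per_s=30):
    bits_per_frame = n_bodies * 8 + 4 + 4
    return bits_per_frame * frames_per_s / 8

# e.g. 5 moving bodies at 30 frames/s -> 180 B/s, comfortably within the
# several-hundred-B/s figure quoted above.
```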
  • Next, FIG. 9C shows the detailed contents of the azimuth information 61 in FIG. 8.
  • the azimuth information 61 represents the imaging direction of the monitoring camera in the monitoring device 10, and is normally fixedly set in the azimuth information storage unit 32 (FIG. 2) of the monitoring device 10 when the monitoring device 10 is installed.
  • For example, when the imaging direction of the monitoring camera is north, “0000” is set in the azimuth information storage unit 32, and when it is another direction, the corresponding code (for example, “0100”) is set in the azimuth information storage unit 32.
  • When the monitoring device 10 of FIG. 2 transmits this azimuth information 61, the user devices 40 and 50 shown in FIGS. 3A and 3B can select the necessary information by comparing it with the output of their own azimuth detection unit (compass function) 49.
  • For example, when the traveling direction of the user device is north, it suffices to process, among the data (FIG. 8) transmitted from the plurality of monitoring devices, the data whose azimuth information 61 indicates the east direction (“0001”) and the data whose azimuth information 61 indicates the west direction (“0011”).
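A sketch of this selection rule, assuming the 4-bit azimuth codes N = “0000”, E = “0001”, S = “0010”, W = “0011” (north, east, and west appear in the text; the code for south is an assumption):

```python
# Assumed 4-bit azimuth codes (N, E, W are quoted in the text; S is assumed).
HEADING_CODE = {"north": 0b0000, "east": 0b0001, "south": 0b0010, "west": 0b0011}

# Left/right blind-spot directions for each own traveling direction.
LEFT_RIGHT = {
    "north": ("west", "east"), "east": ("north", "south"),
    "south": ("east", "west"), "west": ("south", "north"),
}

def blind_spot_codes(own_heading):
    """Azimuth codes of the monitoring-device data the user device should
    process: the cameras covering the roads entering from its left and right."""
    left, right = LEFT_RIGHT[own_heading]
    return {HEADING_CODE[left], HEADING_CODE[right]}
```

For a northbound user this yields the east (“0001”) and west (“0011”) codes, matching the example above.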
  • Next, FIG. 9D shows the detailed contents of the extended information 62 in FIG. 8.
  • the extended information 62 includes, for example, information on road signs that exist in the imaging direction of the monitoring camera in the monitoring device 10.
  • The extended information 62 is normally fixedly set in the traffic information storage unit 34 (FIG. 2) of the monitoring device 10 when the monitoring device 10 is installed.
  • The extended information 62 set in the traffic information storage unit 34 may be appropriately updated if the environment changes thereafter. For example, when vehicle entry is prohibited in the imaging direction of the monitoring device, “0010” is set in the traffic information storage unit 34, and when the road there is one-way, “0101” is set in the traffic information storage unit 34.
  • FIG. 11A is a diagram illustrating an example of display contents based on the image recognition ID of FIG. 9A in the user device of FIG. 3A or FIG. 3B, and FIG. 11B is a diagram illustrating an example of display contents based on the extended information of FIG. 9D in the user device of FIG. 3A or FIG. 3B.
  • FIG. 12A is a plan view illustrating an example of traffic conditions at an intersection for explaining an example of display contents in the user device of FIG. 3A or FIG. 3B, and FIG. 12B is a diagram showing an example of the display screen in the user device of FIG. 3A or FIG. 3B under the traffic conditions of FIG. 12A.
  • Here, it is assumed that an image display unit is provided in advance as the user notification unit 45 of the user devices 40 and 50 in FIG. 3A or FIG. 3B, and that image display application software for the safety support system of the first embodiment is installed in the user devices 40 and 50.
  • The application software includes an image library (icons, symbols, pictograms, etc.) corresponding to each value of the image recognition ID.
  • The user devices 40 and 50 also recognize the distance information (the distance from the intersection corner) received as a pair with the image recognition ID based on the data format of FIG. 8, and, reflecting these pieces of information, display a screen such as that shown in FIG. 12B.
  • Specifically, on a 3D display screen of a navigation system, or on an image ahead of the own vehicle (moving image or still image) captured by an in-vehicle camera, the image library entry corresponding to each image recognition ID is one-dimensionally arranged and displayed at the position separated, from the intersection corner as a base point, by the distance indicated by the distance information. In other words, using AR technology, each moving object existing in the blind spot is overlaid on the live image.
  • To do this, the application software of the user devices 40 and 50 first receives the data from each of the monitoring devices 10a, 10b, 10c, and 10d, and recognizes the azimuth information 61 included in each data. The application software also recognizes, based on its own azimuth detection unit (compass function) 49, that its traveling direction is north, and therefore that it needs information on the left and right directions (here, west and east) that are blind spots with respect to its traveling direction. As a result, the application software treats as processing targets the data from the monitoring device 10c in which “west” is set in the azimuth information 61 and the data from the monitoring device 10b in which “east” is set in the azimuth information 61.
  • Then, for example, the application software displays the vehicle symbol/icon based on FIG. 11A at a position 10 m to the west, starting from the entrance of the west road (for example, the corner of the intersection).
  • By using such a system, the user can not only visually check other approaching vehicles, pedestrians, and so on in the optical curve mirror, but can also accurately recognize information such as approaching vehicles and pedestrians in real time via a user device that can be realized by a general-purpose information terminal or wireless communication device. As a result, the user's cognitive ability is further enhanced, human errors such as oversight and misjudgment that can occur when viewing the optical curve mirror can be reduced, and accidents at intersections can be prevented in advance. Furthermore, as shown in FIG. 11A and FIG. 12B, since the user apparatus acquires information on each moving object as an image recognition ID and replaces it with an image library entry (icon, symbol, pictogram, etc.) for display, the received data size can be reduced (which improves real-time performance), and the privacy of the moving objects can also be protected.
  • FIG. 13 is a diagram showing an example of a display screen in which FIG. 12B is applied in the traffic situation of FIG. 12A.
  • As shown in FIG. 13, if a moving object is far away and does not fall within the display screen range (here, within 10 m), the application software of the user device may display the symbol together with the distance (as text), sequentially updating only the distance text, and may move the symbol according to the distance once the object enters the range.
  • Further, the application software may display a road sign corresponding to the extended information 62, limited to, for example, when the driver makes a left turn (for example, interlocked with the turn signal or steering wheel position information).
  • Furthermore, as indicated by reference numeral 82 in FIG. 13, while receiving information indicating the presence of a moving object (that is, other than “0000” in FIG. 9A) from a monitoring device to be processed, the application software may always display an alert.
  • FIGS. 12B and 13 show examples of the 3D display screen, but the present invention is not limited to this; it is also possible to notify the user of moving object information on a 2D display screen such as that of FIG. 12A.
  • Alternatively, the presence of a moving body, or the presence or absence of a moving body approaching from the left or right, may be notified by voice, warning sound, vibration, or the like.
  • the notification by voice, warning sound, vibration, or the like may be performed in parallel with the image display when the image display unit is present in order to further enhance the user's cognitive ability.
  • For notification by voice, the left and right speakers of audio equipment or the like can be used, and it is sufficient to announce, for example, “There is an approaching vehicle from the left” or “Left-turn road ahead, height restricted.”
  • For notification by vibration, for example, two vibration devices may be installed on the left and right, and if there is an approaching vehicle from the left side, the user may be notified by vibrating the left vibration device.
  • FIG. 14 is a schematic diagram showing a hierarchical structure example of a data format in units of intersections when a plurality of monitoring devices installed at the intersections transmit data based on the data format of FIG.
  • the data format defines the format of the application layer.
  • the components of the data format are an image recognition ID, distance information, direction information, and extended information as described in FIG.
  • Each field is defined by a 4-bit code here, but this bit length may be changed as necessary.
  • Within the performance limits of one surveillance camera, n moving objects are recognized.
  • each of the n mobile objects whose images have been recognized is represented by a pair of an image recognition ID and its distance information.
  • Once determined, the image recognition ID is not changed, and only the distance information is updated. As described with reference to FIG. 10B and the like, this update continues until the moving object having that image recognition ID goes out of the predetermined range based on the “+ marker” on the captured image, or until the communication is completed.
  • In this way, n pieces of moving body information (8 bits each), each a pair of an image recognition ID and distance information, are generated corresponding to the n image-recognized moving bodies.
  • The azimuth information (4 bits) and the extended information (4 bits) are then added to the generated information (8 bits each) of the n moving bodies.
  • Each piece of information is generated by the image processing / signal generation unit 21, the azimuth information generation unit 22, and the extended information generation unit 24 in FIG. 2, and the CPU 25 in FIG. 2 collects these pieces of information as data for each monitoring device.
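A minimal sketch of packing and unpacking the per-device data of FIGS. 8 and 14, assuming one byte per moving-body pair and a final byte holding the azimuth and extended nibbles (the exact bit ordering is not specified in the text):

```python
# Hedged sketch of the FIG. 8 / FIG. 14 application-layer frame:
# n (ID, distance) nibble pairs followed by one azimuth nibble and one
# extended-information nibble. Field order and byte packing are assumptions.
def pack_device_data(bodies, azimuth, extended):
    """bodies: list of (image_recognition_id, distance_code), 4 bits each."""
    out = bytearray()
    for rec_id, dist in bodies:
        out.append(((rec_id & 0xF) << 4) | (dist & 0xF))  # one byte per moving body
    out.append(((azimuth & 0xF) << 4) | (extended & 0xF))  # trailing shared fields
    return bytes(out)

def unpack_device_data(frame):
    """Inverse of pack_device_data."""
    bodies = [((b >> 4) & 0xF, b & 0xF) for b in frame[:-1]]
    azimuth, extended = (frame[-1] >> 4) & 0xF, frame[-1] & 0xF
    return bodies, azimuth, extended
```

With n = 2 moving bodies the whole per-device payload is 3 bytes, which is consistent with the very small communication band discussed earlier.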
  • The number of surveillance cameras installed per location varies depending on the situation of the intersection and so on. For example, two to four surveillance cameras are installed at crossroads without a signal, and one or two at a T-junction. The number also varies depending on the type of surveillance camera: with the recent development of camera lens technology and image correction technology, there are surveillance cameras equipped with fisheye lenses or 360-degree fisheye lenses and advanced digital image processing, in which case the number of surveillance cameras can be further reduced.
  • Thus, the number (m) of monitoring cameras varies in units of installation locations, and the data from each of the m monitoring cameras (monitoring devices) is transmitted to the user device.
  • each monitoring device performs transmission so as not to cause data collision according to the wireless communication method.
  • For example, when a so-called wireless LAN (IEEE 802.11a/b/g/n) is used, the plurality of monitoring devices communicate with the user device on channels in different frequency bands.
  • In this case, for example, a channel assigned for an existing use or an empty channel can be used.
  • FIGS. 15A and 15B are flowcharts showing an example of the detailed processing contents of the monitoring apparatus of FIG. 2.
  • the m monitoring devices 10 installed at the intersection perform imaging using the monitoring cameras (camera sensor 27, infrared sensor 28) included in the sensor unit 20 of FIG. 2 (steps S101a and S101b).
  • each monitoring apparatus 10 performs recognition of each moving body and distance measurement of each moving body from the captured image using the image processing / signal generation unit 21 in FIG. 2 (steps S102a and S102b). Details of the processing contents will be described with reference to FIG. 15B.
  • Next, each monitoring device 10 generates data for each monitoring device as shown in FIGS. 8 and 14 (steps S103a and S103b). Specifically, to the processing result of the image processing / signal generation unit 21, each monitoring apparatus 10 adds the azimuth information 61 generated by the azimuth information generation unit 22 in FIG. 2 and the extended information 62 generated by the extended information generation unit 24 in FIG. 2.
  • Here, taking as an example the case where the data in units of installation locations (intersections) are collected into one frame, one of the plurality of monitoring devices 10 obtains the data of the other monitoring devices 10 and generates the data for the installation location unit (step S104). At this time, that monitoring device 10 adds extended information as necessary. Then, it transmits the data for the installation location unit using the wireless communication unit 23 of FIG. 2 (step S105).
  • imaging by the monitoring camera in steps S101a and S101b is periodically performed, for example, several tens of frames or more per second, and processing after steps S102a and S102b is performed each time.
  • In FIG. 15B, the image recognition unit 30 in FIG. 2 first detects any moving objects that exist within a predetermined range on the captured image (step S201). Here, “within the predetermined range” means the range based on the position of the “+ marker” as described in FIG. 10B. A moving object is detected from the presence or absence of time-series variation in the coordinates of an object across frames. When one or more moving objects are detected in step S201, the image processing / signal generation unit 21 sets a first moving object, appropriately chosen from the detected moving objects, as the processing target (step S202).
  • Next, the distance measuring unit 31 in FIG. 2 measures the distance of the moving object to be processed using, for example, the method described in FIG. 10B (step S208), and generates the corresponding distance information.
  • the image processing / signal generation unit 21 determines whether or not the determination of the types of all the moving bodies detected in Step S201 has been completed (Step S212).
  • When the image processing / signal generation unit 21 has completed the discrimination of the types of all the moving bodies (that is, has completed moving body information [1] (60[1]) to moving body information [n] (60[n]) in FIG. 8), the image recognition / distance measurement process ends and the process returns to FIG. 15A.
  • On the other hand, if there is an unprocessed moving body, the image processing / signal generation unit 21 sets the next moving body as the processing target (step S214), and the process returns to steps S203 and S208.
  • In addition, when a predetermined period based on the communication interval of the wireless communication unit 23 (referred to as the limit period) has elapsed, the image processing / signal generation unit 21 terminates the image recognition / distance measurement process and the process returns to FIG. 15A (step S213).
  • The predetermined period in step S204 described above can be determined, for example, as the period obtained by dividing the limit period by the number of moving bodies detected in step S201, or as a period shorter than that. In the latter case, or in the former case when some moving body completes type discrimination early, a surplus period is secured within the limit period, which can be used to retry processing of a moving body whose type could not be determined. In that case, for example, the moving bodies to be processed may be set in order starting from the one closest to the intersection.
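The time budgeting described above can be sketched as follows; `schedule` is a hypothetical helper that divides the limit period evenly among the detected moving bodies and carries surplus time forward to later ones:

```python
# Hedged sketch of per-object time budgeting under the limit period.
# costs[i] is the (hypothetical) recognition time needed for moving body i.
def schedule(limit_period, costs):
    """Return, per moving body, whether type discrimination completes in time,
    processing in order and carrying surplus time forward."""
    budget = limit_period / len(costs)   # even split of the limit period
    done, surplus = [], 0.0
    for cost in costs:
        available = budget + surplus
        if cost <= available:
            done.append(True)            # finished early: bank the leftover
            surplus = available - cost
        else:
            done.append(False)           # time ran out; ID stays "0001"
            surplus = 0.0
    return done
```

A body that misses its budget would still be reported with image recognition ID “0001” and its distance information, per the rule of FIG. 9A.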
  • FIG. 16 is a flowchart showing an example of detailed processing contents of the user device shown in FIG. 3A or 3B.
  • The processing flow in FIG. 16 inherits the usage conventions of existing wireless communication systems and wireless information devices / terminals, and operates, for example, on their application layers (that is, as application software).
  • That is, application software built for the OS of the information terminal is installed on the information terminal, having a given wireless communication function, that constitutes the user apparatus.
  • the processing flow in FIG. 16 is operated by application software and does not depend on a specific wireless communication system.
  • First, when the user devices 40 and 50 start this application software and set the cyber curve mirror mode, they enter a search mode for establishing a communication link with the monitoring device.
  • the user devices 40 and 50 establish a communication link with the monitoring device, and start receiving data using the wireless communication units 41 and 51 of FIG. 3A or 3B when synchronization is established (step S301).
  • In step S301, the data for each installation location described in FIG. 14 is received.
  • the user devices 40 and 50 recognize the data of each monitoring device unit from the received data of the installation site unit (step S302), and detect the extended information of the installation site unit (step S311).
  • the user devices 40 and 50 detect azimuth information included in each data of each monitoring device (step S303). Subsequently, the user devices 40 and 50 recognize their own traveling direction based on the azimuth detecting unit (compass function) 49 described above, and compare this traveling direction with each azimuth information detected in step S303. The necessary data is selected from the data for each monitoring device (step S304). Next, the user devices 40 and 50 set one of the selected data as a processing target (step S305).
  • the user devices 40 and 50 detect each moving body information from the processing target data (step S306), and also detect extended information for each monitoring device from the processing target data (step S312). . Subsequently, for each moving body information, the user devices 40 and 50 determine an icon / symbol or the like corresponding to the image recognition ID based on FIG. 11A, and based on the distance information and the corresponding azimuth information, Coordinates for displaying symbols and the like are determined (step S307). That is, the user devices 40 and 50 display the icons, symbols, and the like as shown in FIG. 12B based on the orientation information included in the data to be processed in step S306 (data in units of monitoring devices). Further, based on the distance information corresponding to the icon / symbol etc. (image recognition ID) and the scale of the screen to be actually displayed, the coordinates located in the above-mentioned direction are determined. Note that the information illustrated in FIG. 11A is included in the application software of the user device (information terminal).
  • the user devices 40 and 50 determine whether the processing of steps S306, S312, and S307 has been completed for all the data selected in step S305 (data in units of monitoring devices) (step S308). If unfinished data remains, the user devices 40 and 50 set the next data as the processing target and return to steps S306 and S312 (step S310). On the other hand, when no unfinished data remains, the user devices 40 and 50 cause the image display unit included in the user notification unit 45 to perform the display shown in FIG. 12B based on each icon or symbol determined in step S307 and its coordinates (step S309).
  • the user devices 40 and 50 may perform notification/warning by controlling the audio output unit and the vibration unit included in the user notification unit 45 instead of, or in addition to, the image display unit.
  • the user devices 40 and 50 also display on the image display unit an image based on the extended information detected in step S311 or step S312, using the information shown in FIG. 11B included in the application software.
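The selection and display flow above (steps S303 to S307) can be sketched in outline as follows. This is a minimal illustration only: the record layout, the ID-to-icon table standing in for FIG. 11A, the heading-comparison rule, and the screen-coordinate formula are all assumptions of the sketch, not the patented implementation.

```python
import math

# Stand-in for the FIG. 11A table mapping image recognition IDs to icons
# (the real ID assignments are not reproduced here).
ICON_TABLE = {0b0001: "car", 0b0010: "bicycle", 0b0011: "pedestrian"}

def select_by_heading(per_device_data, own_heading_deg, tolerance_deg=90):
    """Step S304 (assumed rule): keep data whose imaging azimuth is not
    aligned with the user's own traveling direction, i.e. blind-spot views."""
    selected = []
    for data in per_device_data:
        diff = abs((data["azimuth_deg"] - own_heading_deg + 180) % 360 - 180)
        if diff >= tolerance_deg:
            selected.append(data)
    return selected

def to_screen(azimuth_deg, distance_m, scale_px_per_m=4.0, origin=(160, 240)):
    """Step S307 (assumed formula): coordinates in the imaging azimuth,
    at a distance scaled to the actually displayed screen."""
    rad = math.radians(azimuth_deg)
    x = origin[0] + distance_m * scale_px_per_m * math.sin(rad)
    y = origin[1] - distance_m * scale_px_per_m * math.cos(rad)
    return round(x), round(y)

def build_display_list(selected):
    """Steps S306-S307: one (icon, x, y) entry per moving body."""
    items = []
    for data in selected:
        for body in data["bodies"]:
            icon = ICON_TABLE.get(body["id"], "unknown")
            items.append((icon,) + to_screen(data["azimuth_deg"], body["distance_m"]))
    return items
```

A display list built this way would then drive a FIG. 12B-style rendering in step S309.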
  • by using the safety support system and the safety support apparatus described above, the occurrence of vehicle-to-vehicle and vehicle-to-pedestrian accidents caused by encounters at intersections without signals and with poor visibility, T-junctions, and the like can typically be reduced, and safety can be improved.
  • by using the image recognition ID and the distance information, real-time performance can be improved and personal information can be protected.
  • by using the azimuth information, it becomes possible to appropriately select the necessary data.
  • in the first embodiment described above, the case where a monitoring device (cyber curve mirror) including a monitoring camera with a standard lens is installed mainly at a four-way intersection was described as an example. Embodiment 2 will be described by taking as an example a case where a monitoring device including a monitoring camera with a special lens is installed at a three-way road (T-junction) or a sharp curve. Normally, the viewing angle of a standard lens is about 25° to 50°. As special lenses, wide-angle lenses with a viewing angle of about 60° to 100° and fisheye lenses with a viewing angle of 180° or more are known.
  • FIG. 17 is a plan view showing an installation example of the monitoring device on a sharp curve and an operation example at that time in the safety support system according to the second embodiment of the present invention.
  • a curve mirror is installed at the apex of the sharp curve, and a monitoring device (cyber curve mirror) 10 is attached to the curve mirror.
  • the monitoring device 10 includes a monitoring camera having a fisheye lens.
  • the monitoring device 10 divides the captured image into left and right halves from the center, and behaves as if each divided image were acquired by one of two pseudo monitoring devices 90a and 90b. Thereby, each subsequent operation can be performed in the same manner as in the first embodiment.
  • when the monitoring device 10 divides the captured image into left and right from the center, the right captured image serves as the captured image of the pseudo monitoring device 90a, and the left captured image serves as the captured image of the pseudo monitoring device 90b.
  • the monitoring device 10 performs image recognition processing and distance measurement processing on the two captured images in the same manner as in the first embodiment, and generates two pieces of the data in units of monitoring devices shown in the figure. Further, when the monitoring device 10 is installed, the division information shown in FIG. 9C (“1000”: right, “1001”: left) is set as the azimuth information of the pseudo monitoring devices.
  • the user device of the vehicle approaching from the lower right acquires the information of the pseudo monitoring device (left) 90b, and the user device of the vehicle approaching from the lower left acquires the information of the pseudo monitoring device (right) 90a.
  • specifically, the user device acquires the information of the pseudo monitoring device (left) 90b when its traveling direction is tilted to the left as detected by its own azimuth detecting unit (compass function) 49, and acquires the information of the pseudo monitoring device (right) 90a when its traveling direction is tilted to the right. Thereby, information on the blind spot in the sharp curve can be acquired in addition to the information from the optical curve mirror.
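The left/right split behind the two pseudo monitoring devices can be illustrated with a toy frame. Treating the image as a list of pixel rows, the helper name, and the record layout are assumptions of this sketch; only the division rule and the FIG. 9C codes ("1000": right, "1001": left) come from the text above.

```python
def split_fisheye_frame(frame):
    """frame: 2-D list (rows of pixels). Divide it from the center into left
    and right halves, one per pseudo monitoring device, tagged with the
    FIG. 9C-style division information."""
    half = len(frame[0]) // 2
    right_half = [row[half:] for row in frame]   # pseudo monitoring device 90a
    left_half = [row[:half] for row in frame]    # pseudo monitoring device 90b
    return [
        {"azimuth_bits": "1000", "image": right_half},  # "1000" = right
        {"azimuth_bits": "1001", "image": left_half},   # "1001" = left
    ]
```

Each returned half would then go through the same image recognition and distance measurement processing as a full frame in the first embodiment.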
  • in this example, “1000” (right) and “1001” (left) are defined as the azimuth information for the sharp curve, so that the user apparatus can determine the shape of the sharp curve. It is also possible to use azimuth information such as east, west, south, and north instead. For example, “0101” (southeast) is set in the pseudo monitoring device (right) 90a, and “0110” (southwest) is set in the pseudo monitoring device (left) 90b. In this case, the user device of an approaching vehicle heading in the northwest direction may recognize that the sharp curve continues in the southwest direction and acquire the information of the pseudo monitoring device (left) 90b.
  • FIGS. 18A and 18B are plan views showing examples of a general arrangement method of curve mirrors on a three-way road.
  • FIG. 18C is a plan view showing an example of installation of the monitoring device on the three-way road and an example of operation at that time in the safety support system according to Embodiment 2 of the present invention.
  • the optical curve mirror is usually arranged as shown in FIG. 18A or 18B.
  • in FIG. 18A, a vehicle approaching from below can visually recognize a moving body coming from the right, and conversely, a vehicle approaching from the right can visually recognize a moving body coming from below. That is, in the case of left-hand traffic, it is necessary to detect, as a minimum condition, the danger of a moving body entering from the right.
  • this condition can be satisfied by arranging one single-faced optical curve mirror as shown in FIG. 18A.
  • on the other hand, when one double-faced optical curve mirror is arranged at the end of the road extending in the vertical direction of the T-junction as shown in FIG. 18B, a vehicle approaching from below can visually recognize moving bodies coming from the left and right, and vehicles approaching from the left and right can visually recognize a moving body coming from below.
  • when a monitoring device is provided in addition to the curve mirror of FIG. 18B, if monitoring cameras with standard lenses are used, it is necessary to install two monitoring devices that capture images in the same directions as those reflected on the two mirror surfaces, as described in the first embodiment.
  • one monitoring device 10 is added to the curve mirror of FIG. 18B, and a fisheye lens having a viewing angle of 180 ° is applied to the monitoring camera of the monitoring device 10.
  • as in the case of FIG. 17, the monitoring device 10 divides the captured image into three parts in units of 60°, and behaves as if each divided image were acquired by one of three pseudo monitoring devices 91a, 91b, and 91c.
  • when the monitoring device 10 divides the captured image into three parts, the right captured image serves as the captured image of the pseudo monitoring device 91a, the central captured image serves as the captured image of the pseudo monitoring device 91b, and the left captured image serves as the captured image of the pseudo monitoring device 91c.
  • the monitoring device 10 performs image recognition processing and distance measurement processing on the three captured images in the same manner as in the first embodiment, and generates three pieces of the data in units of monitoring devices shown in the figure.
  • the orientation information of each captured image is set for the monitoring device 10 in the same manner as in the first embodiment.
  • of the azimuth information shown in FIG. 9C, “0001” (east) is set in the pseudo monitoring device 91a, “0010” (south) is set in the pseudo monitoring device 91b, and “0011” (west) is set in the pseudo monitoring device 91c.
  • the user apparatus of the approaching vehicle can obtain necessary information using the same operation as in the first embodiment.
  • the user device of an approaching vehicle traveling in the north direction needs to acquire information on the left and right directions (east-west directions) relative to its traveling direction. Therefore, the user device selects the data from the pseudo monitoring devices 91a and 91c as processing targets, and notifies/warns the user using an image, sound, vibration, or the like based on that data.
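One way the heading-based selection at the three-way road could look, using the FIG. 9C-style 4-bit codes named above ("0001" east, "0010" south, "0011" west). The heading-to-wanted-azimuths table and function names are illustrative assumptions of this sketch.

```python
# Decoding of the FIG. 9C-style codes used above; "0000" (north) is an
# assumed extension for completeness.
AZIMUTH_CODES = {"0000": "north", "0001": "east", "0010": "south", "0011": "west"}

# Assumed rule: for each traveling direction, monitor the lateral
# (left/right) azimuths, as in the north-bound example in the text.
WANTED_BY_HEADING = {
    "north": {"east", "west"},
    "south": {"east", "west"},
    "east": {"north", "south"},
    "west": {"north", "south"},
}

def select_pseudo_devices(per_device_data, heading):
    """Keep only per-pseudo-device data whose imaging azimuth is lateral
    to the given traveling direction."""
    wanted = WANTED_BY_HEADING[heading]
    return [d for d in per_device_data
            if AZIMUTH_CODES.get(d["azimuth_bits"]) in wanted]
```

For the FIG. 18C arrangement, a north-bound user device would thus keep the data of pseudo devices 91a (east) and 91c (west) and discard 91b (south).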
  • additional information may also be added. That is, like the data in units of installation locations in FIG. 14 described above, it is beneficial to add extended information in units of installation locations in addition to the azimuth information and the extended information in units of monitoring devices.
  • as this extended information, additional information on the road shape, such as “1000” (sharp curve), “1001” (T-shaped road), and “1011” (three-way road) as shown in FIG. 9D, can be added.
  • the direction of the required information may change depending on the road shape. In such a case, even when the user device itself has no function of recognizing the road shape, it can make a judgment according to the road shape by receiving the road shape from outside as extended information.
  • FIG. 19 is a plan view showing an application example of FIG. 18C.
  • the safety support system according to the present embodiment can be applied not only to an outdoor road but also to an indoor road as shown in FIG.
  • in such an environment, a safety mirror using a reflecting mirror may be arranged instead of the curve mirror shown in FIG. 18C.
  • the monitoring device 10 can be attached to such a safety mirror. In this case, the monitoring device 10 can be linked to the warning lamp 92, an alarm sound, or the like.
  • when a moving body is detected in two or more of the captured images of the pseudo monitoring devices 91a, 91b, and 91c, the monitoring device 10 activates the warning lamp 92, an alarm sound, or the like. As a result, each pedestrian or the like entering the intersection of the passage can be notified/warned that a moving body (for example, another pedestrian) is approaching from another direction, and a collision at the encounter can be prevented.
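The interlock with the warning lamp 92 reduces to a one-line rule over the per-divided-image detection results; the function name and input format are assumptions of this sketch.

```python
def should_warn(detections_per_image):
    """detections_per_image: list of moving-body counts, one entry per
    divided captured image (pseudo monitoring devices 91a-91c).
    Warn when bodies appear in two or more divided images, i.e. when
    moving bodies approach the intersection from different directions."""
    images_with_bodies = sum(1 for n in detections_per_image if n > 0)
    return images_with_bodies >= 2
```

A single body, however large its count in one image, would not trigger the lamp under this rule, since no encounter from another direction is possible.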
  • the safety support system according to the present embodiment can be applied not only to the environment shown in FIG. 19 but also to various environments where such various mirrors are arranged. In any case, azimuth information is set for each monitoring camera imaging screen (including divided imaging screens), whereby notification/warning of moving bodies can be appropriately performed using this azimuth information.
  • furthermore, the number of monitoring devices to be physically installed can be reduced, and accordingly the cost can be reduced.
  • FIGS. 20A and 20B are explanatory diagrams illustrating examples of general characteristics of a wireless LAN.
  • FIG. 20A shows a relationship example between a distance from an access point (hereinafter abbreviated as AP) and reception sensitivity (RSSI: Received Signal Strength Indicator). The RSSI decreases as the distance from the AP increases.
  • FIG. 20B shows an example of the relationship between the number of receiving terminals connected to one AP and the throughput (communication information amount) for each terminal. The throughput decreases as the number of receiving terminals increases.
  • reducing the amount of information by using the image recognition ID, the distance information, and the like is therefore beneficial not only from the viewpoint of the real-time performance described in the first embodiment but also from the viewpoint of FIG. 20B. That is, by reducing the amount of information, it is possible to increase the number of users who can receive the cyber curve mirror service at intersections without signals (in other words, the number of user devices that can receive data from each monitoring device (AP)).
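A back-of-the-envelope comparison shows why the identifier scheme helps the per-terminal throughput of FIG. 20B: N (ID, distance) pairs are orders of magnitude smaller than any image frame. The byte counts below are illustrative assumptions, not values from this document.

```python
# Assumed sizes: one moving body encodes to a 4-bit ID plus a distance
# code, padded to 2 bytes; a compressed 320x240 frame is taken as ~30 kB.
BYTES_PER_BODY = 2
QVGA_JPEG_BYTES = 30_000

def payload_bytes(n_bodies):
    """Size of the identifier-based data signal for n_bodies moving bodies."""
    return n_bodies * BYTES_PER_BODY

# With 5 moving bodies the identifier payload is 10 bytes, thousands of
# times smaller than sending a frame.
ratio = QVGA_JPEG_BYTES / payload_bytes(5)
```

Under these assumptions, each AP could serve far more user devices within the same throughput budget than an image-forwarding design.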
  • FIG. 21 is a sequence diagram illustrating an example of a communication procedure between an access point (AP) and a wireless LAN terminal in a wireless LAN.
  • the AP transmits a beacon frame by broadcast, and the wireless LAN terminal receives the beacon frame for a certain period of time, and searches for an AP that matches the ESSID (Extended Service Set Identifier). Then, the wireless LAN terminal authenticates with the AP in a predetermined procedure, and then starts receiving data.
  • the beacon frame or the like includes the MAC (Media Access Control) address of the AP.
  • the safety support system realizes real-time AP switching (handover) using this MAC address.
  • FIG. 22 is a plan view showing an application example to a road environment where intersections are arranged close to each other in the safety support system according to Embodiment 3 of the present invention.
  • in FIG. 22, a three-way road (T-shaped road) [1], at which a vehicle entering from the south side proceeds in the east or west direction, and a three-way road (T-shaped road) [2], at which the road extending in the west direction continues in the west or north direction, are arranged in close proximity.
  • the monitoring device 10[1] with a fisheye lens is installed at the three-way road (T-shaped road) [1] in the manner described with reference to FIG. 18C of the second embodiment, and the monitoring device 10[1] includes a wireless communication unit serving as an access point (AP1).
  • a monitoring device 10[2] with a fisheye lens is likewise installed at the three-way road (T-junction) [2], and the monitoring device 10[2] includes a wireless communication unit serving as an access point (AP2).
  • FIG. 23 is an explanatory diagram showing an operation example of the safety support system when passing through each of the three-way roads (T-shaped roads) in FIG. 22.
  • consider a case where the vehicle enters the three-way road (T-shaped road) [1] straight from the south side (step S401), turns left toward the west side (step S402), proceeds straight from the west side toward the three-way road (T-shaped road) [2] (step S403), and turns right toward the north side (step S404).
  • FIG. 23 shows how the reception sensitivity (RSSI) changes at that time.
  • the user device mounted on the vehicle first links to AP1 and approaches AP1. The RSSI at the wireless communication unit of the user device increases as the vehicle approaches AP1 (step S401). The RSSI then becomes maximum when the vehicle turns left (at the midpoint of turning the corner) (step S402). When the vehicle goes straight after turning left, the RSSI decreases according to the distance from AP1 (step S403).
  • a general wireless LAN terminal continues to maintain the link with AP1 unless it receives a beacon frame whose RSSI value is larger than that of AP1. As a result, for example, handover to AP2 is performed only after the midpoint between AP1 and AP2.
  • the safety support system according to the present embodiment, however, requires real-time performance. If the general wireless LAN handover method described above is used in a road environment such as FIG. 22, the vehicle reaches the next T-junction immediately after the handover. As a result, the user device of the approaching vehicle may not be able to secure sufficient time to notify/warn the user based on information on other moving bodies present in the blind spot. Therefore, in the safety support system according to the third embodiment, the following method using the RSSI and the MAC address of the AP is used to accelerate the handover.
  • the wireless communication unit of the user apparatus stores the MAC address of AP1, monitors the RSSI value of AP1, and detects the RSSI peak value (RSSI_p) generated when the vehicle turns left in step S402, together with an RSSI threshold value (RSSI_th) set below this peak. The wireless communication unit recognizes that the left turn has been completed by detecting that the RSSI value has decreased from the peak value (RSSI_p) to the threshold value (RSSI_th), cuts off the link with AP1, and starts the search process for the next AP. In other words, the search process for the next AP is started at the left turn of step S402.
  • since the wireless communication unit stores the MAC address of AP1 at this point, it performs an exclusion search based on this MAC address. That is, the wireless communication unit searches for a beacon frame containing a MAC address other than the stored MAC address of AP1. When such a beacon frame is found, a link is established with its source AP (here, AP2), and handover to that AP is performed.
  • after the left turn of step S402, the user apparatus no longer needs to receive data from the monitoring device 10[1], so such early handover is beneficial.
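The early-handover procedure above can be sketched as a small state machine: track the RSSI of the linked AP, detect the peak at the turn, and once the RSSI has fallen from the peak to the threshold, drop the link and accept only beacons whose MAC address differs from the stored one. The class name, the fixed drop margin standing in for RSSI_p − RSSI_th, and the sample values are assumptions of this sketch.

```python
class EarlyHandover:
    """Minimal sketch of the Embodiment-3 handover acceleration."""

    def __init__(self, linked_mac, drop_db=10):
        self.linked_mac = linked_mac   # MAC address of AP1, stored at link time
        self.rssi_peak = None          # running RSSI_p
        self.drop_db = drop_db         # assumed margin RSSI_p - RSSI_th
        self.searching = False

    def on_rssi(self, rssi):
        """Feed RSSI samples of the linked AP. Returns True once the link
        should be cut and the exclusion search started."""
        if self.rssi_peak is None or rssi > self.rssi_peak:
            self.rssi_peak = rssi
        if rssi <= self.rssi_peak - self.drop_db:   # fell to RSSI_th
            self.searching = True
        return self.searching

    def on_beacon(self, mac):
        """During the search, accept only an AP other than the stored one
        (the exclusion search); returns True on handover."""
        if self.searching and mac != self.linked_mac:
            self.linked_mac = mac      # hand over to the new AP (AP2)
            self.searching = False
            self.rssi_peak = None
            return True
        return False
```

Compared with waiting for a stronger beacon, this cuts the link at the corner itself, leaving the full approach to the next T-junction for receiving blind-spot data from AP2.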
  • the safety support system according to the present embodiment has a monitoring device (10) that is provided alongside a curve mirror and includes a monitoring camera (27, 28), an image processing unit (21), and a first wireless communication unit (23), and a user device (40, 50) that is carried by a user and includes a second wireless communication unit (41, 51) and an information processing unit (47, 48).
  • the image processing unit executes first to third processes. In the first process, N (N is an integer equal to or greater than 1) moving bodies existing within a predetermined image recognition range are detected from the captured image of the monitoring camera.
  • in the second process, the type of the moving body is determined for each of the N moving bodies, and a first identifier having a value corresponding to the determination result is generated from the first identifiers (image recognition IDs) for which a plurality of values are defined in advance.
  • in the third process, the distance from a predetermined reference position is determined for each of the N moving bodies based on the coordinates on the captured image, and a second identifier having a value corresponding to the determination result is generated from the second identifiers (distance information) for which a plurality of values are defined in advance.
  • the first wireless communication unit transmits a data signal including N sets of first and second identifiers generated for N mobile objects.
  • the second wireless communication unit receives the data signal transmitted from the first wireless communication unit, and the information processing unit recognizes the presence state of the moving bodies based on the N sets of first and second identifiers included in the data signal and performs a predetermined process for notifying the user of the presence state of the moving bodies.
  • the monitoring device does not transmit the captured image itself to the user device, but instead transmits, as identifiers, the type of each moving body detected from the captured image and its distance from the predetermined reference position.
  • as a result, the monitoring device can reduce the amount of transmission data, and the user device can recognize the presence state of the moving bodies with a small amount of data. Consequently, real-time performance is improved and safety can be improved. In addition, since identifiers are used instead of the captured image itself, privacy can be protected.
  • the user device further includes an azimuth detecting unit (49) for detecting the traveling azimuth of the user, and the first wireless communication unit further adds a third identifier (azimuth information) indicating the imaging direction of the monitoring camera to the data signal and transmits it.
  • the user apparatus can determine whether or not the received data signal is a necessary data signal by comparing the detection result of the azimuth detecting unit with the third identifier.
  • in addition, a monitoring device can be provided in an arbitrary imaging direction on an arbitrary curve mirror, and the number of monitoring devices is not limited to one; a plurality of monitoring devices can be added.
  • the user device further includes an image display unit (11, 45), and the information processing unit sets icons or symbols respectively corresponding to the plurality of values defined in advance as the first identifier, and displays, on the image display unit, the icon or symbol corresponding to the first identifier at the coordinates corresponding to the second identifier for each of the N sets of first and second identifiers.
  • the user apparatus further includes an audio output unit (12, 45) or a vibration unit (13, 45), and the information processing unit controls the audio output unit or the vibration unit based on the N sets of first and second identifiers. Thereby, it becomes possible to notify the user of the presence state of the moving bodies by a method other than vision, and attention can sometimes be drawn with a higher degree of recognition.
  • the image recognition range (range based on “+ marker”) in the first processing of the image processing unit can be arbitrarily set. Accordingly, it is possible to customize the extent to which the moving body is notified to the user according to the situation such as the intersection.
  • the user is not limited to a vehicle driving vehicle such as a car but may be a pedestrian or the like. That is, for example, it is possible to improve safety for pedestrians (for example, children, elderly people, etc.) who carry mobile phones equipped with a navigation system or the like.
  • the image processing unit divides the captured image for each predetermined viewing angle, and executes the first to third processes described above for each of the plurality of divided captured images.
  • the first wireless communication unit adds a third identifier for each of the divided captured images and transmits the corresponding data. In this way, even when the monitoring camera includes a wide-angle lens or a fisheye lens having a viewing angle of, for example, 90° or more, by dividing the processing into a plurality of captured images and specifying the imaging orientation of each captured image, the user apparatus can determine whether each received data signal is a necessary data signal.
  • another safety support system according to the present embodiment has a plurality of monitoring devices (10a, 10b, and the like), each provided alongside a curve mirror and including a monitoring camera (27, 28), an image processing unit (21), and a first wireless communication unit (23), and a user device (40, 50) that is carried by a user and includes a second wireless communication unit (41, 51) and an information processing unit (47, 48). Each image processing unit in the plurality of monitoring apparatuses executes first to third processes. In the first process, N (N is an integer of 1 or more) moving bodies existing within a predetermined image recognition range are detected from the captured image of its own monitoring camera.
  • in the second process, the type of the moving body is determined for each of the N moving bodies, and a first identifier having a value corresponding to the determination result is generated from the first identifiers (image recognition IDs) for which a plurality of values are defined in advance.
  • in the third process, the distance from a predetermined reference position is determined for each of the N moving bodies based on the coordinates on the captured image, and a second identifier having a value corresponding to the determination result is generated from the second identifiers (distance information) for which a plurality of values are defined in advance.
  • each first wireless communication unit in the plurality of monitoring devices transmits a data signal including the N sets of first and second identifiers generated for the N moving bodies by the image processing unit corresponding to that first wireless communication unit.
  • the second wireless communication unit receives the data signals transmitted from the first wireless communication units in the plurality of monitoring devices, and the information processing unit recognizes the presence state of the moving bodies based on the N sets of first and second identifiers included in the data signals and performs a predetermined process for notifying the user of the presence state of the moving bodies.
  • another safety support system is configured such that, under the safety support system of (1-1), the user device further receives data signals from a plurality of monitoring devices.
  • as the number of monitoring devices increases, the amount of data received by the user device increases accordingly; however, since the safety support system uses identifiers, the amount of data can be reduced. As a result, as in the case of (1-1) above, safety can be improved and privacy can be protected.
  • the user device further includes an orientation detection unit (49) for detecting the traveling direction of the user, and each of the first wireless communication units in the plurality of monitoring devices further adds a third identifier (azimuth information) indicating the imaging direction of its own monitoring camera to its own data signal and transmits it.
  • the user apparatus can determine whether or not the information is necessary for each data signal when receiving the data signal from a plurality of monitoring apparatuses.
  • the plurality of monitoring devices include first and second monitoring devices respectively provided alongside curve mirrors arranged at different intersections.
  • Each first wireless communication unit in the plurality of monitoring devices further transmits the data signal with device identification information (MAC address) for identifying the monitoring device added thereto.
  • the second wireless communication unit further monitors the wireless signals transmitted from the respective first wireless communication units of the first and second monitoring devices, and detects the device identification information included in each wireless signal and the radio wave intensity of each wireless signal. The information processing unit performs processing on the data signal from the first monitoring device (10[1]); when the radio wave intensity of the wireless signal from the first monitoring device has decreased from its peak value by a predetermined value, the processing on the data signal from the first monitoring device is stopped, and the second wireless communication unit waits to receive a wireless signal containing device identification information different from that of the first monitoring device.
  • when the second wireless communication unit receives a wireless signal from the second monitoring device (10[2]), the information processing unit performs processing on the data signal from the second monitoring device.
  • thereby, necessary data signals can be determined. That is, for a plurality of monitoring devices installed at the same intersection, whether each data signal is necessary can be determined by the third identifier, and for a plurality of monitoring devices installed at different intersections, the necessity of each data signal can be determined by the device identification information and the radio wave intensity.
  • the safety support device (monitoring device) according to the present embodiment includes a monitoring camera (27, 28), an image processing unit (21), and a first wireless communication unit (23).
  • the image processing unit executes first to third processes.
  • in the first process, N (N is an integer equal to or greater than 1) moving bodies existing within a predetermined image recognition range are detected from the captured image of the monitoring camera.
  • in the second process, the type of the moving body is determined for each of the N moving bodies, and a first identifier having a value corresponding to the determination result is generated from the first identifiers (image recognition IDs) for which a plurality of values are defined in advance.
  • in the third process, the distance from a predetermined reference position is determined for each of the N moving bodies based on the coordinates on the captured image, and a second identifier having a value corresponding to the determination result is generated from the second identifiers (distance information) for which a plurality of values are defined in advance.
  • the first wireless communication unit transmits a data signal including N sets of first and second identifiers generated for N mobile objects. Thereby, the same effect as in the case of (1-1) is obtained.
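A minimal sketch of packing and unpacking such a data signal, assuming 4-bit image recognition IDs and 4-bit distance codes with a leading count byte; the exact frame layout is not specified in this document and the format here is purely illustrative.

```python
import struct

def pack_data_signal(bodies):
    """bodies: list of (image_recognition_id, distance_code) pairs, each 0-15.
    Packs one byte per moving body: high nibble = ID, low nibble = distance."""
    payload = bytes(((rid & 0xF) << 4) | (dist & 0xF) for rid, dist in bodies)
    return struct.pack("B", len(bodies)) + payload   # leading count byte

def unpack_data_signal(signal):
    """Inverse of pack_data_signal: recover the N (ID, distance) pairs."""
    n = signal[0]
    return [((b >> 4) & 0xF, b & 0xF) for b in signal[1:1 + n]]
```

Under this assumed layout, N moving bodies cost N + 1 bytes on the air, which is what makes the identifier-based transmission so much lighter than forwarding the captured image.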
  • the first wireless communication unit further transmits the data signal by adding a third identifier (azimuth information) indicating the imaging orientation of the surveillance camera. Thereby, the same effect as in the case of the above (1-3) can be obtained.
  • the image processing unit divides the captured image for each predetermined viewing angle, and executes the first to third processes for each of the divided plurality of captured images.
  • the first wireless communication unit adds the third identifier for each of the divided captured images to the corresponding data signal and transmits it. Thereby, the same effect as in the case of (1-8) is obtained.
  • the safety support device (user device) according to the present embodiment is carried by a user and includes a second wireless communication unit (41, 51) and an information processing unit (47, 48) that receive, from an external monitoring device, information on N (N is an integer of 1 or more) moving bodies detected based on the captured image of the monitoring device. The second wireless communication unit receives a data signal containing a first identifier (image recognition ID) representing the type of each of the N moving bodies and a second identifier (distance information) representing the distance of each of the N moving bodies from a predetermined reference position.
  • the information processing unit recognizes the presence state of the moving bodies based on the N sets of first and second identifiers included in the data signal received by the second wireless communication unit, and performs a predetermined process for notifying the user of the presence state of the moving bodies. Thereby, the same effect as in the case of (1-1) is obtained.
  • an azimuth detecting unit (49) for detecting the traveling azimuth of the user is further provided, and the second wireless communication unit further receives, as part of the data signal, a third identifier (azimuth information) indicating the imaging azimuth of the captured image in addition to the N sets of first and second identifiers. Thereby, the same effect as in the case of the above (1-3) can be obtained.
  • an image display unit (11, 45) is further provided, and the information processing unit sets icons or symbols respectively corresponding to the plurality of values defined in advance as the first identifier, and displays, on the image display unit, the icon or symbol corresponding to the first identifier at the coordinates corresponding to the second identifier for each of the N sets of first and second identifiers.
  • the user apparatus further includes an audio output unit (12, 45) or a vibration unit (13, 45), and the information processing unit controls the audio output unit or the vibration unit based on the N sets of first and second identifiers. Thereby, it becomes possible to notify the user of the presence state of the moving bodies by a method other than vision, and attention can sometimes be drawn with a higher degree of recognition.
  • the user is not limited to a vehicle driving vehicle such as a car but may be a pedestrian or the like. That is, for example, it is possible to improve safety for pedestrians (for example, children, elderly people, etc.) who carry mobile phones equipped with a navigation system or the like.
  • the second wireless communication unit further monitors wireless signals transmitted from a plurality of monitoring devices including the first and second monitoring devices, and detects the device identification information (MAC address) included in each wireless signal and the radio field intensity of each wireless signal. The information processing unit processes the data signal from the first monitoring device (10[1]); when the radio field intensity of the wireless signal from the first monitoring device has decreased by a predetermined value from its peak, it stops processing that data signal, and the second wireless communication unit waits to receive a wireless signal containing device identification information different from that of the first monitoring device.
  • the safety support method according to the present embodiment is a method using the monitoring device (10) and the user devices (40, 50).
  • the monitoring device captures an image with the monitoring camera, and detects N (N is an integer of 1 or more) moving bodies from the captured image.
  • for each of the N moving bodies, the type of the moving body is determined and its distance from a predetermined reference position is measured.
  • a first identifier (image recognition ID) representing the type of the moving body and a second identifier (distance information) representing its distance are generated for each moving body, and a data signal including the N sets of first and second identifiers generated for the N moving bodies is transmitted.
  • the user device receives the data signal transmitted from the monitoring device, recognizes the presence state of the moving bodies based on the N sets of first and second identifiers included in the data signal, and notifies the user of the presence state of the moving bodies. This provides the same effect as in (1-1).
  • the monitoring apparatus further adds to the data signal a third identifier (azimuth information) indicating the imaging direction of the monitoring camera before transmitting it.
  • the user apparatus further detects the traveling direction of the user and compares the detection result with the third identifier to determine whether the received data signal is necessary. Thereby, the same effect as in the case of the above (1-3) can be obtained.
  • while the invention made by the present inventors has been described specifically based on the embodiments, the present invention is not limited to those embodiments, and various modifications can be made without departing from the scope of the invention.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and the invention is not necessarily limited to a configuration having all of the elements described.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • although a cyber curve mirror (monitoring device) is attached to an optical curve mirror installed at an intersection, T-junction, sharp curve, or other location with poor visibility, the installation location is not particularly limited; for example, it is also useful at railroad crossings, signalized intersections, and parking lot entrances or home garages with poor visibility. In such cases it becomes possible, for example, to warn against an unsafe crossing at a railroad crossing or an unsafe approach to a signalized intersection.
  • the cyber curve mirror (monitoring device) is mainly provided alongside an optical curve mirror, but it may also be applied, for example, to a parking lot: empty spaces that are not directly visible can be extracted from the video of the surveillance camera, and the information can be provided to the driver through image display or voice guidance. Since the vehicle driver can then search for an empty space without relying on direct visual confirmation, the possibility of a careless accident caused by inattention to the road ahead can be reduced.
  • a broadcast-type information system that takes privacy protection into consideration, even though it is built around a surveillance camera, is realized by displaying information as an image recognition ID such as an icon or symbol after image recognition, instead of the camera image itself.
  • the facility manager or the like can use a crime-prevention / security mode to make the cyber curve mirror (monitoring device) function as a normal monitoring camera.
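The identifier-based scheme in the bullet points above can be pictured in code. The following Python sketch shows a hypothetical user-device decoder: it unpacks N sets of (first identifier = image recognition ID, second identifier = distance information) from a broadcast payload and maps each set to an icon and a distance band for display. The byte layout, ID values, and distance bands are illustrative assumptions, not the format defined in this document.

```python
# Hypothetical user-device decoder for the broadcast data signal.
# Byte layout (assumed): [N][type_1, dist_1, ..., type_N, dist_N].
import struct

# Assumed image-recognition IDs (first identifier): type of moving body.
ICONS = {1: "car", 2: "bicycle", 3: "motorcycle", 4: "pedestrian"}

# Assumed distance codes (second identifier): quantized distance bands
# from the reference position, in meters.
DISTANCE_BANDS = {0: (0, 10), 1: (10, 20), 2: (20, 40), 3: (40, 80)}

def decode_data_signal(payload: bytes):
    """Decode N (type, distance) pairs packed as unsigned bytes."""
    n = payload[0]
    pairs = struct.unpack(f"{2 * n}B", payload[1:1 + 2 * n])
    objects = []
    for i in range(n):
        type_id, dist_code = pairs[2 * i], pairs[2 * i + 1]
        lo, hi = DISTANCE_BANDS[dist_code]
        objects.append({"icon": ICONS[type_id], "distance_m": (lo, hi)})
    return objects

# Example: two moving bodies -- a car 10-20 m away and a pedestrian 0-10 m away.
payload = bytes([2, 1, 1, 4, 0])
print(decode_data_signal(payload))
```

Because only a few bytes per moving body are broadcast, rather than image data, the signal stays small enough for real-time notification, which is the point the bullet list makes about using icons and symbols instead of camera images.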

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The system is provided with a monitoring device (10) and a user device. The monitoring device (10) first captures images with a monitoring camera and detects N moving bodies in them. Next, for each of the N moving bodies, the type of the moving body is identified and its distance from a predetermined reference position is measured. A first identifier representing the type of the moving body and a second identifier representing its distance are then generated for each of the N moving bodies, and a data signal containing the N sets of first and second identifiers is transmitted. Meanwhile, the user device receives the data signal transmitted by the monitoring device (10), recognizes the presence state of the moving bodies on the basis of the N sets of first and second identifiers included in it, and notifies the user through an image (11), voice (12), vibration (13), or the like. As a result, it is possible to reduce the occurrence of accidents with other vehicles or pedestrians at poorly visible intersections without traffic signals, T-junctions, and the like, making it possible to improve safety.

Description

Safety support system and safety support device
 The present invention relates to a safety support system and a safety support device, and for example to a safety support system and a safety support device used at an intersection or the like where a curve mirror is installed.
 Patent Document 1 discloses a driver support device mounted on an automobile. When the device determines from navigation information that the host vehicle is approaching a T-junction, it instructs its internal communication terminal to acquire information from a roadside device. In response, the roadside device installed at the T-junction detects with radar the distance between itself and another vehicle present in the host vehicle's blind spot, detects the size of the other vehicle from a camera image, and transmits this information to the host vehicle. The host vehicle displays the relative positional relationship between itself and the other vehicle on a head-up display.
 Patent Document 2 discloses an approaching-moving-body display system for automobiles. In this system, an image processing apparatus connected to a camera installed on the road transmits data such as the imaging time, the position and type of an approaching moving body, and the intersection shape to an in-vehicle device. The in-vehicle device receives the data, estimates the position of the moving body at display time, and shows it on the windshield or on a monitor screen.
 Patent Document 3 discloses a vehicle vision support system that lets the driver of a vehicle approaching an intersection recognize the current state of a blind spot area. In this system, an image processing server that handles the image data of a group of intersection cameras generates, based on various information from the intersection cameras and an in-vehicle camera, an image in which the blind spot area is composited into the in-vehicle camera image, and transmits the composite image data to an in-vehicle device.
 Patent Document 4 discloses a vehicle road condition confirmation device using a camera installed on a curve mirror. In this device, the camera side transmits captured video data to the vehicle, and the vehicle side analyzes the captured video data.
 Patent Document 5 discloses an intersection situation recognition system that grasps the state of roads in the blind spot using intersection videos from a plurality of cameras installed at the intersection. In this system, an identification number is assigned in advance to each imaging device installed at the intersection; the in-vehicle device searches for the identification number of the imaging device corresponding to the intersection video needed when entering the intersection, and uses that identification number to acquire the video.
 Patent Document 6 discloses a traffic monitoring system that displays an image of the blind spot on an image display device installed at the intersection. Specifically, the type, size, position, and moving speed of an object moving on the road are analyzed from the image of a camera installed on a building; a similar image of the object recorded in the past is retrieved based on the analysis result, composited with a predetermined background image, and displayed on the image display device.
Patent Document 1: JP 2010-26708 A
Patent Document 2: JP 2006-215911 A
Patent Document 3: JP 2009-70243 A
Patent Document 4: JP 2008-52493 A
Patent Document 5: JP 2010-55157 A
Patent Document 6: JP 2012-59139 A
 For example, an analysis of traffic accidents involving cars in 2011 classifies vehicle-to-vehicle accidents into rear-end collisions, crossing (encounter) collisions, head-on collisions, right-turn collisions, and other vehicle-to-vehicle accidents, and classifies person-to-vehicle accidents into crossing accidents and other person-to-vehicle accidents. For vehicle-to-vehicle accidents, these categories account for 6.7%, 15%, 10%, 5.5%, and 6.1% of fatal accidents, 6.3%, 28%, 5.4%, 11.2%, and 12.8% of serious-injury accidents, and 35.4%, 26%, 2.1%, 8.2%, and 16.5% of minor-injury accidents, respectively. For person-to-vehicle accidents, the two categories account for 26% and 10.2% of fatal accidents, 13.7% and 6.9% of serious-injury accidents, and 4.8% and 3.6% of minor-injury accidents, respectively. In addition, about 70% of accidents occur at intersections without traffic signals (18% at signalized intersections and 10% on single roads), and about 75% of these occur in urban areas. In other words, the main sites of accidents are unsignalized intersections in residential areas and the like.
 In recent years, brake control using image processing has been incorporated into cars, and systems capable of preventing rear-end collisions have begun to be installed in automobiles. However, as described above, countermeasures against crossing collisions, head-on collisions, right-turn collisions, and the like are still insufficient. Among these, crossing collisions account for 15% of fatal accidents, 28% of serious-injury accidents, and 26% of minor-injury accidents; compared with other accident types, the proportion of fatal and serious accidents is very large. A traffic information system (ITS) is therefore required, particularly to reduce crossing collisions. A crossing (encounter) collision is a collision that occurs when vehicles (including bicycles, motorcycles, etc.) and pedestrians entering from different directions cross paths.
 The main causes of crossing collisions at intersections and similar locations (crossroads, T-junctions, sharp curves, and road environments with poor visibility due to plants, buildings, etc.) are human factors, road environment factors, and weather deterioration. Human factors account for about 90% of the causes and typically include recognition errors (oversights) and judgment/prediction errors. A recognition error (oversight) corresponds to situations such as "I did not see the crossing vehicle", "I looked at the curve mirror but did not notice it", or "I was distracted by something else". A judgment/prediction error is a situation in which the driver recognized the other party but mispredicted its behavior, for example "I am on the priority road, so the other party should stop". Road environment factors correspond to poor visibility (obstruction by buildings, plants, or parked vehicles) and inadequate facilities (damaged curve mirrors, deficient signs or traffic safety facilities). Weather deterioration corresponds, for example, to visibility reduced by rain, fog, or snow.
 Countermeasures mainly against road environment factors include installing curve mirrors (road reflectors), securing corner cuts at intersections (that is, improving visibility at intersections), and traffic regulation using road-surface markings and regulatory signs on non-priority roads. FIGS. 24 to 26 are explanatory diagrams showing an example of how a typical curve mirror is used. FIG. 24 shows an example of the curve mirror as seen from the vehicle driver's viewpoint, FIG. 25 shows an example of the blind spot area as seen from the vehicle driver, and FIG. 26 shows an example of the image reflected in the curve mirror.
 For example, in urban areas, for approach roads with poor visibility due to fences, buildings, trees, and the like (for a curve, the opposite side; for a crossroads or T-junction, the roads entering from the left and right of the driver's own approach road), installing a curve mirror can provide the driver with information on the blind spot area shown in FIG. 25. After checking the curve mirror, the driver of an approaching vehicle decides to slow down, stop temporarily, and so on, and acts so as not to cause an accident. Likewise, when a pedestrian recognizes another approaching vehicle through the curve mirror, the pedestrian avoids danger by moving to the side of the road or by stopping to let the vehicle pass.
 However, even where such curve mirrors and road signs are installed, in reality many vehicle-to-vehicle and person-to-vehicle crossing collisions still occur, especially at unsignalized intersections. The cause, as described above, lies in human factors such as recognition errors (oversights) and judgment/prediction errors. One reason these human factors arise is that, as shown in FIG. 26, the information a curve mirror provides to vehicle drivers and pedestrians is a mirror image viewed from a distance (that is, a small reflected image) that is, moreover, reversed left to right. With such information, how it appears, and the resulting judgment, can differ from one driver or pedestrian to another.
 For example, how easily the curve mirror can be read differs depending on the eyesight of the vehicle driver or pedestrian. Because the mirror image is reversed left to right, the direction from which a car is approaching may be misjudged; attention may be paid only to the direction of the curve mirror while the opposite direction is neglected; or, when passing a pedestrian or bicycle, left and right may be confused so that the passing distance becomes dangerously small. Furthermore, it can be difficult to judge the distance to a car or other object reflected in the curve mirror.
 Like curve mirrors, road signs can become difficult to see due to aging (for example, rust or dirt) and environmental changes (for example, obstruction by trees or buildings, or a display angle shifted by wind and rain). They are also hard to see at night and, depending on the installation location, may well be overlooked. Furthermore, they can contribute to judgment/prediction errors caused by assumptions (illusions).
 To reduce such crossing collisions, it is conceivable to use the techniques shown in Patent Documents 1 to 6. However, with techniques that transmit camera image data to the vehicle, as in Patent Documents 3 to 5, the amount of image data to be transmitted becomes large, which can make it difficult to convey accurate information to the user in real time. The technique of Patent Document 6 requires an image display device at the intersection, and may also require a high-performance server for image processing, which may result in an expensive system.
 Furthermore, Patent Documents 1 and 2 give no particular consideration to the specific format in which the various pieces of information about a moving body are transmitted to the vehicle; depending on the format, the amount of information increases, and it may become difficult to convey accurate information to the user in real time. In addition, the techniques of Patent Documents 1 and 2 presuppose a limited, specific scene: for example, they assume in advance that a camera images the right side of an intersection and that information based on that image data is transmitted to a vehicle entering the intersection from below. In reality, however, there are many variations in where a camera images and from which direction a vehicle enters, so a technique for establishing these preconditions themselves is needed.
 The embodiments described below have been made in view of the above; other problems and novel features will become apparent from the description of this specification and the accompanying drawings.
 A safety support system according to one embodiment includes a monitoring device that is provided alongside a curve mirror and includes a monitoring camera, an image processing unit, and a first wireless communication unit, and a user device that is carried by a user and includes a second wireless communication unit and an information processing unit. The image processing unit executes first to third processes. In the first process, N (N is an integer of 1 or more) moving bodies existing within a predetermined image recognition range are detected in the captured image of the monitoring camera. In the second process, the type of each of the N moving bodies is determined, and a first identifier having the value corresponding to the determination result is generated from among a plurality of values defined in advance for the first identifier. In the third process, the distance of each of the N moving bodies from a predetermined reference position is determined based on its coordinates in the captured image, and a second identifier having the value corresponding to the determination result is generated from among a plurality of values defined in advance for the second identifier. The first wireless communication unit transmits a data signal including the N sets of first and second identifiers generated for the N moving bodies. The second wireless communication unit receives the data signal transmitted from the first wireless communication unit, and the information processing unit recognizes the presence state of the moving bodies based on the N sets of first and second identifiers included in the data signal and performs predetermined processing to notify the user of that presence state.
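As a rough illustration of the first to third processes on the monitoring-device side, the following Python sketch turns per-frame detections into the compact data signal of N identifier pairs. The object detector itself is out of scope here, and the coordinate-to-distance mapping, the type-ID table, and the byte layout are illustrative assumptions rather than the formats defined in this document.

```python
# Hypothetical monitoring-device encoder: per-frame detections from image
# recognition are converted into the data signal of N (first, second)
# identifier pairs. All tables and layouts below are assumed for illustration.

TYPE_IDS = {"car": 1, "bicycle": 2, "motorcycle": 3, "pedestrian": 4}

def distance_code_from_y(y_pixel: int, frame_height: int = 480) -> int:
    """Assumed mapping: lower in the frame means closer to the reference position."""
    ratio = y_pixel / frame_height
    if ratio > 0.75:
        return 0  # nearest distance band
    if ratio > 0.50:
        return 1
    if ratio > 0.25:
        return 2
    return 3      # farthest distance band

def build_data_signal(detections) -> bytes:
    """detections: list of (kind, x, y) tuples produced by image recognition."""
    payload = bytearray([len(detections)])       # leading byte: N
    for kind, _x, y in detections:
        payload.append(TYPE_IDS[kind])           # first identifier (type)
        payload.append(distance_code_from_y(y))  # second identifier (distance)
    return bytes(payload)

detections = [("car", 320, 400), ("pedestrian", 100, 460)]
print(build_data_signal(detections).hex())
```

The key design point the paragraph above makes is visible here: whatever the camera resolution, each moving body is reduced to a couple of bytes, so the broadcast stays small and real-time regardless of N.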
 According to this embodiment, the occurrence of vehicle-to-vehicle and person-to-vehicle crossing collisions at poorly visible unsignalized intersections, T-junctions, and the like can be reduced, making it possible to improve safety.
• An explanatory diagram showing an outline of the safety support system according to one embodiment of the present invention.
• A block diagram showing a schematic configuration example of the monitoring device in the safety support system according to Embodiment 1 of the present invention.
• A block diagram showing a schematic configuration example of a user device in the safety support system according to Embodiment 1 of the present invention.
• A block diagram showing a schematic configuration example of a user device different from FIG. 3A in the safety support system according to Embodiment 1 of the present invention.
• A plan view showing an example of an ideal installation of a curve mirror.
• A plan view showing an example of how the monitoring device (cyber curve mirror) of FIG. 2 is installed for the curve mirror of FIG. 4A.
• A plan view showing an example of a realistic installation of a curve mirror.
• A plan view showing an example of how the monitoring device (cyber curve mirror) of FIG. 2 is installed for the curve mirror of FIG. 5, and a corresponding perspective view (FIG. 6A and its perspective).
• A plan view showing an installation of the monitoring device (cyber curve mirror) of FIG. 2 for the curve mirror of FIG. 5 that differs from FIG. 6A, and a corresponding perspective view (FIG. 7A and its perspective).
• A diagram showing an example of the data format of the wireless signal broadcast by the monitoring device of FIG. 2.
• Four diagrams showing examples of the detailed contents of FIG. 8.
• Two explanatory diagrams showing examples of the processing of the distance measurement unit in the monitoring device of FIG. 2.
• A diagram showing an example of display contents based on the image recognition ID of FIG. 9A in the user device of FIG. 3A or 3B.
• A diagram showing an example of display contents based on the extended information of FIG. 9D in the user device of FIG. 3A or 3B.
• A plan view showing an example of traffic conditions at an intersection, used to explain an example of the display contents of the user device of FIG. 3A or 3B.
• A diagram showing an example of the display screen of the user device of FIG. 3A or 3B under the traffic conditions of FIG. 12A.
• A diagram showing an example of a display screen applying FIG. 12B under the traffic conditions of FIG. 12A.
• A schematic diagram showing an example of the hierarchical structure, in units of intersections, of the data format when a plurality of monitoring devices installed at an intersection transmit data based on the data format of FIG. 8.
• Two flow diagrams showing examples of the detailed processing of the monitoring device of FIG. 2.
• A flow diagram showing an example of the detailed processing of the user device of FIG. 3A or 3B.
• A plan view showing an installation example of the monitoring device on a sharp curve and an operation example in that case, in the safety support system according to Embodiment 2 of the present invention.
• A plan view showing an example of a typical arrangement of curve mirrors at a three-way junction.
• A plan view showing another example of a typical arrangement of curve mirrors at a three-way junction.
• A plan view showing an installation example of the monitoring device at a three-way junction and an operation example in that case, in the safety support system according to Embodiment 2 of the present invention.
• A plan view showing an application example of FIG. 18C.
• Two explanatory diagrams showing examples of typical characteristics of a wireless LAN.
• A sequence diagram showing an example of the communication procedure between an access point (AP) and a wireless LAN terminal in a wireless LAN.
• A plan view showing an application example to a road environment in which intersections are located close together, in the safety support system according to Embodiment 3 of the present invention.
• An explanatory diagram showing an operation example of the safety support system when passing through each three-way junction (T-junction) of FIG. 22.
• Three explanatory diagrams showing examples of how a typical curve mirror is used.
 In the following embodiments, the description will be divided into a plurality of sections or embodiments where necessary for convenience. Unless otherwise specified, these are not unrelated to one another; one is a modification, detail, or supplementary explanation of part or all of another. Also, in the following embodiments, when the number of elements or the like (including counts, numerical values, amounts, and ranges) is mentioned, the invention is not limited to that specific number, and the number may be greater or smaller, except where explicitly stated or where the number is clearly limited in principle.
Furthermore, in the following embodiments, the constituent elements (including element steps and the like) are, needless to say, not necessarily indispensable, except where they are explicitly specified or are clearly considered indispensable in principle. Similarly, when the shapes, positional relationships, and the like of the constituent elements are mentioned, shapes and the like that are substantially approximate or similar to them are included, except where explicitly specified or where this is clearly not the case in principle. The same applies to the numerical values and ranges mentioned above.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In all the drawings for describing the embodiments, the same members are, as a rule, denoted by the same reference numerals, and repeated description thereof is omitted.
<< Summary of the Embodiments >>
FIG. 1 is an explanatory diagram showing an overview of a safety support system according to an embodiment of the present invention. The safety support system according to the present embodiment realizes a function equivalent to that of a curve mirror installed at an intersection with poor visibility, a T-junction, or the like by combining a surveillance camera, wireless communication, wireless terminal devices, and AR (Augmented Reality) technology, thereby reducing the occurrence of vehicle-to-vehicle and vehicle-to-pedestrian collision accidents at such locations. AR technology is a form of virtual reality that adds, deletes, emphasizes, or attenuates information in the real environment surrounding the user at that moment, literally augmenting the real world as seen by humans.
Specifically, as shown in FIG. 1, the safety support system includes a monitoring device 10 that is installed alongside a curve mirror and contains a surveillance camera, and user devices carried by users (vehicle drivers, pedestrians, etc.). Using the image captured by the surveillance camera, the monitoring device 10 determines, by image recognition processing, the type of each moving body (pedestrian, vehicle, etc.) and the distance of each moving body from the intersection, appends information on the imaging azimuth of the surveillance camera, and broadcasts the result by wireless communication. Such a monitoring device 10 is referred to in this specification as a "cyber curve mirror". A user device (for example, a navigation system, a drive recorder, a mobile phone, or a smartphone) receives the information from the monitoring device 10 and notifies the user, via an image display unit 11, an audio output unit 12, a vibration unit 13, or the like, of the presence of moving bodies (vehicles, pedestrians, etc.) approaching from an entry road that is poorly visible or lies in an invisible blind spot.
That is, the safety support system converts the information that would be reflected in the curve mirror (which differs depending on the approach direction) into the minimum necessary information (the presence or absence of pedestrians, vehicles, and the like entering the intersection, together with their position information) and broadcasts it to the user devices. Because this information is broadcast to many users, transmitting the captured images as they are would raise problems such as privacy infringement, in addition to a loss of real-time performance due to the large amount of data. In view of these problems, the monitoring device 10 converts the recognition result, after image recognition of the captured image, into a specific icon/symbol ID (code) signal or the like and transmits it. From this specific ID (code) signal, the user device recognizes whether the detected object is a vehicle, a pedestrian, or the like, and warns the user according to the recognition result via the image display unit 11, which displays an icon, symbol, or pictogram, the audio output unit 12, which emits speech or a warning sound, the vibration unit 13, which has a predetermined vibration pattern, and so on.
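As a concrete illustration of this ID (code) scheme, the sketch below shows one minimal way the monitoring side could reduce a recognition result to a compact code and the user side could map that code back to an icon. The code table and the two-byte layout are hypothetical; this document leaves the concrete values to the data format of FIG. 8.

```python
# Hypothetical code table -- the concrete values are not fixed by this document.
OBJECT_CODES = {"vehicle": 0x01, "bicycle": 0x02, "pedestrian": 0x03}
CODE_TO_ICON = {code: name + " icon" for name, code in OBJECT_CODES.items()}

def encode_detection(kind: str, distance_m: int) -> bytes:
    """Monitoring side: reduce one recognition result to a 2-byte code
    (object type, coarse distance) instead of broadcasting the raw image."""
    return bytes([OBJECT_CODES[kind], min(distance_m, 255)])

def decode_detection(msg: bytes):
    """User side: recover the icon to display and the distance to announce."""
    return CODE_TO_ICON[msg[0]], msg[1]
```

Broadcasting only such codes keeps the payload to a few bytes per detected object, which is what preserves real-time behaviour and avoids sending personally identifiable imagery.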
In this way, vehicle drivers, pedestrians, and the like can obtain information on moving bodies (approaching vehicles, bicycles, motorcycles, pedestrians, etc.) present in poorly visible locations or blind-spot areas not only by directly viewing the curve mirror, but also, based on the images captured by the surveillance camera, through sound, vibration, and the like in addition to vision. As a result, drivers and pedestrians entering an intersection can be made aware of danger more reliably, human errors at intersections such as recognition errors (oversights) and judgment or prediction errors (assumptions, illusions) can be reduced, and accidents at intersections can be prevented before they occur. Moreover, in this safety support system the monitoring device 10 performs the image processing and transmits the result, so the user device side can be realized simply by installing dedicated application software on a general-purpose wireless information terminal (navigation system, smartphone, mobile phone, drive recorder, etc.). As a result, an ITS system can be constructed at low cost.
Furthermore, as supplementary information, the safety support system can provide road sign information relevant when entering an intersection and after entering it (going straight, turning left, or turning right) through the image display unit 11 and the audio output unit 12. Specifically, the monitoring device 10 adds supplementary information, set in advance, as part of the above-described icon/symbol ID (code) signal and broadcasts it to the user devices, and the user device recognizes this supplementary information and notifies the user. This makes it possible to notify the user of, and call attention to, road sign information that might otherwise be missed because of inadequate signage, poor visibility, or oversight, further improving safety when passing through an intersection.
(Embodiment 1)
<< Schematic Configuration of the Monitoring Device (Safety Support Device) >>
FIG. 2 is a block diagram showing a schematic configuration example of the monitoring device in the safety support system according to Embodiment 1 of the present invention. As shown in FIG. 1, the monitoring device 10 of FIG. 2 is installed alongside a curve mirror. The monitoring device 10 includes a sensor unit 20, an image processing / signal generation unit 21, an azimuth information generation unit 22, a wireless communication unit 23, an extended information generation unit 24, a processor unit (CPU) 25, and the like, which are connected by a bus 26. The sensor unit 20 includes either or both of a camera sensor 27 and an infrared sensor 28, and may additionally include an ultrasonic radar 29 or the like.
The camera sensor 27 and the infrared sensor 28 are surveillance cameras and capture images in a pre-designated imaging azimuth. Using the infrared sensor 28 (that is, infrared images) improves image recognition (detection of moving bodies, determination of their types, etc.) at night, for example. Using the ultrasonic radar 29 improves the accuracy of distance measurement (that is, determination of the positions of moving bodies). From the viewpoint of cost, however, the sensor unit 20 may be provided with the camera sensor 27 alone.
The image processing / signal generation unit 21 includes an image recognition unit 30 and a distance measurement unit 31. The image recognition unit 30 performs image recognition processing on the image captured by the surveillance camera of the sensor unit 20 using an existing image recognition algorithm, detecting each moving body and determining its type (vehicle, pedestrian, bicycle, motorcycle, etc.). The distance measurement unit 31, as will be described in detail later, measures, for each moving body recognized by the image recognition unit 30, the distance between a reference position (for example, the entrance of the intersection) and that moving body, based on its coordinates in the captured image. The measurement accuracy of this distance is not particularly limited; an arbitrary unit can be set between one meter and several tens of meters. When more accurate distance measurement is required, the ultrasonic radar 29 or the like may be used as described above. The image recognition unit 30 only needs to detect moving bodies within a predetermined distance; since this distance corresponds to coordinates in the captured image, the image recognition unit 30 only needs to detect moving bodies within a predetermined coordinate range of the captured image.
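Because the distance is read from coordinates in the captured image, the measurement can be as simple as a lookup from the bounding-box position to a pre-calibrated distance band. The band boundaries below are illustrative values only, assuming a camera whose image rows grow toward the near side; an actual installation would calibrate them per site.

```python
# Hypothetical calibration: each row band of the image maps to a coarse
# ground distance (in metres) from the reference position (intersection entrance).
DISTANCE_BANDS = [(400, 0), (300, 10), (200, 20), (100, 30)]  # (min_row, metres)

def distance_from_row(row: int, detection_limit_row: int = 100):
    """Return the coarse distance for the bottom row of a detected object's
    bounding box, or None when the object lies outside the predetermined
    coordinate range that the image recognition unit monitors."""
    if row < detection_limit_row:
        return None  # beyond the predefined detection distance
    for min_row, metres in DISTANCE_BANDS:
        if row >= min_row:
            return metres
    return None
```

This table-lookup style matches the coarse (metres to tens of metres) accuracy the text allows; a radar-assisted installation would replace the table with measured ranges.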
Hereinafter, the case where the camera sensor (surveillance camera) 27 is used will be described as an example.
The azimuth information generation unit 22 generates the imaging azimuth of the camera sensor (surveillance camera) 27 using, for example, a gyrocompass. That is, the camera sensor 27 and the gyrocompass are installed together on the monitoring device 10, and the gyrocompass detects the imaging direction of the camera sensor 27. In practice, however, when the monitoring device 10 is installed alongside a curve mirror, the imaging azimuth of the camera sensor 27 is also fixed. A gyrocompass is therefore not strictly necessary; a fixed value representing the imaging azimuth may instead be stored in an azimuth information storage unit 32 at installation time. In this case, the operation associated with the gyrocompass becomes unnecessary, so the cost and power consumption of the monitoring device 10 can be reduced.
The wireless communication unit 23 generates a data signal in a prescribed data format, described later, using the information on each moving body (its type and distance) obtained by the image processing / signal generation unit 21 and the information from the azimuth information generation unit 22, and broadcasts the data signal as a radio signal. At this time, information from the extended information generation unit 24 may be added to the data signal as necessary. The extended information generation unit 24 includes a traffic information storage unit 34, which holds in advance information such as the road signs present in the imaging azimuth of the camera sensor 27. In this example, the wireless communication unit 23 includes a wireless interface [1] 33a based on IEEE 802.11a/b/g/n or the like (so-called wireless LAN) and a wireless interface [2] 33b based on IEEE 802.11p (so-called WAVE (Wireless Access in Vehicular Environments)).
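The prescribed data format itself is described later (FIG. 8), so the byte layout below is only an assumed stand-in to show how the type/distance pairs, the imaging azimuth, and an optional road-sign code from the extended information generation unit could be packed into one broadcast frame.

```python
import struct

def build_frame(azimuth_deg: int, objects, sign_code: int = 0) -> bytes:
    """Pack (type_code, distance_m) pairs with the camera azimuth and an
    optional road-sign code. The big-endian layout is an assumption:
    azimuth (2 bytes), object count (1 byte), 2 bytes per object, sign (1 byte)."""
    frame = struct.pack(">HB", azimuth_deg, len(objects))
    for type_code, distance_m in objects:
        frame += struct.pack(">BB", type_code, distance_m)
    return frame + struct.pack(">B", sign_code)

def parse_frame(frame: bytes):
    """User side: unpack a frame built by build_frame."""
    azimuth_deg, count = struct.unpack(">HB", frame[:3])
    objects = [tuple(frame[3 + 2 * i: 5 + 2 * i]) for i in range(count)]
    return azimuth_deg, objects, frame[-1]
```

A frame of this shape stays under a dozen bytes for typical intersections, which is consistent with the document's emphasis on broadcasting minimal information rather than images.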
The monitoring device 10 of FIG. 2, for example, captures an image at fixed intervals using the surveillance camera in the sensor unit 20 and, after the processing of the image processing / signal generation unit 21 on the captured image, broadcasts a data signal at fixed intervals using the wireless communication unit 23. The overall sequence is controlled by the processor unit (CPU) 25. The image processing / signal generation unit 21 can be realized either by software processing using a general-purpose processor or by hardware processing using a processing circuit dedicated to image processing. From the viewpoint of ensuring real-time performance, hardware processing is preferable.
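One pass of that fixed-interval capture-recognize-broadcast sequence can be sketched as follows; camera, recognizer, and radio are hypothetical interfaces standing in for the sensor unit 20, the image processing / signal generation unit 21, and the wireless communication unit 23.

```python
def monitoring_cycle(camera, recognizer, radio, azimuth_deg):
    """One capture -> recognize -> broadcast pass of the monitoring device.
    The processor unit (CPU) would invoke this at a fixed period; the
    interfaces of the three collaborators are assumptions for illustration."""
    image = camera.capture()
    detections = recognizer.detect(image)   # e.g. [(type_code, distance_m), ...]
    radio.broadcast(azimuth_deg, detections)
    return detections
```

Keeping the cycle this small is what makes a hardware implementation of the recognition step attractive: the broadcast rate is then bounded by the recognizer's latency alone.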
<< Schematic Configuration of the User Device (Safety Support Device) >>
FIGS. 3A and 3B are block diagrams showing two different schematic configuration examples of the user device in the safety support system according to Embodiment 1 of the present invention. FIG. 3A shows a user device 40 for an automobile, for example. The user device 40 of FIG. 3A includes a wireless communication unit 41, an information processing unit 47, and a user notification unit 45. The information processing unit 47 may include at least one of a mobile phone / smartphone 42, a navigation system 43, and a drive recorder 44. The user notification unit 45 corresponds to the image display unit 11, the audio output unit 12, the vibration unit 13, and the like described with reference to FIG. 1. As in the case of FIG. 2, the wireless communication unit 41 includes a wireless interface [1] 46a based on so-called wireless LAN and a wireless interface [2] 46b based on so-called WAVE.
The information processing unit 47 includes an azimuth detection unit 49 (that is, a compass function) that detects the device's own traveling direction. For example, a mobile phone / smartphone 42 with a GPS (Global Positioning System) function, or a navigation system 43 or drive recorder 44 that normally incorporates a GPS function, already has such an azimuth detection unit 49 as part of its GPS function. Here, the case where the information processing unit 47 is the navigation system 43 will be described as an example.
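The traveling direction from the azimuth detection unit 49, combined with the imaging azimuth broadcast by the monitoring device, lets the user device judge which broadcasts describe roads it cannot see. The following is only a sketch of one possible relevance test, under the assumption that a camera looking roughly perpendicular to the user's travel direction covers a blind-spot road; the 45-135 degree window is an assumed tuning value, not something this document specifies.

```python
def is_blind_spot_feed(camera_azimuth_deg: float, own_heading_deg: float) -> bool:
    """True when the broadcasting camera looks into a road roughly
    perpendicular to the user's own travel direction, i.e. a road the
    user cannot see directly. The tolerance window is an assumption."""
    diff = abs((camera_azimuth_deg - own_heading_deg + 180.0) % 360.0 - 180.0)
    return 45.0 <= diff <= 135.0
```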
The navigation system 43 is connected to the wireless communication unit 41 by a dedicated or general-purpose interface (USB or the like). After connecting to the wireless communication unit 41, the navigation system 43 transitions to an urban driving mode or "cyber curve mirror mode", or it uses its own map information and supplementary information to determine automatically that, for example, the vehicle has left national or prefectural roads, and then transitions to the urban driving mode or "cyber curve mirror mode". In conjunction with the wireless communication unit 41, it thereby enters a search mode for establishing a communication link with the monitoring device 10.
When an automobile traveling through an urban area in the urban driving mode or "cyber curve mirror mode" approaches an urban intersection or the like where a cyber curve mirror (that is, a monitoring device 10) is installed, the user device 40 mounted on the automobile detects the radio signal transmitted from the cyber curve mirror. The user device 40 starts communication synchronization with the monitoring device 10 based on a predetermined wireless communication standard (WAVE, wireless LAN, etc.) and establishes a link. The user device 40 can thereby receive the information (data) broadcast from the monitoring device 10 located at the intersection in the direction of approach, process the received information in the information processing unit 47, and convey it to the vehicle driver through the desired user interface (user notification unit 45).
FIG. 3B shows a user device 50 for pedestrians and the like, for example. The user device 50 of FIG. 3B includes a wireless communication unit 51, an information processing unit 48, and a user notification unit 45. The information processing unit 48 is typically constituted by a mobile phone / smartphone 52 that includes an azimuth detection unit 49 (that is, a compass function) for detecting its own traveling direction. A mobile phone / smartphone 52 with a GPS function has such an azimuth detection unit 49 as part of its GPS function. The user notification unit 45 corresponds to the image display unit 11, the audio output unit 12, the vibration unit 13, and the like described with reference to FIG. 1. As in the case of FIG. 2, the wireless communication unit 51 includes a wireless interface [1] 46a based on so-called wireless LAN.
The wireless communication unit 51 and the user notification unit 45 can also be implemented as functions of the mobile phone / smartphone 52. Moreover, the information processing unit 48 is not necessarily limited to the mobile phone / smartphone 52; it suffices for it to include at least the azimuth detection unit 49, a function for processing the information from the wireless communication unit 51, and a function for controlling the user notification unit 45 according to the processing result.
For example, when a user such as a pedestrian or a cyclist uses the "cyber curve mirror mode", a smartphone or mobile phone with wireless LAN is used, with application software for the cyber curve mirror installed on it in advance. For example, children and students on their way to and from school carry a smartphone or mobile phone with wireless LAN and set it to this cyber curve mirror mode.
With this arrangement, when a child or student carrying the user device 50 approaches an intersection without traffic lights while walking to or from school in an urban area, the user device 50 receives the cyber curve mirror information over the wireless LAN and notifies or warns the child or student of the relevant information via the user notification unit 45. Specifically, the notification or warning is given, for example, through the display, speaker, or vibration function of the mobile phone or smartphone. In addition to being able to recognize moving bodies (vehicles, pedestrians) through the optical curve mirror at the intersection, the child or student can thus also recognize information on those moving bodies through the user device 50. As a result, the awareness of children and students can be heightened, supporting safe and secure travel to and from school.
In recent years, moreover, an increasing number of pedestrians run or walk while listening to music through the headphones of a mobile phone, smartphone, music player, or the like, and the safety support system of the present embodiment can be used in such cases as well. For example, when ordinary headphones are used, a warning can be given by inserting a warning sound into the music being played; alternatively, by using headphones, a head pad, or a helmet equipped with a vibration device, the earpiece can be vibrated to signal danger, making danger avoidance easy.
Regarding the user notification unit 45 of FIG. 3A, the display method is not limited to the image display unit ordinarily used by a navigation system, smartphone, or the like; display methods linked to drive recorder devices, whose technology has advanced in recent years, to a car's instrument panel, or to a head-up display may also be used. Regarding the user notification unit 45 of FIGS. 3A and 3B, it is also possible to arrange one or more vibration devices on glasses, a headband, or a bicycle or motorcycle helmet and connect them to the navigation system, smartphone, or the like by wire or wirelessly (a Bluetooth device, etc.), so that the warning is given by the vibration devices. In this case, when two vibration devices are used, for example, either the left or right device can be vibrated according to the position of the object, or both can be vibrated simultaneously when objects are present on both the left and right.
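The left/right selection just described can be sketched as a small dispatch rule; the side labels are hypothetical, standing in for whatever the recognition result reports about the approach direction of each object.

```python
def select_vibrators(object_sides):
    """Decide which of two vibration devices (e.g. on a helmet) to drive:
    the side an object approaches from, or both when objects are present
    on both the left and the right."""
    sides = set(object_sides)
    if {"left", "right"} <= sides:
        return ["left", "right"]
    return sorted(sides)
```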
<< Installation Method of the Monitoring Device (Cyber Curve Mirror) >>
FIG. 4A is a plan view showing an example of an ideal installation of curve mirrors, and FIG. 4B is a plan view showing an example of an installation of the monitoring devices (cyber curve mirrors) of FIG. 2 corresponding to the curve mirrors of FIG. 4A. At an ordinary intersection (a crossroads is described here), it must be possible, from any entry road, to look at a curve mirror and obtain information on vehicles and pedestrians entering from the other entry roads that cannot be seen directly to the left and right. Therefore, as shown in FIG. 4A, four curve mirrors are desirably installed at the intersection, each on a diagonal. This is because the distortion of the convex mirror surface of an optical curve mirror differs depending on the mounting location and on the direction from which the mirror is viewed. In FIG. 4A, for example, to check the blind spot on the left side with respect to the traveling direction, the vehicle driver looks at the curve mirror (C) at the far right of the intersection; to check the blind spot on the right side, the driver looks at the curve mirror (B) at the far left of the intersection.
The same applies to cyber curve mirrors: as shown in FIG. 4B, the monitoring devices (cyber curve mirrors) are desirably installed alongside the four curve mirrors shown in FIG. 4A. In this case, the imaging directions of the surveillance cameras of the monitoring devices (cyber curve mirrors) 10a, 10b, 10c, and 10d installed at the four corners (A, B, C, D) of the intersection are the same as the directions shown in the corresponding curve mirrors. With this installation method, simply by mounting every monitoring device (cyber curve mirror) uniformly at the same angle (imaging angle and imaging frame pattern direction), the conditions of the image recognition and distance measurement processing applied to the captured images can be shared by all the monitoring devices. Handling in actual use therefore becomes easy.
In practice, however, curve mirrors are often installed as shown in FIG. 5. FIG. 5 is a plan view showing an example of a realistic installation of curve mirrors. In FIG. 5, two-faced (two-direction) optical curve mirrors are installed on poles at position B at the far left and position A at the near right with respect to the traveling direction of the vehicle driver. Since two poles can thus cover four directions, this arrangement is advantageous from the viewpoint of cost and the like.
FIG. 6A is a plan view showing an example of an installation of the monitoring devices (cyber curve mirrors) of FIG. 2 corresponding to the curve mirrors of FIG. 5. In FIG. 6A, monitoring devices (cyber curve mirrors) 10a, 10b, 10c, and 10d are installed alongside the four curve mirrors, two at each of the two diagonal locations, as in FIG. 5. The imaging direction of the surveillance camera of each monitoring device (cyber curve mirror) is the same as the direction shown in the corresponding curve mirror. In this case, however, in order to image the required intersection area, the depression angle of the imaging direction must differ between the two monitoring devices installed at each location (for example, 10a and 10b).
FIG. 6B is a perspective view corresponding to FIG. 6A. As shown in FIG. 6B, when each monitoring device (cyber curve mirror) is installed so that the direction shown in its optical curve mirror matches the imaging direction of its surveillance camera, the depression angles of the imaging directions must differ between, for example, the monitoring devices 10a and 10b (and likewise 10c and 10d). If the depression angle of the monitoring device 10a is θ1 and that of the monitoring device 10b is θ2, then θ1 > θ2. The relationship between distance and coordinates in the captured image then differs between the monitoring devices 10a and 10b, so individual, complicated adjustments become necessary in the image recognition and distance measurement processing.
For example, to make the coordinates of the corresponding intersection entrances coincide between the image captured by the surveillance camera of the monitoring device 10a and that of the monitoring device 10b, the two devices must be installed at different depression angles, and this adjustment requires considerable labor. Furthermore, even if the coordinates of the intersection entrances are made to coincide, the depression angles still differ, so the relationship between distance and coordinates in the captured image differs between the image of the monitoring device 10a and that of the monitoring device 10b. To measure distance correctly from the captured images, the conditions of the distance measurement processing must therefore be changed between the monitoring devices 10a and 10b, and this adjustment also requires considerable labor.
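The dependence of the distance-to-coordinate mapping on the depression angle follows from simple geometry: on a flat road, a ray leaving a camera at height h with depression angle θ reaches the ground at distance h / tan θ from the pole base, so cameras mounted at θ1 > θ2 see the same ground point at different image rows. A small check of this relation, with an illustrative mounting height:

```python
import math

def ground_distance(height_m: float, depression_deg: float) -> float:
    """Distance from the pole base to where a ray at the given depression
    angle meets a flat road; larger depression angles look nearer."""
    return height_m / math.tan(math.radians(depression_deg))
```

For a 5 m pole, a 45-degree ray lands 5 m out while a 30-degree ray lands about 8.7 m out, which is why two devices at different depression angles need separately calibrated distance tables.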
It is of course possible to install the cyber curve mirrors as in FIGS. 6A and 6B, but the installation shown in FIGS. 7A and 7B is more desirable. FIG. 7A is a plan view showing an example of an installation method of the monitoring devices (cyber curve mirrors) of FIG. 2, corresponding to the curve mirrors of FIG. 5, that differs from FIG. 6A. FIG. 7B is a perspective view corresponding to FIG. 7A. In FIG. 7A, monitoring device 10c takes over the role (imaging region) of monitoring device 10a in FIG. 6A, and conversely monitoring device 10a takes over the role (imaging region) of monitoring device 10c in FIG. 6A. That is, if the four corners of the intersection are labeled clockwise as A (first corner), D (second corner), B (third corner), and C (fourth corner), then, for example, of the two devices installed at A (first corner), monitoring device 10c is installed so as to image the region whose intersection entrance lies between D (second corner) and B (third corner), and the other, monitoring device 10d, is installed so as to image the region whose intersection entrance lies between B (third corner) and C (fourth corner).
As a result, as shown in FIGS. 7A and 7B, each of the monitoring cameras of monitoring devices 10a, 10b, 10c, and 10d images the region beyond the road width inside the intersection. Therefore, when all the monitoring devices are installed at the same depression angle θ2, the coordinates of the intersection entrance on each captured image can be kept the same, and the relationship between distance and coordinates on each captured image can also be made to match. That is, each monitoring device can be installed uniformly at the same angle (shooting angle, shooting frame pattern direction), and the conditions of the image recognition and distance measurement processing can be made the same for all devices. Consequently, the labor involved in installing and adjusting each monitoring device can be greatly reduced.
<< Schematic Operation of the Monitoring Device (Safety Support Device) >>
For example, in FIG. 7A, the field of view of the intersection as seen by a vehicle driver approaching from below (the approach direction) is as shown in FIG. 1. In this case, in FIG. 7A, a vehicle approaching from below (the approach direction) needs information from monitoring device (cyber curve mirror) 10b and monitoring device 10c. Likewise, a vehicle approaching from the left needs information from monitoring device 10a and monitoring device 10d. Thus, the user device (40 or 50 in FIGS. 3A and 3B) must accurately determine, according to its own approach direction, which of the plurality of monitoring devices 10a, 10b, 10c, and 10d to obtain information from via the wireless signal.
Therefore, in the safety support system of the first embodiment, communication by wireless signal is performed using a data format such as that shown in FIG. 8. FIG. 8 is a diagram illustrating an example of the data format of the wireless signal broadcast by the monitoring device of FIG. 2, and FIGS. 9A, 9B, 9C, and 9D are diagrams each showing an example of its detailed contents. As shown in FIG. 8, the wireless signal transmitted from the monitoring device 10 of FIG. 2 contains n (n is an integer of 1 or more) sets of mobile object information [1] (60[1]) through mobile object information [n] (60[n]), direction information 61, and extended information 62. Each mobile object information [k] (k is an integer from 1 to n) contains an image recognition ID [k] and distance information [k]. Although not particularly limited, each image recognition ID, each piece of distance information, the direction information 61, and the extended information 62 are each 4 bits of information, with meanings assigned, for example, as shown in FIGS. 9A, 9B, 9C, and 9D.
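As a rough illustration of the FIG. 8 layout, the following sketch packs and unpacks such a payload. The exact bit ordering and framing are not specified in the text, so the choice here (two 4-bit codes per byte, mobile-object entries first, then direction information 61 and extended information 62 in a final byte) is purely an assumption:

```python
# Hypothetical packing of the FIG. 8 broadcast payload: each mobile-object
# entry is one byte (image recognition ID in the high nibble, distance
# information in the low nibble); the final byte holds direction info 61
# (high nibble) and extended info 62 (low nibble).

def pack_frame(mobile_objects, direction, extended):
    """mobile_objects: list of (image_recognition_id, distance_code),
    each value a 4-bit integer (0..15)."""
    data = bytearray()
    for rec_id, dist in mobile_objects:
        data.append((rec_id & 0xF) << 4 | (dist & 0xF))
    data.append((direction & 0xF) << 4 | (extended & 0xF))
    return bytes(data)

def unpack_frame(data):
    *entries, tail = data
    mobile_objects = [((b >> 4) & 0xF, b & 0xF) for b in entries]
    return mobile_objects, (tail >> 4) & 0xF, tail & 0xF

# One vehicle ("0010") at the 10 m step ("0100"), camera facing west
# ("0011"), no extended information.
frame = pack_frame([(0b0010, 0b0100)], 0b0011, 0b0000)
objs, direction, ext = unpack_frame(frame)  # round-trips the same values
```

Whatever the actual on-air layout, a payload of this shape stays within a few bytes per frame, which is what makes the low-bandwidth broadcast described below possible.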
As shown in FIG. 9A, each image recognition ID represents the type of moving object determined by image recognition. Each image recognition ID is generated by the image recognition unit 30 of the monitoring device 10 of FIG. 2. For example, "0000" indicates that no moving object (target object/human body) is present, and "0001" indicates that a moving object is present but its type is still being determined. A safety support system such as that of this embodiment requires real-time processing; if image recognition takes too long (for example, several seconds), a situation could arise in which no information at all is passed to the user device. For example, a vehicle traveling at 20 km/h moves at about 5 m/s, so a car 10 m before the intersection enters the intersection in 2 seconds.
Therefore, in FIG. 9A, "0000" and "0001" are defined so that information indicating the presence or absence of a moving object can be selected and generated even while image recognition processing is still underway (that is, while the type of the moving object has not yet been determined). FIG. 9A also includes definitions so that, when the image recognition processing completes, an image recognition ID can be selected and generated based on the result. As some examples, "0010" is selected and generated for a vehicle, "0011" for a bicycle/motorbike, and "0100" for a single pedestrian. The image recognition processing is not particularly limited, but is typically performed using a method such as template matching.
Next, in parallel with the image recognition processing, distance information as shown in FIG. 9B is selected and generated from the captured image. This distance information is generated by the distance measurement unit 31 of the monitoring device 10 of FIG. 2. If high accuracy is required at this point, the distance between the moving object and the intersection entrance may be detected with high precision using an ultrasonic radar or the like, as described with reference to FIG. 2 and elsewhere. Here, on the assumption that the monitoring device is fixed, a method for generating distance information simply and at high speed from the coordinates of the moving object within the captured image (frame) of the monitoring camera is described with reference to FIGS. 10A and 10B. FIGS. 10A and 10B are explanatory diagrams illustrating an example of the processing performed by the distance measurement unit of the monitoring device of FIG. 2.
FIG. 10A shows, taking the intersection of FIG. 7A as an example, an example of the detailed positional relationships of the moving objects involved in the image recognition and distance measurement processing of monitoring devices (cyber curve mirrors) 10b and 10c. FIG. 10B shows an example of the images captured by the monitoring cameras of monitoring devices 10c and 10b of FIG. 10A. Here, a method of selecting and generating the distance information of a moving object is described, taking the image captured by monitoring device 10c shown in FIG. 10B as an example.
First, when monitoring device 10c is installed at the intersection, an administrator or the like determines the position of a "+ marker" on the captured image in order to fix the maximum value of the distance information. In this example, the "+ marker" is set at the point corresponding to a maximum value of 15 m. The point of this "+ marker" varies depending on the installation angle (depression angle) of the monitoring device (monitoring camera). Specifically, the reference position on the captured image (here, the 0 m line corresponding to the intersection entrance) changes with the installation angle (depression angle) of the monitoring device (monitoring camera), and the relationship between coordinates relative to that reference position on the captured image and the corresponding actual distances also changes. For example, when the depression angle of the monitoring camera is large, the view is an angle looking down from above, whereas when the depression angle is small, the view is closer to a horizontal line of sight, so the relationship between coordinates on the captured image and actual distance changes.
For example, when the monitoring devices 10a, 10b, 10c, and 10d are arranged as in FIGS. 7A and 7B, the installation angle (depression angle) of each monitoring device (monitoring camera) can be made the same, as described above. In this case, on the assumption that each monitoring device is installed at the same height (corresponding to the height of the curve mirror in FIG. 7B), the installation angle (depression angle) θ2 is also the same, so the relationship between coordinates on the captured image and actual distance is the same across the monitoring devices, and the reference position on the captured image (the 0 m line corresponding to the intersection entrance) is also nearly the same. Strictly speaking, however, this reference position varies slightly with the width of each road at the crossroads. For this reason, the coordinates of the reference position (0 m) can be set in common for all monitoring devices, but to improve accuracy, the coordinates of the reference position (0 m) may instead be set individually for each monitoring device.
Once the coordinates of the reference position (0 m) are determined, the administrator or the like next determines the position of the "+ marker" (here, the 15 m maximum). If the installation angles (depression angles) of the monitoring devices (monitoring cameras) are the same, the relationship between coordinates on the captured image and actual distance is uniquely determined, so the position of the "+ marker" can be set automatically. However, there are cases where one wishes to set the "+ marker" representing the maximum distance arbitrarily. In that case, for example, the administrator may simply input to the monitoring device the distance at which the "+ marker" is to be set.
On the other hand, when the monitoring devices 10a, 10b, 10c, and 10d are arranged as in FIG. 6A, for example, the installation angle (depression angle) differs from one monitoring device (monitoring camera) to another. In this case, the administrator or the like registers the installation angle (depression angle), the reference position, and so on for each monitoring device, and sets the "+ marker". Once these are registered, the monitoring device can calculate the relationship between coordinates on the captured image and actual distance using a predetermined formula. In this case, however, the relationship between coordinates on the captured image and actual distance, as well as the scale of the moving object on the captured image, differ for each monitoring device, so variation may arise in the accuracy of the distance measurement and of the image recognition described above. For this reason, as described above, it is desirable to use the installation method of FIGS. 7A and 7B. Note that when the installation method of FIGS. 7A and 7B is used, it is also possible, depending on the case, to hold the relationship between coordinates on the captured image and actual distance in a table prepared in advance and measure distance based on that table, rather than obtaining the relationship from a predetermined formula.
For example, when the "+ marker" distance is set to 15 m, distance information as shown in FIG. 9B can be selected and generated within the range from the reference position (0 m) to the "+ marker" (15 m). In the example of FIG. 9B, in light of the comparison with an optical curve mirror, distance measurement that does not require particularly high accuracy is performed, and distance is expressed in six steps: 0 m ("0000"), 1 m ("0001"), 3 m ("0010"), 5 m ("0011"), 10 m ("0100"), and 15 m ("0101"). That is, with an optical curve mirror, although it depends on the individual, distance can usually be perceived only roughly as near or far, so it suffices to measure distance with somewhat higher accuracy than that.
Here, if the accuracy of the distance measurement were made extremely high, the data size (number of bits) of the distance information of FIG. 9B would increase, and the resources required on the monitoring device side performing the measurement and on the user device side receiving the distance information could also grow; the example of FIG. 9B therefore uses an accuracy on the order of a few meters. Of course, the accuracy and number of steps of the distance measurement, as well as its maximum value (the position of the "+ marker"), can be changed as needed.
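The six-step coding above can be sketched as a simple quantizer. The text gives only the step values (0, 1, 3, 5, 10, 15 m) and their codes; mapping a measurement to the largest step it has reached is an assumption made for illustration:

```python
# Six-step distance coding of FIG. 9B: map a measured distance within the
# 0-15 m range (reference position to "+ marker") to a 4-bit code.

DISTANCE_STEPS = [(0.0, 0b0000), (1.0, 0b0001), (3.0, 0b0010),
                  (5.0, 0b0011), (10.0, 0b0100), (15.0, 0b0101)]

def distance_code(measured_m):
    """Return the code of the largest step not exceeding the measurement."""
    code = DISTANCE_STEPS[0][1]
    for step_m, step_code in DISTANCE_STEPS:
        if measured_m >= step_m:
            code = step_code
    return code
```

With this scheme a vehicle measured at, say, 7.2 m is reported with the 5 m code, which is exactly the "accuracy on the order of a few meters" trade-off described above.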
In the image captured by monitoring device 10c shown in FIG. 10B, the distance measurement unit 31 of FIG. 2 determines in which of the six steps defined in FIG. 9B the bottom edge of the detection frame (for example, the template frame) of the moving object (here, an automobile) used in the image recognition processing lies, and thereby determines the distance information of FIG. 9B. The point (coordinates) set by the "+ marker" serves as the starting point for image recognition and distance measurement. The image recognition unit 30 of FIG. 2 starts generating the image recognition ID when the bottom edge of the detection frame of the moving object reaches the coordinates of the "+ marker" (that is, when it enters the range from the reference position (0 m) to the "+ marker" position (for example, 15 m)).
The position of the "+ marker" can be set arbitrarily as long as it lies within the captured image. This setting changes the timing at which the user device is notified of the presence of a moving object. For example, to give notice as early as possible (to give the user information with plenty of margin), the "+ marker" position may be set farther away (higher up on the captured image). Also, when intersections are adjacent to one another, for example, the corner of the neighboring intersection may be set as the "+ marker" position (several meters to several tens of meters away).
Through the above processing, the image recognition ID and its distance information are selected and generated. As described above, while no moving object is present within the predetermined range based on the "+ marker" position (here, 0 m to 15 m), "0000" of FIG. 9A continues to be generated as the image recognition ID. When the bottom edge of the detection frame of a moving object enters the predetermined range (here, 0 m to 15 m), image recognition processing starts and "0001" of FIG. 9A is generated as the image recognition ID. This "0001" continues to be generated until the determination of the type of the corresponding moving object is completed, or until the moving object leaves the predetermined range (here, 0 m to 15 m) before its type has been determined.
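The ID selection just described can be summarized as a small per-frame decision. This is only a sketch of the rule stated in the text, not of the actual image recognition unit 30:

```python
# Per-frame image recognition ID selection: "0000" while no object is inside
# the 0-15 m range, "0001" from the moment the detection frame enters the
# range until classification completes, then the classified type's code.

ID_NONE, ID_DETECTING = 0b0000, 0b0001

def recognition_id(in_range, classified_type):
    """classified_type: the 4-bit type code once classification completes,
    or None while classification is still running."""
    if not in_range:
        return ID_NONE           # object absent or has left the range
    if classified_type is None:
        return ID_DETECTING      # present, type still being determined
    return classified_type       # e.g. 0b0010 for a vehicle
```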
In this case, "0001" continues to be generated as the image recognition ID even when, for example, the moving object cannot be classified or does not match any registered image recognition ID. Therefore, even when image recognition takes longer than expected, or the object cannot be identified for some reason, the monitoring device can still transmit to the user device at least the fact that some moving object is present, together with its distance information. As a result, the user can be warned of possible danger in real time, improving the user's safety.
Furthermore, as shown in FIG. 8 and elsewhere, each piece of mobile object information 60[k] has a small data size. If, for example, the monitoring device were to transmit the captured image as-is, a communication bandwidth of at least several MB/s would be required. In the safety support system of the first embodiment, by contrast, assuming for example that several tens of image frames are captured per second and that mobile object information 60[k] is generated and transmitted for each frame, a communication bandwidth of only a few hundred B/s to a few kB/s is sufficient. Real-time responsiveness can therefore be fully ensured.
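A back-of-the-envelope check of this bandwidth claim follows, under the assumption (not fixed by the text) that each mobile-object entry occupies one byte (4-bit ID plus 4-bit distance) and direction plus extended information occupy one more:

```python
# Rough bandwidth estimate for the FIG. 8 broadcast, assuming one byte per
# mobile-object entry and one byte for direction + extended information.

frames_per_second = 30   # "several tens of frames" per second (assumed value)
max_objects = 8          # assumed n for a busy approach

bytes_per_frame = max_objects * 1 + 1
bandwidth = frames_per_second * bytes_per_frame   # bytes per second

print(bandwidth)  # 270, i.e. a few hundred B/s vs. several MB/s for raw video
```

Even with generous values for n and the frame rate, the result stays orders of magnitude below video transmission, which is the point of sending compact IDs rather than images.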
FIG. 9C shows the detailed contents of the direction information 61 of FIG. 8. The direction information 61 represents the imaging direction of the monitoring camera within the monitoring device 10 and is normally set permanently in the direction information storage unit 32 (FIG. 2) of the monitoring device 10 when it is installed. For example, when a monitoring device is installed so as to image the north direction, "0000" is set in its direction information storage unit 32; when it is installed so as to image the northeast direction, "0100" is set. Since the monitoring device 10 of FIG. 2 transmits the direction information 61 in this way, the user devices 40 and 50 of FIGS. 3A and 3B can select the necessary information by comparison with their own direction detection unit (compass function) 49. For example, when the traveling direction of the user device is north, it suffices to process, from among the data (FIG. 8) transmitted by the plural monitoring devices, the data whose direction information 61 indicates east ("0001") and the data whose direction information 61 indicates west ("0011").
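The user-device side of this selection can be sketched as a direction filter. The codes for north, east, and west are taken from the examples above; the code for south and the full blind-spot table are assumptions for illustration only:

```python
# Sketch of selecting relevant broadcasts by comparing each packet's
# direction info 61 against the user device's own compass heading.

NORTH, EAST, WEST = 0b0000, 0b0001, 0b0011   # codes given in the text
SOUTH = 0b0010                               # assumed code

# Left/right blind-spot imaging directions for each direction of travel.
BLIND_SPOTS = {NORTH: {EAST, WEST}, SOUTH: {EAST, WEST},
               EAST: {NORTH, SOUTH}, WEST: {NORTH, SOUTH}}

def select_relevant(broadcasts, heading):
    """broadcasts: list of (direction_info, payload) pairs received from
    the monitoring devices at the intersection."""
    wanted = BLIND_SPOTS[heading]
    return [payload for direction, payload in broadcasts
            if direction in wanted]

# Heading north: only the east-facing (10b) and west-facing (10c) data remain.
kept = select_relevant([(EAST, "from 10b"), (WEST, "from 10c"),
                        (NORTH, "from 10d")], NORTH)
```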
FIG. 9D shows the detailed contents of the extended information 62 of FIG. 8. The extended information 62 includes, for example, information on road signs present in the imaging direction of the monitoring camera within the monitoring device 10. The extended information 62 is normally fixed at the time the monitoring device 10 is installed and set permanently in the traffic information storage unit 34 (FIG. 2) of the monitoring device 10. However, the extended information 62 set in the traffic information storage unit 34 may be updated as appropriate, for example when the environment later changes. For example, when vehicle entry is prohibited in the imaging direction of a monitoring device, "0010" is set in its traffic information storage unit 34; when the road is one-way, "0101" is set.
<< Schematic Operation of the User Device (Safety Support Device) >>
FIG. 11A is a diagram showing an example of display contents based on the image recognition ID of FIG. 9A in the user device of FIG. 3A or 3B, and FIG. 11B is a diagram showing an example of display contents based on the extended information of FIG. 9D in the user device of FIG. 3A or 3B. FIG. 12A is a plan view showing an example of the traffic situation at an intersection for explaining an example of the display contents of the user device of FIG. 3A or 3B, and FIG. 12B is a diagram showing an example of the display screen of the user device of FIG. 3A or 3B in the traffic situation of FIG. 12A.
It is assumed here that an image display unit is provided in advance as the user notification unit 45 of the user devices 40 and 50 of FIG. 3A or 3B, and that image display application software for the safety support system of the first embodiment is installed in the user devices 40 and 50. As shown in FIG. 11A, the application software includes an image library (icons, symbols, pictograms, etc.) corresponding to each value of the image recognition ID. The user devices 40 and 50 recognize the distance information (the distance from the intersection corner) received paired with each image recognition ID according to the data format of FIG. 8, and, reflecting this information, display a screen such as the one shown in FIG. 12B.
In the example display screen of FIG. 12B, the image library entry corresponding to each image recognition ID is displayed, arranged one-dimensionally, at the position separated from the intersection corner by the distance indicated by the distance information, overlaid for example on a 3D navigation display or on an image of the road ahead captured by the vehicle's on-board camera (either video or still images). In other words, using AR technology, each moving object present in the blind spot is drawn over the live image.
In more detail, taking FIG. 12A as an example, the application software of the user devices 40 and 50 first receives the data from each of the monitoring devices 10a, 10b, 10c, and 10d and recognizes the direction information 61 contained in each. The application software can also recognize, based on its own direction detection unit (compass function) 49, that the direction of travel is north, and therefore processes the information for the left and right directions that form its blind spots (here, west and east). As a result, the application software processes the data from monitoring device 10c, whose direction information 61 is set to "west", and the data from monitoring device 10b, whose direction information 61 is set to "east".
At this point, for example, the data (FIG. 8) transmitted from monitoring device 10c contains, together with the direction information 61 (west), mobile object information [1] (60[1]) consisting of, for example, image recognition ID [1] = "0010" (vehicle) and distance information [1] = "0100" (10 m). Based on this, as shown in FIG. 12B, the application software displays the vehicle symbol/icon of FIG. 11A at a position 10 m to the west of the entrance of the western intersection (for example, the intersection corner), that corner serving as the base point.
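The conversion from a received (ID, distance) pair to an overlay entry can be sketched as follows. The icon names and the returned record shape are hypothetical; only the code-to-meters table and the ID-to-type mapping come from FIGS. 9A and 9B:

```python
# Hypothetical overlay construction on the user-device side: look up the
# icon for the image recognition ID and the distance step for the distance
# code, and anchor the entry at the given intersection corner.

CODE_TO_METERS = {0b0000: 0, 0b0001: 1, 0b0010: 3,
                  0b0011: 5, 0b0100: 10, 0b0101: 15}
ICON_LIBRARY = {0b0010: "vehicle", 0b0011: "bicycle/motorbike",
                0b0100: "pedestrian"}   # hypothetical library keys

def overlay_entry(rec_id, dist_code, corner):
    """corner: which intersection corner ('west' or 'east') is the base
    point for placing the symbol."""
    return {"icon": ICON_LIBRARY.get(rec_id, "unknown"),
            "offset_m": CODE_TO_METERS[dist_code],
            "from_corner": corner}

# The FIG. 12B case: a vehicle 10 m beyond the western intersection entrance.
entry = overlay_entry(0b0010, 0b0100, "west")
```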
In this way, the user not only sees other approaching vehicles, pedestrians, and so on by looking at the optical curve mirror, but can also recognize information about them accurately and in real time via a user device realizable with a general-purpose information terminal or wireless communication device. As a result, the user's awareness is heightened, human errors such as oversights and misjudgments that can occur when viewing an optical curve mirror can be reduced, and accidents at intersections and the like can be prevented. Furthermore, as shown in FIGS. 11A and 12B, the user device obtains the information about each moving object as an image recognition ID and displays it by substituting an entry from the image library (icons, symbols, pictograms, etc.); this reduces the received data size (thereby improving real-time responsiveness) and also protects the privacy of the moving objects.
FIG. 13 is a diagram showing an example of a display screen applying FIG. 12B to the traffic situation of FIG. 12A. As indicated by reference numeral 80 in FIG. 13, when a moving object is far away and does not fit within the range of the display screen (here, within 10 m), the application software of the user device may display its symbol and distance (as text), updating only the distance text, and then move the symbol according to the distance once the object comes within range. As a use of the extended information 62, as indicated by reference numeral 81 in FIG. 13, the application software may display the road sign corresponding to the extended information 62 only when, for example, the driver is about to turn left (this may be linked, for example, to the turn signal or steering wheel position information). Furthermore, as indicated by reference numeral 82 in FIG. 13, while data indicating the presence of a moving object (that is, any image recognition ID other than "0000" of FIG. 9A) is being received from a monitoring device being processed, the application software may display a continuous alert.
 Although FIGS. 12B and 13 show examples of a 3D display screen, the invention is of course not limited to this; the user can also be notified of moving-object information on a 2D display screen such as that of FIG. 12A. If no image display unit is present, the presence of a moving object, or its presence on the left or right, may instead be announced by voice, warning sound, vibration, or the like. Such notification by voice, warning sound, or vibration may also be performed in parallel with the image display when an image display unit is present, to further heighten the user's awareness. When voice or warning sounds are used, the left and right speakers of an audio system, for example, can announce "vehicle entering from the left" or "left-turn road, height restriction". When vibration is used, two vibration devices may be installed on the left and right, for example, and the left one vibrated when a vehicle enters from the left.
 《Data Format Hierarchy》
 FIG. 14 is a schematic diagram showing an example of the hierarchical structure of the data format, taking an intersection as the unit, when a plurality of monitoring devices installed at the intersection transmit data based on the data format of FIG. 8. This data format defines the application-layer format. In FIG. 14, the components of the data format are the image recognition ID, distance information, direction information, and extended information described in FIG. 8. Each field is defined here as a 4-bit code, but the bit length may be changed as necessary.
 First, within the performance limits of a single monitoring camera, n moving objects are recognized in the image. Each of the n recognized objects is represented by a pair consisting of an image recognition ID and its distance information. Once the monitoring camera has assigned an image recognition ID, that ID does not change; only the distance information is updated. As described with reference to FIG. 10B and elsewhere, this updating continues until the moving object with that image recognition ID leaves the predetermined range based on the "+" marker on the captured image, or until communication ends.
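The per-camera tracking rule above can be sketched as a small table keyed by image recognition ID, in which only the distance entry is refreshed until the object leaves the "+"-marker range. The class and method names are illustrative assumptions.

```python
# Sketch of the per-camera tracking rule: once an image recognition ID is
# assigned it is kept, and only the distance code is refreshed each frame
# until the object leaves the predetermined range or communication ends.
class ObjectTable:
    def __init__(self):
        self._distances = {}          # image recognition ID -> distance code

    def update(self, rec_id, distance_code):
        self._distances[rec_id] = distance_code   # ID fixed, distance updated

    def frame_out(self, rec_id):
        self._distances.pop(rec_id, None)         # left the "+"-marker range

    def snapshot(self):
        return dict(self._distances)

table = ObjectTable()
table.update(0b0010, 0b0101)   # object first recognized at one distance
table.update(0b0010, 0b0100)   # same ID, only the distance is refreshed
```
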
 Next, at the level of an individual monitoring device, as described in FIG. 8, n units of moving-object information (8 bits each), each a pair of an image recognition ID and distance information, are generated for the n recognized objects, and direction information (4 bits) and extended information (4 bits) are appended to them. These pieces of information are generated by the image processing / signal generation unit 21, the direction information generation unit 22, and the extended information generation unit 24 of FIG. 2, and the CPU 25 of FIG. 2 assembles them into the data of one monitoring device. The direction information (4 bits) and extended information (4 bits) are normally determined once and for all when the cyber curve mirror is installed, whereas the n units of moving-object information (8 bits each) change over time according to the situation. Specifically, the distance information and the number n of moving objects change as appropriate. Even when no moving object is present, however, the image recognition ID "0000" of FIG. 9A is transmitted with n = 1.
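The bit layout just described can be sketched as follows. Only the field sizes (4-bit ID, 4-bit distance, 4-bit direction, 4-bit extension) and the "send n = 1 with ID 0000 when nothing is present" rule come from the text; the packing helper and byte ordering are illustrative assumptions.

```python
# Minimal sketch of the per-monitoring-device data of FIGS. 8 and 14:
# n moving-object units of 8 bits (4-bit image recognition ID + 4-bit
# distance code), followed by 4-bit direction info and 4-bit extended info.
ID_NO_OBJECT = 0b0000      # "no moving object" (FIG. 9A)
ID_RECOGNIZING = 0b0001    # "object present, recognition in progress"

def pack_device_data(objects, direction4, extension4):
    """objects: list of (image_recognition_id, distance_code) 4-bit pairs."""
    if not objects:                      # even with no object, send n = 1
        objects = [(ID_NO_OBJECT, 0)]
    out = bytearray()
    for rec_id, dist in objects:
        out.append((rec_id & 0xF) << 4 | (dist & 0xF))   # one 8-bit unit
    out.append((direction4 & 0xF) << 4 | (extension4 & 0xF))
    return bytes(out)

# One recognized object plus the device's fixed direction/extension codes.
payload = pack_device_data([(0b0010, 0b0011)], direction4=0b0001, extension4=0b0000)
```
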
 Next, at the level of an installation site, the number of monitoring cameras installed varies with the layout of the intersection and other conditions. For example, two to four monitoring cameras are installed at an unsignalized intersection or crossroads, and one or two at a T-junction. The number also depends on the type of monitoring camera. For example, with recent advances in camera lens technology and image correction technology, monitoring cameras exist that combine a fisheye lens or 360-degree fisheye lens with advanced digital image processing; in that case the number of cameras can be reduced further.
 In this way, the number m of monitoring cameras varies per installation site, and the data from these m monitoring cameras (monitoring devices) is transmitted toward the user devices. Each monitoring device transmits so as to avoid data collisions in a manner appropriate to the wireless communication scheme. For example, when communicating over IEEE 802.11a/b/g/n (so-called wireless LAN), multiple frequency-band channels are available, so the monitoring devices can each communicate with the user devices on a different channel. Channels already allocated to existing uses, or unoccupied channels, can be used for this purpose.
 On the other hand, when communicating over IEEE 802.11p (so-called WAVE), multiplexing by frequency-band channel is not possible, so data from multiple monitoring devices must be transmitted by time division. In this case, for example, different time slots are assigned to the monitoring devices so that each transmits during a different period. Alternatively, one of the monitoring devices may gather its own data and the data from the other devices into a single frame and transmit it toward the user devices. In that case the extended information can be reduced to a single entry on the shared frame, as information common to the installation site (e.g., the intersection).
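The single-frame option can be sketched as below. The byte layout (device-count header, per-device length prefixes, trailing shared extension code) is entirely an assumption for illustration; the text specifies only that one device gathers the others' data into one frame with one shared extended-information entry.

```python
# Illustrative sketch of the single-frame aggregation option for IEEE
# 802.11p: one monitoring device concatenates its own per-device data with
# the data received from the other devices and carries the extended
# information once per installation site.
def build_site_frame(device_payloads, site_extension4):
    """device_payloads: list of per-device byte strings (cf. FIG. 14)."""
    frame = bytearray([len(device_payloads) & 0xFF])  # device count header
    for payload in device_payloads:
        frame.append(len(payload) & 0xFF)             # per-device length
        frame.extend(payload)
    frame.append(site_extension4 & 0xF)               # shared extended info
    return bytes(frame)

site = build_site_frame([b"\x23\x10", b"\x00\x30"], site_extension4=0b1001)
```
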
 《Detailed Operation of the Monitoring Device (Safety Support Device)》
 FIGS. 15A and 15B are flowcharts showing an example of the detailed processing of the monitoring device of FIG. 2. Since the concrete methods used for image recognition, distance measurement, wireless communication, and so on are conventional, their detailed description is omitted here. In FIG. 15A, the m monitoring devices 10 installed at an intersection first capture images using the monitoring cameras (camera sensor 27 and infrared sensor 28) included in the sensor unit 20 of FIG. 2 (steps S101a, S101b). Each monitoring device 10 then uses the image processing / signal generation unit 21 of FIG. 2 to recognize each moving object in the captured image and measure its distance (steps S102a, S102b). The details of this processing are described with reference to FIG. 15B.
 Subsequently, each monitoring device 10 generates the per-device data shown in FIGS. 8 and 14 (steps S103a, S103b). Specifically, each monitoring device 10 appends to the processing result of the image processing / signal generation unit 21 the direction information 61 generated by the direction information generation unit 22 of FIG. 2 and the extended information 62 generated by the extended information generation unit 24 of FIG. 2. Next, taking as an example the case described in FIG. 14 in which the data for an installation site (intersection) is gathered into a single frame, one of the monitoring devices 10 obtains the data from the other monitoring devices 10 and generates the per-site data (step S104), adding extended information at this point as necessary. That monitoring device 10 then transmits the per-site data using the wireless communication unit 23 of FIG. 2 (step S105).
 Image capture by the monitoring cameras in steps S101a and S101b is performed periodically, for example at several tens of frames per second or more, and the processing from steps S102a and S102b onward is performed each time.
 In the image recognition / distance measurement processing of steps S102a and S102b described above, the processing shown in FIG. 15B is performed. The image recognition unit 30 of FIG. 2 first detects moving objects present within a predetermined range of the captured image (step S201). The predetermined range is, as described in FIG. 10B, the range based on the position of the "+" marker. A moving object is detected from the presence or absence of time-series variation in the coordinates of an object across frames. When one or more moving objects are detected in step S201, the image processing / signal generation unit 21 selects a first moving object from among them, in an appropriately chosen order, as the processing target (step S202).
 Next, the image recognition unit 30 begins determining the type of the moving object being processed (step S203). If the type determination completes within a predetermined period, the image recognition unit 30 generates the image recognition ID corresponding to the result, based on FIG. 9A (steps S204 to S206). If the determination does not complete within the predetermined period, the image recognition unit 30 generates image recognition ID "0001" (moving object present, recognition in progress) (steps S204, S207). In parallel with step S203, the distance measurement unit 31 of FIG. 2 measures the distance of the moving object being processed using, for example, the method described in FIG. 10B (step S208), and generates the corresponding distance information based on FIG. 9B (step S209).
 Subsequently, the image processing / signal generation unit 21 pairs the image recognition ID generated in step S206 or S207 with the distance information generated in step S209 to form one unit of moving-object information (step S211). When no moving object was detected in step S201, the image processing / signal generation unit 21 likewise generates image recognition ID "0000" (no moving object) (step S210) and forms one unit of moving-object information containing it.
 Next, the image processing / signal generation unit 21 judges whether the type determination has been completed for all moving objects detected in step S201 (step S212). If it has (that is, when moving-object information [1] (60[1]) through moving-object information [n] (60[n]) of FIG. 8 have all been generated), the image recognition / distance measurement processing ends and control returns to FIG. 15A. If moving objects remain whose type has not been determined, the image processing / signal generation unit 21 selects the next one as the processing target (step S214) and returns to steps S203 and S208. Before doing so, however, if a predetermined period based on the communication interval of the wireless communication unit 23 (called the limit period) has already elapsed, the image recognition / distance measurement processing ends even though undetermined moving objects remain, and control returns to FIG. 15A (step S213).
 The predetermined period of step S204 can be set, for example, to the limit period divided by the number of moving objects detected in step S201, or to a shorter period. In the latter case, or in the former case when some objects finish type identification early, a corresponding surplus remains within the limit period, and that surplus can be used to continue processing objects whose type could not be determined on the first pass. In this case, the processing targets may be selected in order of proximity to the intersection, for example.
 By using the processing of FIG. 15B, even when some moving object cannot be image-recognized for whatever reason, its presence can still be reported to the user side by means of image recognition ID "0001".
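The time-budgeted loop of FIG. 15B can be sketched as follows. `classify()` and `measure_distance()` are hypothetical stand-ins for the image recognition unit 30 and distance measurement unit 31; the per-object budget of "limit period divided by object count" is one of the options the text mentions.

```python
import time

# Sketch of the FIG. 15B loop: each detected object gets a share of the
# limit period for type determination; if classification does not finish
# in time, image recognition ID "0001" (object present, recognition in
# progress) is emitted instead.
ID_NO_OBJECT, ID_RECOGNIZING = 0b0000, 0b0001

def recognize_frame(objects, classify, measure_distance, limit_period_s):
    if not objects:                          # step S210: no object detected
        return [(ID_NO_OBJECT, 0)]
    budget = limit_period_s / len(objects)   # per-object share of the limit
    results, start = [], time.monotonic()
    for obj in objects:
        rec_id = classify(obj, budget)       # None if not done in time
        if rec_id is None:
            rec_id = ID_RECOGNIZING          # steps S204, S207
        results.append((rec_id, measure_distance(obj)))  # steps S208-S211
        if time.monotonic() - start > limit_period_s:    # step S213
            break
    return results

# Toy run: the second object "fails" to classify within its budget.
out = recognize_frame(
    ["car", "unknown"],
    classify=lambda o, b: 0b0010 if o == "car" else None,
    measure_distance=lambda o: 0b0011,
    limit_period_s=0.1,
)
```
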
 《Detailed Operation of the User Device (Safety Support Device)》
 FIG. 16 is a flowchart showing an example of the detailed processing of the user device of FIG. 3A or 3B. The processing flow of FIG. 16 inherits the usage model of existing wireless communication systems and wireless information devices and terminals, and runs, for example, in their application layer (that is, as application software). In this case, application software compiled for the OS of the information terminal is installed on the information terminal, which has the particular wireless communication function and constitutes the user device. Since the processing flow of FIG. 16 runs as application software, it does not depend on any particular wireless communication scheme.
 In FIG. 16, when the user device 40, 50 launches this application software and the cyber curve mirror mode is set, it enters a search mode for establishing a communication link with a monitoring device. The user device 40, 50 establishes the communication link with the monitoring device and, once synchronized, begins receiving data using the wireless communication unit 41, 51 of FIG. 3A or 3B (step S301). The following description uses the example of receiving the per-site data described in FIG. 14. From the received per-site data, the user device 40, 50 identifies the data of each monitoring device (step S302) and detects the per-site extended information (step S311).
 Next, the user device 40, 50 detects the direction information contained in the data of each monitoring device (step S303). The user device 40, 50 then determines its own direction of travel from the direction detection unit (compass function) 49 described above, and by comparing this direction of travel with the direction information detected in step S303, selects the necessary data from among the per-device data (step S304). The user device 40, 50 then sets one of the selected data items as the processing target (step S305).
 Next, the user device 40, 50 detects each unit of moving-object information in the data being processed (step S306), and also detects the per-device extended information in that data (step S312). For each unit of moving-object information, the user device 40, 50 then determines the icon, symbol, or the like corresponding to its image recognition ID based on FIG. 11A, and determines the coordinates at which to display it based on its distance information and the corresponding direction information (step S307). That is, based on the direction information contained in the data being processed in step S306 (the per-device data), the user device 40, 50 determines the compass direction in which to display the icon or symbol, as shown in FIG. 12B, and based on the distance information corresponding to that icon or symbol (image recognition ID) and the scale of the screen actually displayed, determines the coordinates along that direction. The information shown in FIG. 11A is included in the application software of the user device (information terminal).
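Step S307 can be sketched as an ID-to-symbol lookup plus a polar-to-screen conversion. The ID-to-symbol table of FIG. 11A is not reproduced in this text, so the entries below, the bearing table, and the scale parameter are illustrative assumptions.

```python
import math

# Sketch of step S307: map an image recognition ID to a display symbol and
# place it at screen coordinates derived from the device's direction
# information and the object's distance.
SYMBOLS = {0b0010: "car-icon", 0b0100: "pedestrian-icon",
           0b0001: "recognizing-icon"}
BEARING_DEG = {0b0001: 90.0, 0b0010: 180.0, 0b0011: 270.0}  # E, S, W (cf. FIG. 9C)

def place_symbol(rec_id, distance_m, direction4, px_per_m=20.0):
    """Return (symbol, x, y) with the user position at the origin,
    +y pointing north, scaled by the display's zoom (px_per_m)."""
    theta = math.radians(BEARING_DEG[direction4])
    x = distance_m * px_per_m * math.sin(theta)   # east is +x
    y = distance_m * px_per_m * math.cos(theta)   # north is +y
    return SYMBOLS.get(rec_id, "generic-icon"), round(x), round(y)

# A car 5 m away on the eastern arm lands 100 px to the right of the user.
sym, x, y = place_symbol(0b0010, distance_m=5.0, direction4=0b0001)
```
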
 Subsequently, the user device 40, 50 judges whether the processing of steps S306, S312, and S307 has been completed for all of the data selected in step S305 (the per-device data) (step S308). If unprocessed data remains, the user device 40, 50 sets the next data item as the processing target and returns to steps S306 and S312 (step S310). If none remains, the user device 40, 50 renders a display such as that of FIG. 12B on the image display unit included in the user notification unit 45, using the icons, symbols, and coordinates determined in step S307 (step S309).
 In step S309, instead of, or in addition to, the image display unit, the user device 40, 50 may notify or warn the user by controlling the audio output unit or vibration unit included in the user notification unit 45. When displaying on the image display unit, the user device 40, 50 also displays images based on the extended information detected in steps S311 and S312, using the information shown in FIG. 11B, which is included in the application software.
 As described above, using the safety support system and safety support devices of the first embodiment reduces, typically, vehicle-to-vehicle and vehicle-to-pedestrian collisions caused by encounters at blind, unsignalized intersections, T-junctions, and the like, improving safety. The use of image recognition IDs and distance information improves real-time performance while also protecting personal information, and the use of direction information allows the necessary data to be selected appropriately.
 (Embodiment 2)
 The first embodiment above was described mainly for the case where a monitoring device (cyber curve mirror) equipped with a monitoring camera having a standard lens is installed at a four-way intersection. The second embodiment is described for the case where a monitoring device equipped with a monitoring camera having a special lens is installed at a three-way junction (T-junction) or on a sharp curve. The viewing angle of a standard lens is usually around 25° to 50°; known special lenses include wide-angle lenses with viewing angles of around 60° to 100° and fisheye lenses with viewing angles of 180° or more.
 《Installation Example of a Monitoring Device with a Special Lens (Cyber Curve Mirror) on a Sharp Curve》
 FIG. 17 is a plan view showing an example of installing the monitoring device on a sharp curve, and its operation there, in the safety support system according to the second embodiment of the present invention. In FIG. 17, a curve mirror is installed at the apex of the sharp curve, and a monitoring device (cyber curve mirror) 10 is attached to it. Here the monitoring device 10 is equipped with a monitoring camera having a fisheye lens. In such a case, the monitoring device 10 splits the captured image into left and right halves at the center and behaves as if each half had been captured by one of two pseudo monitoring devices 90a, 90b. The operations described in the first embodiment can then be applied as they are.
 Specifically, the monitoring device 10 splits the captured image into left and right halves at the center, treats the right half as the captured image of pseudo monitoring device 90a and the left half as that of pseudo monitoring device 90b, performs image recognition and distance measurement on each of the two images in the same way as in the first embodiment, and generates two sets of per-device data as shown in FIG. 14 and elsewhere. When the monitoring device 10 is installed, the split information of the captured images ("1000": right, "1001": left) is set in the monitoring device 10 as the direction information of each half image (each pseudo monitoring device), as shown in FIG. 9C.
 As a result, in FIG. 17, the user device of a vehicle entering from the lower right acquires the information of the pseudo monitoring device (left) 90b, and the user device of a vehicle entering from the lower left acquires the information of the pseudo monitoring device (right) 90a. More specifically, based on its own direction detection unit (compass function) 49, the user device acquires the information of the pseudo monitoring device (left) 90b when its direction of travel leans to the left, and that of the pseudo monitoring device (right) 90a when it leans to the right. In this way, information about the blind spot of the sharp curve is obtained in addition to the information from the optical curve mirror.
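The half-image split and the lean-based selection can be sketched as follows. The pixel-row image representation and the signed-lean convention are illustrative assumptions; only the split codes "1000" (right) and "1001" (left) come from FIG. 9C.

```python
# Sketch of the pseudo-monitoring-device scheme of FIG. 17: the fisheye
# image is split at the center into right ("1000") and left ("1001")
# halves, and a user device leaning left picks the left pseudo device.
DIR_RIGHT, DIR_LEFT = 0b1000, 0b1001  # split information of FIG. 9C

def split_fisheye(frame_rows):
    """frame_rows: list of equal-length pixel rows; returns the two
    pseudo-device images keyed by their direction code."""
    mid = len(frame_rows[0]) // 2
    return {DIR_RIGHT: [row[mid:] for row in frame_rows],
            DIR_LEFT:  [row[:mid] for row in frame_rows]}

def select_pseudo_device(heading_lean_deg):
    """Negative lean = travelling toward the left, so the blind spot is
    covered by the left pseudo device; positive lean picks the right one."""
    return DIR_LEFT if heading_lean_deg < 0 else DIR_RIGHT

halves = split_fisheye([[1, 2, 3, 4], [5, 6, 7, 8]])
```
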
 Although "1000" (right) and "1001" (left) were defined here as the direction information for sharp curves, compass directions such as north, south, east, and west can also be used if the user device can determine the shape of the sharp curve in addition to its own direction of travel. In the example of FIG. 17, for instance, "0101" (southeast) would be set as the direction information of the pseudo monitoring device (right) 90a and "0110" (southwest) as that of the pseudo monitoring device (left) 90b. The user device of a vehicle heading northwest would recognize that the sharp curve bends toward the southwest and acquire the information of the pseudo monitoring device (left) 90b.
 《Installation Example of a Monitoring Device with a Special Lens (Cyber Curve Mirror) at a Three-Way Junction》
 FIGS. 18A and 18B are plan views showing examples of the typical placement of curve mirrors at a three-way junction. FIG. 18C is a plan view showing an example of installing the monitoring device at a three-way junction, and its operation there, in the safety support system according to the second embodiment of the present invention. At a three-way junction (T-junction), optical curve mirrors are usually placed as in FIG. 18A or 18B. In the case of FIG. 18A, a vehicle entering from below can see moving objects coming from the right, and conversely a vehicle entering from the right can see moving objects coming from below. That is, with left-hand traffic, the minimum requirement is to notice the danger of moving objects entering from the right, and placing a single one-faced optical curve mirror as in FIG. 18A satisfies it.
 To improve safety further, however, it is preferable to place a single two-faced optical curve mirror at the head of the vertical road of the T-junction, as in FIG. 18B. In this case, a vehicle entering from below can see moving objects coming from the left and right, and vehicles entering from the left or right can see moving objects coming from below. If a monitoring device is attached to the curve mirror of FIG. 18B using a monitoring camera with a standard lens, two monitoring devices must be installed so as to image the same directions as are reflected in the two mirror faces, as described in the first embodiment.
 In the second embodiment, by contrast, a single monitoring device 10 is attached to the curve mirror of FIG. 18B, as shown in FIG. 18C, and a fisheye lens with a 180° viewing angle is applied to its monitoring camera. As in FIG. 17, the monitoring device 10 then splits the captured image into three 60° segments and behaves as if each segment had been captured by one of three pseudo monitoring devices 91a, 91b, 91c. Specifically, the monitoring device 10 treats the right segment as the captured image of pseudo monitoring device 91a, the center segment as that of pseudo monitoring device 91b, and the left segment as that of pseudo monitoring device 91c, performs image recognition and distance measurement on each of the three images in the same way as in the first embodiment, and generates three sets of per-device data as shown in FIG. 14 and elsewhere.
 When the monitoring device 10 is installed, azimuth information for each of the captured sub-images (each pseudo monitoring device) is set in the monitoring device 10 in the same manner as in the first embodiment. For example, using the azimuth information shown in FIG. 9C, "0001" (east) is set for pseudo monitoring device 91a, "0010" (south) for pseudo monitoring device 91b, and "0011" (west) for pseudo monitoring device 91c. The user device of an approaching vehicle can then obtain the information it needs by the same operation as in the first embodiment. In FIG. 18C, for example, the user device of a vehicle approaching northward needs information about the directions to its left and right (east-west). That user device therefore selects the data from pseudo monitoring devices 91a and 91c for processing, and notifies or warns the user with images, sound, vibration, and the like based on that data.
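The direction-based selection just described can be sketched as follows. The east/south/west codes are the ones quoted from FIG. 9C; the heading names and the perpendicular-selection rule are illustrative assumptions.

```python
# Azimuth codes taken from FIG. 9C as quoted in the text; only the
# east/south/west codes appear there, so any others are assumptions.
CODE_TO_DIR = {"0001": "east", "0010": "south", "0011": "west"}

# For each heading, the cross-traffic (left/right) directions.
PERPENDICULAR = {
    "north": {"east", "west"}, "south": {"east", "west"},
    "east": {"north", "south"}, "west": {"north", "south"},
}

def select_devices(heading, device_azimuths):
    """Return the pseudo monitoring devices whose camera azimuth is
    perpendicular to the vehicle's heading (the left/right views)."""
    wanted = PERPENDICULAR[heading]
    return sorted(dev for dev, code in device_azimuths.items()
                  if CODE_TO_DIR.get(code) in wanted)

# Vehicle entering northward at the T-junction of FIG. 18C:
devices = {"91a": "0001", "91b": "0010", "91c": "0011"}
print(select_devices("north", devices))  # ['91a', '91c']
```

The heading itself would come from the user device's azimuth detection unit, as in the first embodiment.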
 When one monitoring device is operated as multiple pseudo monitoring devices, as in FIGS. 17 and 18C, further extended information may be added. That is, in addition to the per-monitoring-device azimuth information and extended information, it is also beneficial to add extended information per installation site, like the per-installation-site data of FIG. 14 described above. For example, adding road-shape information such as "1000" (sharp curve), "1001" (T-junction), or "1011" (three-way junction) as this extended information, as shown in FIG. 9D, assists the user device when it selects among the data from the multiple pseudo monitoring devices. Specifically, the direction of the information a vehicle needs may change with the road shape; in such cases, even if the user device has no capability of its own to recognize the road shape, receiving the road shape from outside as extended information allows it to make decisions suited to that shape.
 FIG. 19 is a plan view showing an application example of FIG. 18C. The safety support system of this embodiment can be applied not only to outdoor roads but also to indoor passageways such as the one shown in FIG. 19. For example, in locations with poor visibility inside a facility such as a hospital, a safety mirror using a reflector may be installed instead of the curve mirror of FIG. 18C, as illustrated in FIG. 19. The monitoring device 10 can be mounted alongside such a safety mirror. In this case, the monitoring device 10 can also be linked with a warning lamp 92, an alarm sound, and the like. For example, when a moving body is detected in two or more of the captured images of the pseudo monitoring devices 91a, 91b and 91c, the monitoring device 10 lights the warning lamp 92 or sounds an alarm. This makes it possible to notify and warn each pedestrian entering the intersection of the passageways of the presence of a moving body (for example, another pedestrian) approaching from a different direction, preventing collisions at blind corners.
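The two-or-more-views trigger for the warning lamp described above reduces to a simple counting rule; the detection counts below are illustrative.

```python
def should_warn(detections):
    """Trigger the warning lamp / alarm when moving bodies appear in
    two or more of the three pseudo-device images (as in FIG. 19).

    `detections` maps a pseudo-device label to the number of moving
    bodies detected in its sub-image.
    """
    active_views = sum(1 for count in detections.values() if count > 0)
    return active_views >= 2

print(should_warn({"91a": 1, "91b": 0, "91c": 2}))  # True
print(should_warn({"91a": 0, "91b": 1, "91c": 0}))  # False
```

With only one occupied view there is no crossing traffic to collide with, so the lamp stays off.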
 In recent years, as part of "creating safe and secure environments," round mirrors, FF (Fantastic Flat) mirrors, garage mirrors, and dome or half-dome mirrors have come into use on facility premises as well. The safety support system of this embodiment is not limited to environments like FIG. 19 and can be applied to the various environments in which such mirrors are installed. In every case, azimuth information must be set for each captured screen of the monitoring camera (including divided captured screens); this makes it possible to properly notify and warn of moving bodies using that azimuth information.
 As described above, using the safety support system and safety support device of the second embodiment yields, in addition to the various effects described in the first embodiment, a reduction in the number of physically installed monitoring devices, which allows cost reductions and the like compared with the first embodiment.
 (Embodiment 3)
 The first embodiment described, on the premise of a single intersection, a method for appropriately selecting the information of each monitoring device (monitoring camera) installed at that intersection. In an actual road environment, however, multiple intersections may lie close together. In such cases the user device may receive data from the monitoring devices of several intersections, so in addition to selecting data per monitoring device within an intersection, it must also select data per intersection. The safety support system of the third embodiment performs this per-intersection data selection.
 《General matters concerning wireless communication》
 FIGS. 20A and 20B are explanatory diagrams showing examples of general wireless LAN characteristics. FIG. 20A shows an example of the relationship between the distance from an access point (hereinafter abbreviated AP) and the received signal strength (RSSI: Received Signal Strength Indicator); RSSI decreases as the distance from the AP increases. FIG. 20B shows an example of the relationship between the number of receiving terminals connected to one AP and the per-terminal throughput (amount of communicated information); throughput decreases as the number of receiving terminals increases.
 Therefore, reducing the amount of information by using image recognition IDs, distance information, and the like, as in the safety support system of this embodiment, is beneficial not only from the real-time standpoint described in the first embodiment but also from the standpoint of FIG. 20B. That is, by reducing the amount of information, it is possible to increase the number of users who can receive the cyber curve mirror service at an unsignalized intersection (in other words, the number of user devices that can receive data from each monitoring device (AP)).
 FIG. 21 is a sequence diagram showing an example of the communication procedure between a wireless LAN access point (AP) and a wireless LAN terminal. As shown in FIG. 21, the AP broadcasts beacon frames; the wireless LAN terminal receives beacon frames for a fixed period and searches for an AP whose ESSID (Extended Service Set Identifier) matches. After authenticating with that AP through a prescribed procedure, the terminal starts receiving data. The beacon frame and other frames contain the AP's MAC (Media Access Control) address. The safety support system of the third embodiment uses this MAC address to achieve real-time AP switching (handover).
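A minimal sketch of the passive-scan step of FIG. 21 — collect beacons for a period and pick the AP whose ESSID matches — assuming a simplified beacon tuple rather than the real 802.11 frame layout.

```python
def find_ap(beacons, wanted_essid):
    """Scan received beacon frames and return the MAC address of the
    first AP whose ESSID matches (the search step of FIG. 21).

    Each beacon is modelled as an (essid, mac, rssi) tuple; these
    field names are illustrative, not the actual frame format.
    """
    for essid, mac, rssi in beacons:
        if essid == wanted_essid:
            return mac
    return None

beacons = [("home-net", "aa:bb:cc:00:00:01", -70),
           ("cyber-mirror", "aa:bb:cc:00:00:02", -55)]
print(find_ap(beacons, "cyber-mirror"))  # aa:bb:cc:00:00:02
```

The returned MAC address is what the third embodiment later stores and uses to force an early handover.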
 《Method of selecting an intersection in the safety support system》
 FIG. 22 is a plan view showing an application of the safety support system according to the third embodiment of the present invention to a road environment in which intersections lie close together. In the example of FIG. 22, a three-way junction (T-junction) [1], at which a vehicle entering from the south can proceed east or west, is connected by the westward road to a three-way junction (T-junction) [2], at which a vehicle entering from the west side can proceed west or north; the two junctions lie close together. At the three-way junction (T-junction) [1], a monitoring device 10[1] with a fisheye lens is installed in the manner described with FIG. 18C in the second embodiment, and monitoring device 10[1] includes a wireless communication unit that serves as an access point (AP1). Similarly, a monitoring device 10[2] with a fisheye lens is installed at the three-way junction (T-junction) [2], and monitoring device 10[2] includes a wireless communication unit that serves as an access point (AP2).
 FIG. 23 is an explanatory diagram showing an operation example of the safety support system when passing through the three-way junctions (T-junctions) of FIG. 22. As shown in FIGS. 22 and 23, consider a vehicle that enters three-way junction (T-junction) [1] going straight from the south (step S401), turns left toward the west (step S402), proceeds straight toward three-way junction (T-junction) [2] from the west side (step S403), and turns right toward the north (step S404). FIG. 23 shows how the received signal strength (RSSI) changes during this passage.
 In FIG. 23, the user device mounted on the vehicle approaches AP1 while linked to AP1. The RSSI at the user device's wireless communication unit increases as the vehicle nears AP1 (step S401), reaches its maximum when the vehicle turns left (at the midpoint of the corner) (step S402), and, if the vehicle then continues straight after the turn, falls off with distance from AP1 (step S403). In general, a wireless LAN terminal maintains its link to AP1 unless it receives a beacon frame or the like with an RSSI value greater than AP1's. As a result, handover to AP2 occurs only after the vehicle passes, for example, the midpoint between AP1 and AP2.
 However, because the safety support system of this embodiment requires real-time performance, if the general wireless LAN handover method described above were used in a road environment like that of FIG. 22, the approaching vehicle would reach the next T-junction immediately after handing over to AP2. The user device of the approaching vehicle might then be unable to secure enough time to obtain information about other moving bodies in the blind spot and to notify and warn the user accordingly. The safety support system of the third embodiment therefore uses the following method, based on the AP's RSSI and MAC address, to advance the handover.
 First, in steps S401 and S402, the wireless communication unit of the user device stores AP1's MAC address, monitors AP1's RSSI, and records the RSSI peak value (RSSI_p) that occurs when the vehicle turns left in step S402. The wireless communication unit is configured in advance with a period (t) measured from the RSSI peak (RSSI_p), an RSSI threshold (RSSI_th), or a combination of the two. Here, an RSSI threshold (RSSI_th) is used; the threshold is determined from the relationship of FIG. 20A in view of the distance to AP1.
 The wireless communication unit recognizes that the left turn is complete when it detects that the RSSI has fallen from the peak value (RSSI_p) to the threshold (RSSI_th); it then breaks the link with AP1 and starts searching for the next AP. That is, the search for the next AP already takes place during the left turn of step S402. If the AP with the largest RSSI were simply taken as the search target, the link with AP1 would merely be re-established; however, since the wireless communication unit has stored AP1's MAC address at this point, it performs an exclusive search based on that MAC address. That is, the wireless communication unit searches for a beacon frame containing a MAC address other than the stored MAC address of AP1. When such a beacon frame is found, a link is established with its sending AP (here, AP2), and the handover to that AP is completed.
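The early-handover logic just described — record the serving AP's MAC and RSSI peak, break the link once the RSSI falls from the peak to the threshold, then associate only with a beacon carrying a different MAC — can be sketched as a small state machine. Threshold values, method names, and the returned state labels are illustrative assumptions, not the patented implementation.

```python
class EarlyHandover:
    """Minimal sketch of the MAC-exclusion handover of Embodiment 3."""

    def __init__(self, rssi_threshold):
        self.rssi_th = rssi_threshold   # RSSI_th from FIG. 20A
        self.serving_mac = None
        self.rssi_peak = None           # RSSI_p
        self.searching = False

    def on_beacon(self, mac, rssi):
        if self.serving_mac is None:            # initial association
            self.serving_mac, self.rssi_peak = mac, rssi
            return "linked"
        if self.searching:
            if mac != self.serving_mac:         # exclusive search: new MAC only
                self.serving_mac, self.rssi_peak = mac, rssi
                self.searching = False
                return "handover"
            return "searching"
        if mac == self.serving_mac:
            self.rssi_peak = max(self.rssi_peak, rssi)
            # Peak already seen and RSSI now at/below threshold:
            # the turn is complete, so drop the link and start searching.
            if rssi <= self.rssi_th < self.rssi_peak:
                self.searching = True
                return "unlink"
        return "linked"

ho = EarlyHandover(rssi_threshold=-65)
print(ho.on_beacon("AP1", -70))  # linked (initial association)
print(ho.on_beacon("AP1", -50))  # linked (peak at the corner)
print(ho.on_beacon("AP1", -66))  # unlink (fell from peak to threshold)
print(ho.on_beacon("AP1", -67))  # searching (same MAC is excluded)
print(ho.on_beacon("AP2", -72))  # handover (different MAC found)
```

Note how the AP1 beacon at -67 dBm is ignored even though it is the strongest signal available: the stored MAC address forces the unit to wait for AP2.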
 This allows handover to the next AP (here, AP2) at an earlier stage than with the general wireless LAN procedure (for example, immediately after the left turn of step S402), so the user device can notify and warn the user in real time based on the data from monitoring device 10[2]. Moreover, in practice the user device has no need to receive data from monitoring device 10[1] after the left turn of step S402, so this early handover is beneficial. Using this handover method makes it possible to select the data of each intersection appropriately (in real time).
 《Summary of the embodiments》
 The main features of the safety support systems and safety support devices of the embodiments described above are summarized as follows.
 《Safety support system according to this embodiment [1]》
 (1-1) The safety support system according to this embodiment has a monitoring device (10) installed alongside a curve mirror and including a monitoring camera (27, 28), an image processing unit (21), and a first wireless communication unit (23), and a user device (40, 50) carried by a user and including a second wireless communication unit (41, 51) and an information processing unit (47, 48). The image processing unit executes first through third processes. The first process detects, in the captured image of the monitoring camera, N moving bodies (N is an integer of 1 or more) present within a predetermined image recognition range. The second process determines, for each of the N moving bodies, the type of the moving body and generates a first identifier (image recognition ID) whose value, chosen from a plurality of predefined values, corresponds to the determination result. The third process determines, for each of the N moving bodies, the distance from a predetermined reference position (the entrance of the intersection) based on its coordinates in the captured image, and generates a second identifier (distance information) whose value, chosen from a plurality of predefined values, corresponds to the determination result. The first wireless communication unit transmits a data signal containing the N pairs of first and second identifiers generated for the N moving bodies. The second wireless communication unit receives the data signal transmitted from the first wireless communication unit, and the information processing unit recognizes the presence of moving bodies based on the N pairs of first and second identifiers contained in the data signal and performs predetermined processing for notifying the user of that presence.
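A hedged sketch of the data signal just described: each of the N detected moving bodies contributes one pair of 4-bit identifiers. Only the "unrecognized type" code "0001" is given in the text (see (1-2)); every other code value below is a placeholder assumption.

```python
# Image-recognition-ID codes: "0001" (unknown) is taken from the text;
# the other type and distance codes are illustrative placeholders.
TYPE_ID = {"unknown": "0001", "pedestrian": "0010", "car": "0011"}
DIST_ID = {"near": "0001", "mid": "0010", "far": "0011"}

def build_data_signal(detections):
    """Pack N (type, distance) detections into N identifier pairs,
    the compact payload transmitted instead of the captured image."""
    return [(TYPE_ID[t], DIST_ID[d]) for t, d in detections]

signal = build_data_signal([("car", "near"), ("pedestrian", "far")])
print(signal)  # [('0011', '0001'), ('0010', '0011')]
```

Two detections thus cost a few bytes rather than a video frame, which is the source of the bandwidth and real-time gains claimed below.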
 In this way, the monitoring device does not transmit the captured image itself to the user device; instead, it transmits identifiers standing in for the type of each moving body detected in the captured image and its distance from the predetermined reference position. This reduces the amount of data the monitoring device must transmit and lets the user device recognize the presence of moving bodies from a small amount of data. As a result, real-time performance improves and greater safety becomes achievable. Furthermore, because identifiers are used, privacy is protected.
 (1-2) In (1-1) above, the plurality of values defined for the first identifier includes a value generated when the type of the moving body could not be determined (image recognition ID = "0001"). Thus, even if an unidentified moving body is present, its presence can still be reported to the user, improving safety.
 (1-3) In (1-2) above, the information processing unit further includes an azimuth detection unit (49) that detects the user's direction of travel, and the first wireless communication unit additionally includes in the data signal a third identifier (azimuth information) indicating the imaging azimuth of the monitoring camera. The user device can then determine whether a received data signal is one it needs by comparing the detection result of the azimuth detection unit with the third identifier. Because this determination is made on the user device side, a monitoring device can be installed on any curve mirror with any imaging azimuth, and more than one monitoring device can be installed.
 (1-4) In (1-3) above, the user device further includes an image display unit (11, 45). The information processing unit is configured with an icon or symbol for each of the plurality of values defined for the first identifier, and, for each of the N pairs of first and second identifiers, displays on the image display unit the icon or symbol corresponding to the first identifier at coordinates corresponding to the second identifier. The presence of moving bodies can thus be reported to the user visually.
 (1-5) In (1-3) above, the user device further includes an audio output unit (12, 45) or a vibration unit (13, 45), and the information processing unit controls the audio output unit or vibration unit based on the N pairs of first and second identifiers. The presence of moving bodies can thus be reported to the user by means other than vision, which in some cases draws attention more reliably.
 (1-6) In (1-3) above, the image recognition range used in the first process of the image processing unit (the range based on the "+" marker) can be set arbitrarily. How wide an area's moving bodies are reported to the user can thus be customized to the circumstances of the intersection and the like.
 (1-7) In (1-3) above, the user is not limited to the driver of a vehicle such as an automobile and may be a pedestrian or the like. That is, safety can also be improved for pedestrians (for example, children or the elderly) carrying a mobile phone or similar device equipped with a navigation system.
 (1-8) In (1-3) above, the image processing unit divides the captured image by a predetermined viewing angle and executes the first through third processes described above for each of the divided captured images. The first wireless communication unit adds to the data signal of each divided captured image the third identifier for that image, and transmits them. Thus, even when the monitoring camera has a wide-angle lens or fisheye lens with a viewing angle of, for example, 90° or more, processing the image as multiple divided captured images and attaching the imaging azimuth of each lets the user device determine whether each received data signal is one it needs.
 《Safety support system according to this embodiment [2]》
 (2-1) Another safety support system according to this embodiment has a plurality of monitoring devices (10a, 10b, etc.), each installed alongside a curve mirror and including a monitoring camera (27, 28), an image processing unit (21), and a first wireless communication unit (23), and a user device (40, 50) carried by a user and including a second wireless communication unit (41, 51) and an information processing unit (47, 48). Each image processing unit in the plurality of monitoring devices executes first through third processes. The first process detects, in the captured image of its own monitoring camera, N moving bodies (N is an integer of 1 or more) present within a predetermined image recognition range. The second process determines the type of each of the N moving bodies and generates a first identifier (image recognition ID) whose value, chosen from a plurality of predefined values, corresponds to the determination result. The third process determines, for each of the N moving bodies, the distance from a predetermined reference position (the entrance of the intersection) based on its coordinates in the captured image, and generates a second identifier (distance information) whose value, chosen from a plurality of predefined values, corresponds to the determination result. Each first wireless communication unit in the plurality of monitoring devices transmits a data signal containing the N pairs of first and second identifiers generated for the N moving bodies by its corresponding image processing unit. The second wireless communication unit receives the data signals transmitted from the first wireless communication units of the plurality of monitoring devices, and the information processing unit recognizes the presence of moving bodies based on the N pairs of first and second identifiers contained in each data signal and performs predetermined processing for notifying the user of that presence.
 In this way, this other safety support system of this embodiment builds on the safety support system of (1-1), with the user device additionally receiving data signals from a plurality of monitoring devices. As the number of monitoring devices grows, the amount of data the user device receives grows with it; but, as stated in (1-1), because this safety support system uses identifiers, that data amount can be kept small. As a result, as in (1-1), safety is improved and privacy is protected.
 (2-2) In (2-1) above, the information processing unit further includes an azimuth detection unit (49) that detects the user's direction of travel, and each of the first wireless communication units in the plurality of monitoring devices additionally includes in its own data signal a third identifier (azimuth information) indicating the imaging azimuth of its own monitoring camera. As in (1-3), when the user device receives data signals from a plurality of monitoring devices, it can determine for each data signal whether the information is needed.
 (2-3) In (2-2) above, the plurality of monitoring devices includes first and second monitoring devices installed alongside curve mirrors placed at different intersections. Each first wireless communication unit in the plurality of monitoring devices further appends to its data signal device identification information (a MAC address) for identifying the monitoring device, and transmits it. The second wireless communication unit additionally monitors the radio signals transmitted from the first wireless communication units of the first and second monitoring devices, and detects the device identification information contained in each radio signal and the signal strength of each radio signal. While the information processing unit is processing the data signal from the first monitoring device (10[1]), if the signal strength of the radio signal from the first monitoring device decreases from its peak value by a predetermined amount, the information processing unit stops processing the data signal from the first monitoring device and waits for the second wireless communication unit to receive a radio signal containing device identification information different from that of the first monitoring device. If, in this waiting state, the second wireless communication unit receives a radio signal from the second monitoring device (10[2]), the information processing unit processes the data signal from the second monitoring device.
 Thus, even when monitoring devices are installed at intersections relatively close to one another, the user device can distinguish the different monitoring devices and determine which data signals it truly needs according to its current position. That is, for the data signals from multiple monitoring devices at the same intersection, the need for each data signal can be determined by the third identifier; in addition, for the data signals from monitoring devices at different intersections, the need for each data signal can be determined from the device identification information and the signal strength.
 (2-4) In (2-2) above, when the four corners of an intersection are labeled first, second, third, and fourth in clockwise order, two of the plurality of monitoring devices are provided alongside a curve mirror placed at the first corner. One of the two monitoring devices is installed so as to image the area whose entrance to the intersection lies between the second and third corners, and the other is installed so as to image the area whose entrance lies between the third and fourth corners. In this case, simply installing each device's monitoring camera with its optical axis at the same depression angle allows the image processing of each monitoring device to run under the same conditions. No individual per-device adjustment is therefore required when installing multiple monitoring devices, which improves installation efficiency.
 << Safety support device (monitoring device) according to this embodiment >>
 (3-1) The safety support device (monitoring device) according to this embodiment includes a monitoring camera (27, 28), an image processing unit (21), and a first wireless communication unit (23). The image processing unit executes first to third processes. The first process detects, in the image captured by the monitoring camera, N (N is an integer of 1 or more) moving bodies present within a predetermined image recognition range. The second process determines, for each of the N moving bodies, the type of the moving body and generates a first identifier (image recognition ID) whose value, selected from a plurality of predefined values, corresponds to the determination result. The third process determines, for each of the N moving bodies, the distance from a predetermined reference position (the entrance of the intersection) based on the coordinates in the captured image, and generates a second identifier (distance information) whose value, selected from a plurality of predefined values, corresponds to the determination result. The first wireless communication unit transmits a data signal containing the N pairs of first and second identifiers generated for the N moving bodies. This provides the same effects as in (1-1) above.
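As one possible illustration of the data signal of (3-1) with the azimuth identifier of (3-4) added, the N identifier pairs could be packed as below. The field widths and byte layout are hypothetical; the embodiment defines the fields but not a concrete wire format.

```python
# Hypothetical packing: 1 byte of azimuth code, 1 byte holding N, then one
# byte per moving body with a 4-bit image-recognition ID (type) in the high
# nibble and a 4-bit distance code in the low nibble.
UNKNOWN_TYPE = 0b0001  # the value generated when the type cannot be determined

def build_data_signal(azimuth_code, bodies):
    """bodies: list of (type_id, distance_code) pairs for N moving bodies."""
    frame = bytearray([azimuth_code & 0xFF, len(bodies) & 0xFF])
    for type_id, distance_code in bodies:
        frame.append(((type_id & 0x0F) << 4) | (distance_code & 0x0F))
    return bytes(frame)

def parse_data_signal(frame):
    """Inverse of build_data_signal: recover azimuth and the N pairs."""
    azimuth_code, n = frame[0], frame[1]
    bodies = [((b >> 4) & 0x0F, b & 0x0F) for b in frame[2:2 + n]]
    return azimuth_code, bodies
```

Because only small enumerated codes are broadcast rather than the captured image itself, the payload stays a few bytes per moving body, consistent with the privacy-preserving design discussed later.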
 (3-2) In (3-1) above, the safety support device is provided alongside a curve mirror. This makes it possible to improve safety at intersections and other locations with poor visibility.
 (3-3) In (3-2) above, the plurality of values defined for the first identifier include a value generated when the type of the moving body could not be determined (image recognition ID = "0001"). This provides the same effects as in (1-2) above.
 (3-4) In (3-3) above, the first wireless communication unit additionally includes in the data signal a third identifier (azimuth information) indicating the imaging azimuth of the monitoring camera. This provides the same effects as in (1-3) above.
 (3-5) In (3-4) above, the image processing unit divides the captured image into segments of a predetermined viewing angle and executes the first to third processes for each of the divided images. The first wireless communication unit includes, in the data signal for each divided image, the third identifier for that divided image. This provides the same effects as in (1-8) above.
 << Safety support device (user device) according to this embodiment >>
 (4-1) The safety support device (user device) according to this embodiment is a device carried by a user, and includes a second wireless communication unit (41, 51) that receives from an external monitoring device information about N (N is an integer of 1 or more) moving bodies detected from the monitoring device's captured image, and an information processing unit (47, 48). The second wireless communication unit receives a data signal containing a first identifier (image recognition ID) representing the type of each of the N moving bodies and a second identifier (distance information) representing the distance of each of the N moving bodies from a predetermined reference position. The information processing unit recognizes the presence of the moving bodies based on the N pairs of first and second identifiers contained in the received data signal, and performs predetermined processing to notify the user of their presence. This provides the same effects as in (1-1) above.
 (4-2) In (4-1) above, the device further includes an azimuth detection unit (49) that detects the user's direction of travel, and the second wireless communication unit receives, as part of the data signal and in addition to the N pairs of first and second identifiers, a third identifier (azimuth information) indicating the imaging azimuth of the captured image. This provides the same effects as in (1-3) above.
 (4-3) In (4-2) above, the device further includes an image display unit (11, 45). The information processing unit is preconfigured with icons or symbols corresponding to each of the plurality of values defined for the first identifier and, for each of the N pairs of first and second identifiers, causes the image display unit to display the icon or symbol corresponding to the first identifier at coordinates corresponding to the second identifier. This makes it possible to notify the user visually of the presence of the moving bodies.
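The display processing of (4-3) amounts to a lookup from the first identifier to an icon plus a coordinate derived from the second identifier. A minimal sketch follows, in which the icon table and the linear distance-to-pixel mapping are illustrative assumptions rather than values from the embodiment:

```python
# Hypothetical icon table: the embodiment only states that each predefined
# first-identifier value has a corresponding icon or symbol.
ICONS = {1: "unknown", 2: "pedestrian", 3: "bicycle", 4: "car"}

def layout_icons(bodies, origin_y=300, pixels_per_step=30):
    """bodies: list of (type_id, distance_code) pairs.
    Returns (icon, x, y) placements; a larger distance code is drawn
    higher on the screen (farther from the intersection entrance)."""
    placed = []
    for i, (type_id, distance_code) in enumerate(bodies):
        icon = ICONS.get(type_id, "unknown")      # unrecognized types fall back
        x = 40 + 60 * i                           # spread bodies horizontally
        y = origin_y - pixels_per_step * distance_code
        placed.append((icon, x, y))
    return placed
```

A display built this way never needs the captured image itself: the user sees only abstract icons at distance-scaled positions, which is what lets the system broadcast openly while protecting the privacy of the people being imaged.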
 (4-4) In (4-2) above, the user device further includes an audio output unit (12, 45) or a vibration unit (13, 45), and the information processing unit controls the audio output unit or the vibration unit based on the N pairs of first and second identifiers. This makes it possible to notify the user of the presence of the moving bodies by means other than vision, which in some cases can attract attention more effectively.
 (4-5) In (4-2) above, the user is not limited to the driver of a vehicle such as an automobile, and may also be a pedestrian. That is, safety can also be improved for pedestrians (for example, children or the elderly) carrying a mobile phone or similar device equipped with a navigation system.
 (4-6) In (4-2) above, the second wireless communication unit further monitors the wireless signals transmitted from a plurality of monitoring devices including first and second monitoring devices, and detects the device identification information (MAC address) contained in each wireless signal for identifying the monitoring devices, together with each signal's received strength. While the information processing unit is processing the data signal from the first monitoring device (10[1]), if the strength of the wireless signal from the first monitoring device falls a predetermined amount below its peak value, the information processing unit stops processing the data signal from the first monitoring device and waits for the second wireless communication unit to receive a wireless signal containing device identification information different from that of the first monitoring device. If, in this waiting state, the second wireless communication unit receives a wireless signal from the second monitoring device (10[2]), the information processing unit processes the data signal from the second monitoring device. This provides the same effects as in (2-3) above.
 << Safety support method according to this embodiment >>
 (5-1) The safety support method according to this embodiment uses a monitoring device (10) and a user device (40, 50). As a first step, the monitoring device captures an image with a monitoring camera and detects N (N is an integer of 1 or more) moving bodies in the captured image. As a second step, for each of the N moving bodies, it determines the type of the moving body and measures its distance from a predetermined reference position. As a third step, for each of the N moving bodies, it generates a first identifier (image recognition ID) representing the type of the moving body and a second identifier (distance information) representing its distance. As a fourth step, it transmits a data signal containing the N pairs of first and second identifiers generated for the N moving bodies. Meanwhile, as a fifth step, the user device receives the data signal transmitted from the monitoring device, recognizes the presence of the moving bodies based on the N pairs of first and second identifiers contained in the data signal, and notifies the user of their presence. This provides the same effects as in (1-1) above.
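The five steps of (5-1) can be sketched end to end with the camera and radio mocked out: detection results stand in for steps one and two, identifier generation is step three, the byte string is the transmitted data signal of step four, and the user-side summary is step five. All concrete codes here (the type table, the 5 m distance steps) are illustrative assumptions, not values from the embodiment.

```python
TYPE_CODES = {"unknown": 1, "pedestrian": 2, "car": 4}   # hypothetical table
NAMES = {v: k for k, v in TYPE_CODES.items()}

def monitor_side(detections):
    """Steps 2-4: detections is a list of (kind, distance_m) pairs
    produced by image recognition; returns the data signal bytes."""
    frame = bytearray([len(detections)])
    for kind, distance_m in detections:
        type_id = TYPE_CODES.get(kind, TYPE_CODES["unknown"])
        distance_code = min(int(distance_m // 5), 15)    # 5 m steps, 4-bit cap
        frame.append((type_id << 4) | distance_code)
    return bytes(frame)

def user_side(frame):
    """Step 5: decode the N identifier pairs and build notification text."""
    n = frame[0]
    msgs = []
    for b in frame[1:1 + n]:
        type_id, distance_code = b >> 4, b & 0x0F
        msgs.append(f"{NAMES.get(type_id, 'unknown')} ~{distance_code * 5} m away")
    return msgs
```

Running `user_side(monitor_side([("car", 12.0)]))` yields a message such as "car ~10 m away", which a real user device would render as an icon, sound, or vibration rather than text.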
 (5-2) In (5-1) above, in the fourth step the monitoring device additionally includes in the data signal a third identifier (azimuth information) indicating the imaging azimuth of the monitoring camera. In the fifth step, the user device further detects the user's direction of travel and compares the detection result with the third identifier to determine whether the received data signal is needed. This provides the same effects as in (1-3) above.
 The invention made by the present inventors has been described above in concrete terms based on an embodiment; however, the present invention is not limited to that embodiment and can be modified in various ways without departing from its gist. For example, the embodiment above is described in detail for ease of understanding, and the invention is not necessarily limited to one having all of the described configurations. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. Moreover, other configurations can be added to, deleted from, or substituted for part of the configuration of each embodiment.
 Here, an example has been shown in which the cyber curve mirror (monitoring device) is installed alongside an optical curve mirror placed at an intersection, T-junction, sharp curve, or other location with poor visibility, but the installation location is not particularly limited. For example, it is also useful to install it at railroad crossings, signalized intersections, parking lot entrances, or residential garages with poor visibility. In such cases it becomes possible, for example, to warn against an unsafe crossing of a railroad or an unsafe entry into a signalized intersection. It is also useful to install it at unsignalized intersections in urban areas used by schoolchildren on their way to school, such as school zones; at indoor and outdoor passages and corridors where near-misses occur; and at vehicle entrances and exits of multi-story or large parking facilities (shopping malls and the like), factories, and warehouses, wherever road reflectors or indoor curve mirrors are placed.
 Since optical curve mirrors are normally placed in locations with poor visibility or where accidents are likely to occur, the cyber curve mirror (monitoring device) is preferably installed mainly alongside such optical curve mirrors. In some cases, however, a cyber curve mirror (monitoring device) can also be installed on its own in a location that has no optical curve mirror.
 Furthermore, by changing the content of the image recognition, it is possible, for example, to apply white-line recognition and vehicle recognition to a parking lot's surveillance camera feed, extract from the video not only the presence of approaching vehicles and pedestrians but also the availability of parking spaces that are not directly visible, and provide this information to the driver through an image display or voice guidance. In this case, because the driver can search for an empty parking space without direct visual inspection, the risk of contact accidents caused by inattention to the road ahead or by looking aside is reduced.
 In addition, by displaying images using image recognition IDs such as icons and symbols after image recognition, a broadcasting information system that takes privacy protection into account has been constructed here despite the use of surveillance cameras; if necessary, however, a mode can also be provided in which the actual captured image is transmitted. For example, a facility manager can use this mode for crime prevention and security purposes, causing the cyber curve mirror (monitoring device) to function as a normal surveillance camera.
 10, 10a-10d Monitoring device (cyber curve mirror)
 11 Image display unit
 12 Audio output unit
 13 Vibration unit
 20 Sensor unit
 21 Image processing / signal generation unit
 22 Azimuth information generation unit
 23 Wireless communication unit
 24 Extended information generation unit
 25 CPU
 26 Bus
 27 Camera sensor
 28 Infrared sensor
 29 Ultrasonic radar
 30 Image recognition unit
 31 Distance measurement unit
 32 Azimuth information storage unit
 33a, 33b Wireless interface
 34 Traffic information storage unit
 40, 50 User device
 41, 51 Wireless communication unit
 42, 52 Mobile phone / smartphone
 43 Navigation system
 44 Drive recorder
 45 User notification unit
 46a, 46b Wireless interface
 47, 48 Information processing unit
 49 Azimuth detection unit
 60 Moving body information
 61 Azimuth information
 62 Extended information
 90a, 90b, 91a-91c, 101a-101c Pseudo monitoring device
 92 Warning light

Claims (17)

  1.  A safety support system comprising:
     a monitoring device provided alongside a curve mirror and including a monitoring camera, an image processing unit, and a first wireless communication unit; and
     a user device carried by a user and including a second wireless communication unit and an information processing unit,
     wherein the image processing unit executes:
     a first process of detecting, in an image captured by the monitoring camera, N (N is an integer of 1 or more) moving bodies present within a predetermined image recognition range;
     a second process of determining, for each of the N moving bodies, the type of the moving body and generating a first identifier having a value, selected from a plurality of predefined values, corresponding to the determination result; and
     a third process of determining, for each of the N moving bodies, a distance from a predetermined reference position based on coordinates in the captured image, and generating a second identifier having a value, selected from a plurality of predefined values, corresponding to the determination result,
     wherein the first wireless communication unit transmits a data signal containing the N pairs of first and second identifiers generated for the N moving bodies,
     wherein the second wireless communication unit receives the data signal transmitted from the first wireless communication unit, and
     wherein the information processing unit recognizes a presence state of the moving bodies based on the N pairs of first and second identifiers contained in the data signal and performs predetermined processing for notifying the user of the presence state of the moving bodies.
  2.  The safety support system according to claim 1,
     wherein the plurality of values defined for the first identifier include a value generated when the type of a moving body could not be determined.
  3.  The safety support system according to claim 2,
     wherein the information processing unit further includes an azimuth detection unit that detects the user's direction of travel, and
     wherein the first wireless communication unit additionally includes in the data signal a third identifier indicating an imaging azimuth of the monitoring camera.
  4.  The safety support system according to claim 3,
     wherein the user device further includes an image display unit, and
     wherein the information processing unit is preconfigured with icons or symbols respectively corresponding to the plurality of values defined for the first identifier and, for each of the N pairs of first and second identifiers, causes the image display unit to display the icon or symbol corresponding to the first identifier at coordinates corresponding to the second identifier.
  5.  The safety support system according to claim 3,
     wherein the user device further includes an audio output unit or a vibration unit, and
     wherein the information processing unit controls the audio output unit or the vibration unit based on the N pairs of first and second identifiers.
  6.  The safety support system according to claim 3,
     wherein the image recognition range in the first process is arbitrarily settable.
  7.  The safety support system according to claim 3,
     wherein the user is a pedestrian.
  8.  The safety support system according to claim 3,
     wherein the image processing unit divides the captured image into segments of a predetermined viewing angle and executes the first to third processes for each of the divided captured images, and
     wherein the first wireless communication unit additionally includes, in the data signal for each of the divided captured images, the third identifier for that divided captured image.
  9.  A safety support system comprising:
     a plurality of monitoring devices, each provided alongside a curve mirror and including a monitoring camera, an image processing unit, and a first wireless communication unit; and
     a user device carried by a user and including a second wireless communication unit and an information processing unit,
     wherein the image processing unit of each of the plurality of monitoring devices executes:
     a first process of detecting, in an image captured by the monitoring camera, N (N is an integer of 1 or more) moving bodies present within a predetermined image recognition range;
     a second process of determining, for each of the N moving bodies, the type of the moving body and generating a first identifier having a value, selected from a plurality of predefined values, corresponding to the determination result; and
     a third process of determining, for each of the N moving bodies, a distance from a predetermined reference position based on coordinates in the captured image, and generating a second identifier having a value, selected from a plurality of predefined values, corresponding to the determination result,
     wherein the first wireless communication unit of each of the plurality of monitoring devices transmits a data signal containing the N pairs of first and second identifiers generated for the N moving bodies detected by its corresponding image processing unit,
     wherein the second wireless communication unit receives the plurality of data signals respectively transmitted from the first wireless communication units of the plurality of monitoring devices, and
     wherein the information processing unit recognizes a presence state of the moving bodies based on the N pairs of first and second identifiers contained in each of the plurality of data signals and performs predetermined processing for notifying the user of the presence state of the moving bodies.
  10.  The safety support system according to claim 9,
     wherein the information processing unit further includes an azimuth detection unit that detects the user's direction of travel, and
     wherein the first wireless communication unit of each of the plurality of monitoring devices additionally includes in its data signal a third identifier indicating the imaging azimuth of its corresponding monitoring camera.
  11.  The safety support system according to claim 10,
     wherein the plurality of monitoring devices include first and second monitoring devices, each provided alongside a curve mirror placed at a different intersection,
     wherein the first wireless communication unit of each of the plurality of monitoring devices further transmits its data signal with device identification information added for identifying that monitoring device,
     wherein the second wireless communication unit further monitors the wireless signals respectively transmitted from the first wireless communication units of the first and second monitoring devices and detects the device identification information contained in each wireless signal together with the signal strength of each wireless signal, and
     wherein, while processing the data signal from the first monitoring device, if the signal strength of the wireless signal from the first monitoring device falls a predetermined amount below its peak value, the information processing unit stops processing the data signal from the first monitoring device, waits for the second wireless communication unit to receive a wireless signal containing device identification information different from that of the first monitoring device, and, if the second wireless communication unit receives a wireless signal from the second monitoring device while in this waiting state, processes the data signal from the second monitoring device.
  12.  The safety support system according to claim 10,
     wherein, when the four corners of an intersection are labeled first, second, third, and fourth in clockwise order, two of the plurality of monitoring devices are each provided alongside a curve mirror placed at the first corner,
     wherein one of the two monitoring devices is installed so as to image an area whose entrance to the intersection lies between the second corner and the third corner, and
     wherein the other of the two monitoring devices is installed so as to image an area whose entrance to the intersection lies between the third corner and the fourth corner.
13.  A safety support device comprising a monitoring camera, an image processing unit, and a first wireless communication unit, wherein
     the image processing unit executes:
     a first process of detecting N (N is an integer of 1 or more) moving bodies present within a predetermined image recognition range in a captured image of the monitoring camera;
     a second process of determining, for each of the N moving bodies, the type of the moving body, and generating a first identifier having a value corresponding to the determination result from among a plurality of values defined in advance for the first identifier; and
     a third process of determining, for each of the N moving bodies, the distance from a predetermined reference position based on coordinates in the captured image, and generating a second identifier having a value corresponding to the determination result from among a plurality of values defined in advance for the second identifier, and
     the first wireless communication unit transmits a data signal containing the N sets of first and second identifiers generated for the N moving bodies.
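The identifier scheme recited in claims 13 to 16 can be illustrated with a minimal sketch. All concrete values, names, and field widths below are assumptions for illustration only; the publication defines the first and second identifiers abstractly, as sets of predefined values, and does not specify any encoding.

```python
from dataclasses import dataclass
from enum import IntEnum

# Illustrative value sets; the publication does not enumerate them.
class TypeId(IntEnum):        # first identifier: kind of moving body
    UNKNOWN = 0               # assigned when the type cannot be determined (claim 15)
    PEDESTRIAN = 1
    BICYCLE = 2
    VEHICLE = 3

class DistanceId(IntEnum):    # second identifier: distance band from the reference position
    NEAR = 0
    MID = 1
    FAR = 2

@dataclass
class Detection:
    type_id: TypeId
    distance_id: DistanceId

def build_data_signal(detections):
    """Pack N (first, second) identifier pairs into one byte payload:
    a leading count N, then one byte per moving body (4 bits each field)."""
    payload = bytearray([len(detections)])
    for d in detections:
        payload.append((d.type_id << 4) | d.distance_id)
    return bytes(payload)

# Two detected moving bodies: a nearby vehicle and a far object of unknown type.
signal = build_data_signal([
    Detection(TypeId.VEHICLE, DistanceId.NEAR),
    Detection(TypeId.UNKNOWN, DistanceId.FAR),
])
```

The single-byte packing is only one plausible choice; the claims require only that the N identifier pairs travel together in one data signal from the first wireless communication unit.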
14.  The safety support device according to claim 13, wherein
     the safety support device is provided alongside a curve mirror.
15.  The safety support device according to claim 14, wherein
     the plurality of values defined for the first identifier includes a value assigned when the type of a moving body cannot be determined.
16.  The safety support device according to claim 15, wherein
     the first wireless communication unit further adds, to the data signal, a third identifier indicating an imaging direction of the monitoring camera, and transmits the data signal.
17.  The safety support device according to claim 16, wherein
     the image processing unit divides the captured image by predetermined viewing angle and executes the first to third processes for each of the resulting plurality of divided captured images, and
     the first wireless communication unit further adds, to the data signal for each of the plurality of divided captured images, the third identifier for that divided captured image, and transmits the data signals.
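The per-viewing-angle division recited in claim 17 can likewise be sketched. The linear pixel-to-angle mapping, the degree values, and the choice of sector-center bearing as the third identifier are all illustrative assumptions, not details from the publication.

```python
def split_by_viewing_angle(image_width_px, total_fov_deg, sector_fov_deg, base_azimuth_deg=0):
    """Divide a captured image into fixed viewing-angle sectors.

    Returns one ((x0, x1), azimuth) tuple per sector: the horizontal pixel
    range to run the first to third processes on, and a center bearing usable
    as the third identifier (imaging direction) for that sub-image.
    Assumes the camera maps angle to pixels linearly across the image.
    """
    n = total_fov_deg // sector_fov_deg
    sectors = []
    for i in range(n):
        # Integer pixel bounds of sector i (exact, no floating-point rounding).
        x0 = image_width_px * i * sector_fov_deg // total_fov_deg
        x1 = image_width_px * (i + 1) * sector_fov_deg // total_fov_deg
        # Bearing of the sector center relative to the camera's base azimuth.
        azimuth = base_azimuth_deg + i * sector_fov_deg + sector_fov_deg / 2
        sectors.append(((x0, x1), azimuth))
    return sectors

# e.g. a 1280-pixel-wide image covering 120 degrees, split into 30-degree sectors
sectors = split_by_viewing_angle(1280, 120, 30)
```

Each sector's detections would then be packed into their own data signal together with that sector's third identifier, per claim 17.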
PCT/JP2013/068553 2013-07-05 2013-07-05 Safety assistance system and safety assistance device WO2015001677A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/068553 WO2015001677A1 (en) 2013-07-05 2013-07-05 Safety assistance system and safety assistance device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2013/068553 WO2015001677A1 (en) 2013-07-05 2013-07-05 Safety assistance system and safety assistance device

Publications (1)

Publication Number Publication Date
WO2015001677A1 true WO2015001677A1 (en) 2015-01-08

Family

ID=52143292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/068553 WO2015001677A1 (en) 2013-07-05 2013-07-05 Safety assistance system and safety assistance device

Country Status (1)

Country Link
WO (1) WO2015001677A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0567298A (en) * 1991-09-06 1993-03-19 Omron Corp Device for reporting object approach
JPH11161888A (en) * 1997-11-28 1999-06-18 Hitachi Denshi Ltd Itv display method/device for monitoring important traffic point
JP2006228004A (en) * 2005-02-18 2006-08-31 Fuji Photo Film Co Ltd Drive support system
JP2007102577A (en) * 2005-10-05 2007-04-19 Kawasaki Heavy Ind Ltd Information providing device and traveling support system using the same
JP2007156754A (en) * 2005-12-02 2007-06-21 Aisin Aw Co Ltd Intervehicular communication system
JP2008123367A (en) * 2006-11-14 2008-05-29 Denso Corp Communication device used for inter-vehicle communication and program for communication device
JP2009301494A (en) * 2008-06-17 2009-12-24 Sumitomo Electric Ind Ltd Image processing unit and image processing method
JP2010055157A (en) * 2008-08-26 2010-03-11 Panasonic Corp Intersection situation recognition system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3050710A1 (en) * 2016-04-28 2017-11-03 Peugeot Citroen Automobiles Sa METHOD AND DEVICE FOR ASSISTING THE DRIVING OF A MANEUVERING VEHICLE FOR PARKING IN A PARKING
JP2019204393A (en) * 2018-05-25 2019-11-28 アルパイン株式会社 Image processing device and image processing method
JP7021001B2 (en) 2018-05-25 2022-02-16 アルパイン株式会社 Image processing device and image processing method
WO2021140621A1 (en) * 2020-01-09 2021-07-15 三菱電機株式会社 Information generation device, warning device, information generation method, warning method, information generation program, and warning program
EP3913600A1 (en) * 2020-05-19 2021-11-24 Beijing Baidu Netcom Science And Technology Co., Ltd. Information processing method and apparatus for vehicle driving on curve
US11498583B2 (en) 2020-05-19 2022-11-15 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Information processing method for vehicle driving on curve, electronic device and storage medium
WO2022163544A1 (en) * 2021-01-26 2022-08-04 京セラ株式会社 Observation device and observation method
WO2022239709A1 (en) * 2021-05-10 2022-11-17 京セラ株式会社 Observation device

Similar Documents

Publication Publication Date Title
JP6840240B2 (en) Dynamic route determination for autonomous vehicles
US11562651B2 (en) Autonomous vehicle notification system
US11619998B2 (en) Communication between autonomous vehicle and external observers
CN108417087B (en) Vehicle safe passing system and method
US11092456B2 (en) Object location indicator system and method
US11155268B2 (en) Utilizing passenger attention data captured in vehicles for localization and location-based services
US20180330610A1 (en) Traffic accident warning method and traffic accident warning apparatus
KR20200106131A (en) Operation of a vehicle in the event of an emergency
US20220013008A1 (en) System and method for using v2x and sensor data
WO2015001677A1 (en) Safety assistance system and safety assistance device
US20100020169A1 (en) Providing vehicle information
CN111724616B (en) Method and device for acquiring and sharing data based on artificial intelligence
US20140063196A1 (en) Comprehensive and intelligent system for managing traffic and emergency services
WO2012164601A1 (en) Mobile body navigation device, and mobile body navigation system
KR20160122368A (en) Method and Apparatus for image information of car navigation to Improve the accuracy of the location using space information
US20220032907A1 (en) Vehicle management system, management method, and program
US20200211379A1 (en) Roundabout assist
US10699576B1 (en) Travel smart collision avoidance warning system
US20220200701A1 (en) System and method for determining actions of a vehicle by visible light communication
KR102366042B1 (en) Apparatus for artificial intelligence based safe driving system and operation method thereof
US20230398866A1 (en) Systems and methods for heads-up display
WO2022195798A1 (en) Evacuation route guidance system, evacuation route creation method, and program-recording medium
US11180090B2 (en) Apparatus and method for camera view selection/suggestion
US20240054779A1 (en) Systems and methods for finding group members via augmented reality
KR20240059020A (en) Method for providing stereoscopic sound alarm service through relative position conversion of moving objects, and apparatus and system therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13888673

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/06/2016)

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 13888673

Country of ref document: EP

Kind code of ref document: A1