WO2017130368A1 - Monitoring device and program - Google Patents

Monitoring device and program

Info

Publication number
WO2017130368A1
WO2017130368A1 (PCT/JP2016/052579)
Authority
WO
WIPO (PCT)
Prior art keywords
display
captured image
area
region
mode
Prior art date
Application number
PCT/JP2016/052579
Other languages
English (en)
Japanese (ja)
Inventor
彰 木下
征規 小浦
Original Assignee
富士通周辺機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通周辺機株式会社 filed Critical 富士通周辺機株式会社
Priority to PCT/JP2016/052579 priority Critical patent/WO2017130368A1/fr
Publication of WO2017130368A1 publication Critical patent/WO2017130368A1/fr

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/26Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/28Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a monitoring device and a program.
  • a monitoring device has been proposed in which a camera is provided in a vehicle and an image taken by the camera is displayed on a display provided in the vehicle (see, for example, Patent Documents 1, 2, 3, and 4).
  • Since the camera captures the rear of the vehicle, it is possible to support safety confirmation behind the vehicle by displaying the captured image of the camera on the display.
  • For the rear image taken when the vehicle moves backward, for example, it is desirable to capture more of the ground side than in the rear image taken when the vehicle moves forward, because obstacles near the ground can then be detected.
  • A monitoring device has been proposed that uses a motor to switch the shooting direction of the camera that shoots the rear of the vehicle from the upward direction to the downward direction when the vehicle moves backward (see, for example, Patent Document 3), and a monitoring device that compresses the captured image in the vertical direction has also been proposed.
  • In the former monitoring device, a mechanism such as a motor for switching the shooting direction of the camera must be provided in addition to the sensor and the camera. In the case of the monitoring device that compresses the image captured by the camera in the vertical direction as well, a mechanism for changing the tilt angle of the camera is provided to change the shooting direction of the camera.
  • Accordingly, an object of the present invention is to provide a monitoring device and a program that can display, with a simple configuration, an image equivalent to one obtained by changing the shooting direction of the camera.
  • According to one aspect, there is provided a monitoring device including: a camera that captures an image behind the vehicle; a processor that processes data of the captured image of the camera, extracts a first, upper region of the entire region of the captured image in a first mode, and extracts a second, lower region of the entire region of the captured image in a second mode; and a display, wherein the size of the entire region of the captured image is larger than the size that can be displayed in the display region of the display.
  • an image equivalent to changing the shooting direction of the camera can be displayed with a simple configuration.
  • The disclosed monitoring device and program perform processing on data of a captured image of a camera that captures an image behind the vehicle: in the first mode, the first, upper region is extracted from the entire region of the captured image, and in the second mode, the second, lower region is extracted from the entire region of the captured image.
  • the display displays captured image data in the first area in the first mode, and displays captured image data in the second area in the second mode.
  • the size of the entire area of the captured image is larger than the size that can be displayed in the display area of the display.
  • FIG. 1 is a block diagram illustrating an example of a monitoring device according to an embodiment.
  • a monitoring device 1 illustrated in FIG. 1 includes a camera 2, a processing device 3, and a display 4.
  • the camera 2 is provided in a vehicle (not shown) and captures an image behind the vehicle.
  • the camera 2 can be formed by a digital camera, for example, and may include a wide angle lens.
  • the photographed image of the camera 2 refers to an image photographed within a range that falls within the field of view of the camera 2. There is no particular mechanism for mechanically changing the shooting direction of the camera 2. For this reason, when the camera 2 is attached to the vehicle, the position of the camera 2 is fixed, and the shooting direction of the camera 2 is also fixed.
  • The processing device 3 processes the captured image data of the camera 2; in the first mode of the monitoring device 1 (or the processing device 3), it extracts the first, upper region from the entire region of the captured image, and in the second mode of the monitoring device 1 (or the processing device 3), it extracts the second, lower region from the entire region of the captured image.
  • Here, the upper part of the captured image refers to the upward direction along the axis orthogonal to the horizontal scanning lines, and the lower part of the captured image refers to the downward direction along that axis.
  • The first region and the second region may partially overlap, but within the entire region of the captured image, the upper end of the first region is located above the upper end of the second region, and the lower end of the second region is located below the lower end of the first region.
  • the first region and the second region have the same shape and the same area in this example.
  • However, due to a restriction caused by at least one of the shooting range of the camera 2 and the display region of the display 4, the first region and the second region may differ from each other in at least one of shape and area.
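The geometric relationship between the two regions can be captured in a short check. The sketch below (Python; the function name and the coordinate convention are illustrative, not from the patent) encodes the constraints just described: both regions lie within the captured image, the first region's upper end is above the second region's upper end, and the second region's lower end is below the first region's lower end.

```python
# Illustrative check of the region constraints described above.
# A region is a (top, bottom) row range within the captured image,
# with row indices increasing downward (top-inclusive, bottom-exclusive).

def regions_valid(first, second, image_height):
    f_top, f_bot = first
    s_top, s_bot = second
    # Both regions must lie inside the full captured image.
    in_image = (0 <= f_top < f_bot <= image_height
                and 0 <= s_top < s_bot <= image_height)
    # The first region's upper end lies above the second region's upper end,
    # and the second region's lower end lies below the first region's lower end.
    ordered = f_top < s_top and s_bot > f_bot
    return in_image and ordered

# Same shape and area, with partial overlap allowed:
assert regions_valid(first=(0, 480), second=(240, 720), image_height=720)
```

Swapping the two regions fails the check, since the ordering constraint is then violated.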
  • When the processing device 3 receives an external signal indicating that the vehicle is reversing (hereinafter also referred to as a "reverse signal"), it transitions from the first mode to the second mode in response to the reverse signal.
  • The reverse signal can be generated in a well-known manner, for example, when the vehicle transmission (or a shift lever or the like) is in the reverse (i.e., backing) position.
  • When the processing device 3 stops receiving the reverse signal, it transitions from the second mode to the first mode in response to the disappearance of the reverse signal.
  • the reverse signal may be input to the processing device 3 via, for example, a vehicle CAN (Controller Area Network).
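The mode behavior described above amounts to a small piece of state logic that follows the presence of the reverse signal. A minimal sketch, assuming a boolean "reverse signal present" input (all names are illustrative; the actual device would read the signal from the vehicle, e.g. over CAN):

```python
# Minimal sketch of the first/second mode logic (illustrative names).

FIRST_MODE = "first"    # no reverse signal: extract the upper region
SECOND_MODE = "second"  # reverse signal present: extract the lower region

class ModeTracker:
    def __init__(self):
        self.mode = FIRST_MODE

    def on_reverse_signal(self, present):
        # Transition to the second mode while the reverse signal is received,
        # and back to the first mode when the reverse signal disappears.
        self.mode = SECOND_MODE if present else FIRST_MODE
        return self.mode

tracker = ModeTracker()
assert tracker.on_reverse_signal(True) == SECOND_MODE
assert tracker.on_reverse_signal(False) == FIRST_MODE
```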
  • the display 4 displays captured image data in the first area in the first mode, and displays captured image data in the second area in the second mode.
  • the display 4 may be provided at a position inside the vehicle such as an instrument panel or inside the windshield, or may be provided at a position outside the vehicle such as a side mirror.
  • the processing device 3 can be formed by a processor 31 such as a CPU (Central Processing Unit) and a memory 32, for example.
  • the processor 31 executes a process of displaying a photographed image of the camera 2 by executing a program stored in the memory 32, for example.
  • The memory 32 stores various data, such as parameters used by the program, intermediate results of operations executed by the processor 31, data of captured images, the setting of the area to be extracted from the captured image and displayed, and the settings of the first and second areas.
  • the memory 32 can be formed by various storage devices such as a semiconductor storage device, a magnetic recording medium, an optical recording medium, and a magneto-optical recording medium, or various recording media.
  • the memory 32 can form an example of a computer-readable storage medium that stores a program executed by the processor 31.
  • FIG. 2 is a flowchart for explaining an example of processing of the monitoring device in the first embodiment.
  • the processing shown in FIG. 2 can be executed by executing the program stored in the memory 32 by the processor 31 shown in FIG.
  • In step S1, the processor 31 acquires captured image data from the camera 2 and stores it in the memory 32.
  • In step S2, the processor 31 sets an area to be extracted from the entire area of the captured image and displayed on the display 4.
  • The area set in step S2 is an arbitrary area in the captured image having a size that matches the display area of the display 4.
  • The region set in step S2 may be, for example, the first region of the captured image, the second region of the captured image, or the central region of the captured image.
  • In step S3, the processor 31 determines whether or not a reverse signal has been received. If the determination result is NO, the process advances to step S4; if the determination result is YES, the process advances to step S8.
  • In step S4, the processor 31 determines whether or not the set area matches the first area displayed on the display 4 in the first mode. If the determination result is NO, the process advances to step S5; if the determination result is YES, the process advances to step S6.
  • the first area is preset by default, for example, and stored in the memory 32.
  • In step S5, the processor 31 moves the position of the area set in step S2 upward by one step within the entire area of the captured image, and the process advances to step S6.
  • The process of moving the position of the set area upward by one step within the entire area of the captured image can be executed by a known method; for example, the position of the set area is moved upward by the number of one or more horizontal scanning lines.
  • In step S6, the processor 31 extracts the data of the captured image within the set area from the captured image data stored in the memory 32.
  • the process of extracting the data of the captured image in the set area from the captured image data stored in the memory 32 can be executed by a known method.
  • In step S7, the processor 31 outputs the captured image data in the area extracted in step S6 to the display 4 to display it, and the process returns to step S1. Therefore, while no reverse signal is received, the position of the set area is gradually shifted upward until the set area matches the first area displayed on the display 4 in the first mode.
  • In step S8, the processor 31 determines whether or not the set area matches the second area displayed on the display 4 in the second mode. If the determination result is NO, the process advances to step S9; if the determination result is YES, the process advances to step S6.
  • the second area is preset by default, for example, and stored in the memory 32.
  • In step S9, the processor 31 moves the position of the area set in step S2 downward by one step within the entire area of the captured image, and the process advances to step S6.
  • The process of moving the position of the set area downward by one step within the entire area of the captured image can be executed by a known method; for example, the position of the set area is moved downward by the number of one or more horizontal scanning lines. Therefore, while the reverse signal is received, the position of the set area is gradually shifted downward until the set area matches the second area displayed on the display 4 in the second mode.
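The loop of steps S3 to S9 can be summarized as moving the set area one step per frame toward a target position that depends on the reverse signal. A sketch under illustrative assumptions (the step size in scanning lines and the region coordinates are made up for the example; row indices increase downward):

```python
# Per-frame update of the set area's top row, as in FIG. 2 (steps S3-S9).
STEP = 8  # illustrative: number of horizontal scanning lines moved per frame

def update_region_top(current_top, first_top, second_top, reverse_signal):
    # S3: pick the target region depending on the reverse signal.
    target = second_top if reverse_signal else first_top
    if current_top < target:
        # S9: shift the set area downward by one step (toward the second area).
        return min(current_top + STEP, target)
    if current_top > target:
        # S5: shift the set area upward by one step (toward the first area).
        return max(current_top - STEP, target)
    return current_top  # S4/S8: set area already matches the target area

# Holding the reverse signal steps the area down until it reaches the
# second (lower) region.
top = 0
for _ in range(40):
    top = update_region_top(top, first_top=0, second_top=240, reverse_signal=True)
assert top == 240
```

Because the update is re-evaluated every frame, a change in the reverse signal mid-transition simply redirects the drift, matching the behavior described for the mode transitions.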
  • In this way, while the position of the set area is gradually shifted downward from the position of the first area, the captured image data in the area is sequentially extracted, and finally the captured image data in the second area is extracted, so that the display on the display 4 gradually transitions from the captured image in the first area to the captured image in the second area.
  • If the captured image in the first area were switched directly or instantaneously to the captured image in the second area, the displayed image would look unnatural, and if the driver of the vehicle missed the switch, it would be difficult to recognize the transition from the first mode to the second mode. Because the display on the display 4 instead changes gradually from the captured image in the first area to the captured image in the second area, the displayed image changes continuously and does not look unnatural, and the driver of the vehicle can recognize the transition from the first mode to the second mode more reliably.
  • Conversely, while the position of the set area is gradually shifted upward from the position of the second area, the captured image data in the area is sequentially extracted, and finally the captured image data in the first area is extracted, so that the display on the display 4 gradually transitions from the captured image in the second area to the captured image in the first area. If the captured image in the second area were switched directly or instantaneously to the captured image in the first area, the displayed image would look unnatural, and if the driver missed the switch, it would be difficult to recognize the transition from the second mode to the first mode. Because the display instead changes gradually and continuously, the driver of the vehicle can recognize the transition from the second mode to the first mode more reliably.
  • The first area and the second area each have a size that matches the display area of the display 4. In this example, their sizes are the same as the display area of the display 4, but they may also be smaller than the display area of the display 4.
  • both the first area and the second area can be shifted in a direction orthogonal to the horizontal scanning lines in units of a certain number of horizontal scanning lines.
  • Both the captured image in the first area and the captured image in the second area displayed on the display 4 can support safety confirmation behind the vehicle, and can be used, for example, in the same manner as the view in a rearview mirror.
  • The captured image in the first area displayed on the display 4 is set so that, for example, an image at a position separated by a certain distance D1 from the rear of the vehicle can be confirmed, and is therefore suitable for assisting safety confirmation behind the vehicle when the vehicle moves forward.
  • The captured image in the second area displayed on the display 4 is set so that, for example, an image within a certain distance D2 (D2 < D1) from the rear of the vehicle can be confirmed. It therefore includes more image information near the ground behind the vehicle, and is suitable for assisting safety confirmation behind the vehicle when the vehicle moves backward.
  • For example, the first area extracted may extend from the upper end of the entire area of the captured image to a certain distance below it, and the second area extracted may extend from the lower end of the entire area of the captured image to a certain distance above it.
  • More generally, the first area only needs to correspond to a portion of the entire area of the captured image above the second area (the side farther from the ground in the captured image), and the second area only needs to correspond to a portion below the first area (the side closer to the ground in the captured image).
  • For example, the first area is a region extending a certain distance downward from the upper end of the entire captured image, or from a position near but below the upper end; the second area is a region extending a certain distance upward from the lower end of the entire captured image, or from a position near but above the lower end.
  • Data of the captured image from the camera 2 is always input to the processing device 3, and the captured image in the area processed (that is, extracted) by the processing device 3 is displayed on the display 4.
  • For example, when the vehicle is in a specific state, such as a state where the engine is operating, the processing shown in FIG. 2 is continuously executed. Even if the reception state of the reverse signal changes while the mode of the monitoring device 1 (or the processing device 3) is transitioning between the first mode and the second mode, the display on the display 4, that is, the captured image in the extracted area, changes according to the reception state of the reverse signal.
  • FIG. 3 is a perspective view showing an example of a vehicle.
  • the vehicle 51 illustrated in FIG. 3 is a passenger car, for example, but the vehicle 51 is not limited to a passenger car, and may be a large vehicle such as a truck or a bus.
  • In FIG. 3, two cameras 2 are provided on the side mirrors 52 on both sides of the vehicle 51, but the number and positions of the cameras 2 are not limited to the example in FIG. 3. It is sufficient that at least one camera 2 is provided, and the position of the camera 2 is not particularly limited as long as the area behind the vehicle 51 can be photographed from that position.
  • the two displays 4 are provided in the upper part inside the windshield 53 of the vehicle 51, but the number and positions of the displays 4 are not limited to the example of FIG.
  • The display 4 may be provided at a position inside the vehicle 51 such as the instrument panel, or at a position outside the vehicle such as a side mirror 52. It is sufficient that at least one display 4 is provided, and the position of the display 4 is not particularly limited as long as the driver of the vehicle 51 can visually check it. For example, instead of providing two displays 4, the display screen of one display 4 may be divided into two, the image data from the two cameras 2 may be output to the one display 4, and the image captured by each camera 2 may be displayed in the corresponding divided area of the one display 4. A second embodiment using two cameras and two displays will be described later.
  • FIG. 4 is a diagram for explaining an example of an image captured by the camera and an image displayed on the display.
  • A region 201A surrounded by a broken line is extracted by the processing device 3 from the captured image 201 of the camera 2 shown on the left side of FIG. 4, and is displayed in the display region 401 of the display 4 shown on the right side of FIG. 4.
  • The area 201A has a size that matches the display area 401 of the display 4; in this example, the size of the area 201A is the same as the size of the display area 401. That is, the size of the entire area of the captured image 201 is larger than the size that can be displayed in the display area 401 of the display 4.
  • In this example, the width of the area 201A along the horizontal direction (that is, the direction parallel to the horizontal scanning lines) is narrower than the width of the captured image 201, but it may also be the same as the width of the captured image 201. That is, the width of the area 201A may be equal to or smaller than the width of the captured image 201.
  • the length of the region 201A along the vertical direction (that is, the direction orthogonal to the horizontal scanning line) is shorter than the length of the captured image 201.
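Because the full captured image is larger than the display area, "extracting" an area such as 201A amounts to cropping a display-sized sub-array at a chosen vertical position. A sketch using NumPy with illustrative sizes (the actual resolutions are not specified in the text):

```python
import numpy as np

# Illustrative sizes: a full frame taller (and here wider) than the display.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # captured image 201
DISPLAY_H, DISPLAY_W = 480, 800                   # display area 401

def extract_region(frame, top, left=0):
    # Crop a display-sized region; the region must lie inside the frame.
    h, w = frame.shape[:2]
    assert 0 <= top <= h - DISPLAY_H and 0 <= left <= w - DISPLAY_W
    return frame[top:top + DISPLAY_H, left:left + DISPLAY_W]

upper = extract_region(frame, top=0)                # first region (forward travel)
lower = extract_region(frame, top=720 - DISPLAY_H)  # second region (reversing)
assert upper.shape == lower.shape == (DISPLAY_H, DISPLAY_W, 3)
```

Since NumPy slicing returns a view, the "extraction" here costs no copy; a real implementation on an embedded device would similarly just address a window into the frame buffer.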
  • When the area 201A is set on the upper side of the captured image 201 and used as the first area, set so that an image at a certain distance D1 from the rear of the vehicle 51 can be confirmed, the captured image in the area 201A is suitable for supporting safety confirmation behind the vehicle 51 when the vehicle 51 moves forward, for example.
  • When the position of the area 201A is set on the lower side of the captured image 201 and used as the second area, set so that an image within a certain distance D2 (D2 < D1) from the rear of the vehicle 51 can be confirmed, the captured image in the area 201A is suitable for supporting safety confirmation behind the vehicle 51 when the vehicle 51 moves backward, for example.
  • The image information of the captured image 201 includes another vehicle 59 behind the vehicle 51 on which the monitoring device 1 is mounted, a road surface 81 (an example of the ground) below the vehicle 59, and a background 82.
  • FIG. 5 is a diagram illustrating an example of a captured image in the first area displayed on the display in the first mode.
  • a captured image 201-1 in the first region shown in FIG. 5 includes another vehicle 59 behind the vehicle 51 on which the monitoring device 1 is mounted, a road surface 81 just below the vehicle 59, and a background 82.
  • FIG. 6 is a diagram illustrating an example of a captured image in an area displayed on the display during the transition between the first mode and the second mode.
  • The captured image 201-i in the region shown in FIG. 6 includes the other vehicle 59 behind the vehicle 51 on which the monitoring device 1 is mounted, the road surface 81 directly below and in front of the vehicle 59, a part of the obstacle 81A, and the background 82.
  • FIG. 6 shows the captured image 201-i in one of a plurality of areas between the first area and the second area.
  • FIG. 7 is a diagram illustrating an example of a captured image in the second area displayed on the display in the second mode.
  • The captured image 201-2 in the second area shown in FIG. 7 includes the other vehicle 59 behind the vehicle 51 on which the monitoring device 1 is mounted, the road surface 81 directly below the vehicle 59, the obstacle 81A, and the background 82.
  • Since the first area is set so that an image at a position separated from the rear of the vehicle 51 by the certain distance D1 can be confirmed, the other vehicle 59, the road surface 81 directly below it, and the background 82 can be recognized in the captured image 201-1 in the first area shown in FIG. 5. The captured image 201-1 in the first area is therefore suitable for supporting safety confirmation behind the vehicle 51 when the vehicle 51 moves forward, for example.
  • The second area is set so that an image at a position within a certain distance D2 (D2 < D1) from the rear of the vehicle 51 can be confirmed. Compared with the captured image 201-1 in the first area shown in FIG. 5 and the captured image 201-i in the area shown in FIG. 6, the captured image 201-2 in the second area contains more image information near the ground behind the vehicle, and is suitable for supporting safety confirmation behind the vehicle 51 when the vehicle 51 moves backward.
  • The captured image 201-1 in the first area shown in FIG. 5 does not include the obstacle 81A present on the road surface 81 directly behind the vehicle 51, and the captured image 201-i in the area shown in FIG. 6 includes only a part of the obstacle 81A on the road surface 81.
  • In contrast, the entire area of the captured image of the camera 2 includes the obstacle 81A that exists on the road surface 81 directly behind the vehicle 51. Therefore, the captured image 201-2 in the second area shown in FIG. 7, extracted from the entire area of the captured image of the camera 2, includes the whole obstacle 81A on the road surface 81, and an image immediately behind the vehicle 51 that cannot be recognized in the captured image 201-1 in the first area shown in FIG. 5 can be recognized, which is particularly suitable for assisting safety confirmation behind the vehicle 51 when the vehicle 51 is moving backward.
  • FIG. 8 is a block diagram showing an example of a monitoring device in the second embodiment.
  • a monitoring device 1A shown in FIG. 8 includes cameras 2-1, 2-2, a processing device 3A, and displays 4-1, 4-2.
  • The cameras 2-1 and 2-2 are provided in a vehicle (not shown); the camera 2-1 captures an image on the rear right side of the vehicle, and the camera 2-2 captures an image on the rear left side of the vehicle.
  • When the cameras 2-1 and 2-2 are attached to the vehicle, the positions of the cameras are fixed, and the shooting directions of the cameras are also fixed.
  • the processing device 3A includes a processor 31A such as a CPU and a memory 32A.
  • the processing device 3A executes the same processing as that of the processing device 3 in the first embodiment on the data of the captured images from the cameras 2-1 and 2-2.
  • The display 4-1 displays the captured image of the camera 2-1 processed by the processing device 3A, and the display 4-2 displays the captured image of the camera 2-2 processed by the processing device 3A, so that safety confirmation behind the vehicle can be supported more reliably.
  • FIG. 9 is a flowchart for explaining an example of the processing of the monitoring device in the second embodiment. In FIG. 9, the same steps as those in FIG. 2 are denoted by the same reference numerals, and their description is omitted.
  • the processing shown in FIG. 9 can be executed by the processor 31A shown in FIG. 8 executing a program stored in the memory 32A.
  • The processor 31A determines whether or not the captured image data in the set area, extracted in step S6 from the captured image data stored in the memory 32A, is data of the image captured by the camera 2-1; if so, the process advances to step S18, and otherwise to step S19.
  • In step S18, the processor 31A outputs the captured image data in the region extracted in step S6 to the display 4-1 to display the extracted captured image, and the process returns to step S1.
  • In step S19, the processor 31A outputs the captured image data in the region extracted in step S6 to the display 4-2 to display the extracted captured image, and the process returns to step S1.
  • Data of the captured images from the cameras 2-1 and 2-2 is always input to the processing device 3A, and the captured images in the areas processed (that is, extracted) by the processing device 3A are displayed on the corresponding displays 4-1 and 4-2. For this reason, for example, when the vehicle is in a specific state, such as a state where the engine is operating, the processing shown in FIG. 9 is continuously executed. Even if the reception state of the reverse signal changes while the mode of the monitoring device 1A (or the processing device 3A) is transitioning between the first mode and the second mode, the display on the displays 4-1 and 4-2, that is, the captured image in the extracted area, changes according to the reception state of the reverse signal. As described above, in the processing shown in FIG. 9, the processing of the captured image data from the two cameras 2-1 and 2-2 is performed by the single processing device 3A: in the first mode, the captured image data in the first area is displayed on the corresponding displays 4-1 and 4-2, and in the second mode, the captured image data in the second area is displayed on the corresponding displays 4-1 and 4-2.
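The second embodiment's routing (one processing device, two camera streams, two displays) can be sketched as follows; the frame representation and the camera/display names are illustrative stand-ins, not from the patent:

```python
# Sketch of the second embodiment's routing: one processing device applies the
# same region extraction to both camera streams and sends each result to its
# own display. A "frame" here is simply a list of rows.

def process_frames(frames_by_camera, region_top, display_h):
    """frames_by_camera: e.g. {'camera_2_1': frame, 'camera_2_2': frame}.
    Returns the cropped rows keyed by the display that should show them."""
    routing = {"camera_2_1": "display_4_1", "camera_2_2": "display_4_2"}
    out = {}
    for cam, frame in frames_by_camera.items():
        # Same extraction for both streams; only the output display differs.
        out[routing[cam]] = frame[region_top:region_top + display_h]
    return out

rows = [[r] for r in range(720)]  # toy 720-row frame
shown = process_frames({"camera_2_1": rows, "camera_2_2": rows},
                       region_top=240, display_h=480)
assert len(shown["display_4_1"]) == 480 and shown["display_4_2"][0] == [240]
```

Since both streams share one `region_top`, the two displays transition between the first and second areas in lockstep, as the flowchart in FIG. 9 describes.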
  • According to each of the embodiments described above, an image equivalent to one obtained by changing the shooting direction of the camera can be displayed by a monitoring device having a simple configuration, using the camera provided in the vehicle and without providing a mechanism for mechanically changing the shooting direction of the camera. In addition, since such an image can be displayed with a simple configuration, it can be displayed by a monitoring device having an inexpensive configuration.
  • In the case of a monitoring device that compresses the image captured by the camera in the vertical direction, the displayed compressed image may become unnatural depending on the compression method, for example, a method of thinning out scanning lines.
  • In contrast, in each of the embodiments described above, a desired area having a size corresponding to the display area of the display is extracted from the captured image and displayed without compressing the image, so the displayed image does not look unnatural, and the complicated arithmetic processing for compressing images can be omitted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A monitoring device is disclosed that comprises: a camera that captures an image of an area behind a vehicle; a processing unit that processes data of the image captured by the camera, extracts a first region that is an upper part of the entire region of the captured image in a first mode, and extracts a second region that is a lower part of the entire region of the captured image in a second mode; and a display device that displays the data of the captured image in the first region in the first mode, and displays the data of the captured image in the second region in the second mode, the size of the entire region of the captured image being larger than a size that can be displayed in a display region of the display device.
PCT/JP2016/052579 2016-01-29 2016-01-29 Monitoring device and program WO2017130368A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/052579 WO2017130368A1 (fr) 2016-01-29 2016-01-29 Monitoring device and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/052579 WO2017130368A1 (fr) 2016-01-29 2016-01-29 Monitoring device and program

Publications (1)

Publication Number Publication Date
WO2017130368A1 true WO2017130368A1 (fr) 2017-08-03

Family

ID=59397916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/052579 WO2017130368A1 (fr) 2016-01-29 2016-01-29 Monitoring device and program

Country Status (1)

Country Link
WO (1) WO2017130368A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009184557A (ja) * 2008-02-07 2009-08-20 Nissan Motor Co Ltd Vehicle periphery monitoring device
JP2010163104A (ja) * 2009-01-16 2010-07-29 Denso Corp Vehicle display device and vehicle peripheral visual field display system including the same
JP2012140106A (ja) * 2011-01-05 2012-07-26 Denso Corp Rear view support system
JP2013129386A (ja) * 2011-12-22 2013-07-04 Toyota Motor Corp Vehicle rear monitoring device


Similar Documents

Publication Publication Date Title
JP5321267B2 (ja) Vehicle image display device and bird's-eye view image display method
JP6077210B2 (ja) Vehicle rear monitoring device
JP5194679B2 (ja) Vehicle periphery monitoring device and video display method
JP6558989B2 (ja) Electronic mirror system for vehicle
EP2424221B1 (fr) Imaging device, imaging system, and imaging method
JP5953824B2 (ja) Vehicle rear view support device and vehicle rear view support method
JP2005311868A (ja) Vehicle periphery visual recognition device
JP5321711B2 (ja) Vehicle periphery monitoring device and video display method
JP7052632B2 (ja) Periphery display device for vehicle
JP5404932B2 (ja) Method and device for the combined visual display of video and distance data of a traffic situation
JP5051263B2 (ja) Rear view system for vehicle
JP2008301091A (ja) Vehicle periphery monitoring device
JP2003219413A (ja) Vehicle rear monitor system and monitor device
JP2018127083A (ja) Image display device
JP2016168877A (ja) Vehicle visual recognition device
JP4679816B2 (ja) Vehicle periphery display control device
JP4945315B2 (ja) Driving support system and vehicle
WO2017130368A1 (fr) Monitoring device and program
JP6602721B2 (ja) Vehicle visual recognition device
CN111186375B (zh) Electronic mirror system for vehicle
CN113051997A (zh) Device for monitoring the surroundings of a vehicle and non-transitory computer-readable medium
JP6555240B2 (ja) Vehicle imaging and display device and vehicle imaging and display program
EP3315366B1 (fr) Imaging and display device for vehicle and recording medium
JP5286811B2 (ja) Parking assist device
JP2009262881A (ja) Vehicle display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16887955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16887955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP