WO2018146997A1 - Three-dimensional object detection device - Google Patents

Three-dimensional object detection device

Info

Publication number
WO2018146997A1
WO2018146997A1 PCT/JP2018/000666 JP2018000666W WO2018146997A1 WO 2018146997 A1 WO2018146997 A1 WO 2018146997A1 JP 2018000666 W JP2018000666 W JP 2018000666W WO 2018146997 A1 WO2018146997 A1 WO 2018146997A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
bird
imaging
overlapping
Prior art date
Application number
PCT/JP2018/000666
Other languages
English (en)
Japanese (ja)
Inventor
一気 尾形
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社 filed Critical 日本電気株式会社
Priority to JP2018566805A priority Critical patent/JPWO2018146997A1/ja
Publication of WO2018146997A1 publication Critical patent/WO2018146997A1/fr

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a three-dimensional object detection device, a three-dimensional object detection method, and a program.
  • Patent Document 1 describes a technique (referred to as a first related technique) for detecting and displaying a three-dimensional object based on two images obtained at different times by a single camera attached to the periphery of a vehicle. Specifically, in the first related technique, correlation processing is performed between the bird's-eye image of the captured current frame and the bird's-eye image of the previous frame shifted by the amount the vehicle moves in one frame time, and a three-dimensional object is thereby detected and displayed.
  • Patent Document 2 describes a technique (referred to as a second related technique) in which, every time the vehicle moves a predetermined distance, a three-dimensional object is detected and displayed from the bird's-eye images of two frames captured at the same timing by two cameras attached around the vehicle. Specifically, based on the two bird's-eye images generated from the two frames captured at the same timing, an image region extraction process, a three-dimensional object candidate region extraction process, and a three-dimensional object region extraction process are performed in this order. In the image region extraction process, a differential filter that enhances contours is applied to each bird's-eye image to extract edges, and noise is removed by binarization and thresholding so that only clear contour images remain.
  • In the three-dimensional object candidate region extraction process, the two bird's-eye images resulting from the image region extraction process are superimposed and their logical product (AND) is taken. Through this processing, overlapping image regions are extracted as three-dimensional object candidate regions.
  • With the second related technique, which detects a three-dimensional object from images captured at the same timing by two cameras, a three-dimensional object existing around the vehicle can be detected even while the vehicle is stopped.
  • However, with the second related technique, regions other than objects such as persons, vehicles, or poles may also be detected as three-dimensional object candidate regions. The number of detected candidate regions therefore increases, the processing load of the subsequent three-dimensional object region extraction process grows accordingly, and as a result the amount of calculation required to detect a three-dimensional object increases.
  • An object of the present invention is to provide a three-dimensional object detection apparatus that solves the above-described problem, that is, the problem that the amount of calculation required to detect a three-dimensional object from a camera image increases.
  • According to one aspect of the present invention, a three-dimensional object detection device is provided. The device includes a detector that detects an object as a first object from a first image obtained by imaging a first monitoring area with a first camera, and detects an object as a second object from a second image obtained by imaging a second monitoring area with a second camera whose imaging area overlaps that of the first camera in an overlapping imaging area; a converter that performs viewpoint conversion on the first image and the second image to create a first bird's-eye image and a second bird's-eye image; and a recognizer that recognizes a three-dimensional object from the overlapping imaging area based on the first object and the second object appearing in the overlapping imaging area in the first bird's-eye image and the second bird's-eye image.
  • According to another aspect, a three-dimensional object detection method includes: detecting an object as a first object from a first image obtained by imaging a first monitoring area with a first camera; detecting an object as a second object from a second image obtained by imaging a second monitoring area with a second camera whose imaging area overlaps that of the first camera in an overlapping imaging area; creating a first bird's-eye image and a second bird's-eye image by performing viewpoint conversion on the first image and the second image; and recognizing a three-dimensional object from the overlapping imaging area based on the first object and the second object appearing in the overlapping imaging area in the first bird's-eye image and the second bird's-eye image.
  • According to yet another aspect, a program causes a computer to function as: a detector that detects an object as a first object from a first image obtained by imaging a first monitoring area with a first camera, and detects an object as a second object from a second image obtained by imaging a second monitoring area with a second camera whose imaging area overlaps that of the first camera in an overlapping imaging area; a converter that performs viewpoint conversion on the first image and the second image to create a first bird's-eye image and a second bird's-eye image; and a recognizer that recognizes a three-dimensional object from the overlapping imaging area based on the first object and the second object appearing in the overlapping imaging area in the first bird's-eye image and the second bird's-eye image.
  • the present invention having the above-described configuration can reduce the amount of calculation required to detect a three-dimensional object from a camera image.
  • A three-dimensional object detection device 100 includes a plurality of cameras 101 and 102 that are installed at different viewpoints and have overlapping fields of view, a control unit 103 that receives and processes, through signal lines, the image signals captured by the cameras 101 and 102, and a display 104 connected to the control unit 103 through a signal line.
  • the cameras 101 and 102 have different angles of view, and are attached around the vehicle, for example, as shown in FIG.
  • the cameras 101 and 102 capture images at a constant cycle and at the same timing.
  • An image signal obtained by one imaging with the cameras 101 and 102 is also called a frame image.
  • Image signals obtained by imaging with the cameras 101 and 102 are input to the control unit 103.
  • the control unit 103 performs object recognition by processing the input image signal, and recognizes whether the recognized object is a three-dimensional object.
  • the control unit 103 can be configured by a computer having a processor such as a microprocessor and a memory, for example.
  • the display 104 is a device such as a liquid crystal display device that presents the processing result of the control unit 103.
  • FIG. 3 is a flowchart showing an example of the operation of the three-dimensional object detection device 100 according to the present embodiment.
  • the operation shown in FIG. 3 can be realized by causing the processor constituting the control unit 103 to execute a program stored in the memory.
  • the operation of the present embodiment will be described with reference to FIG.
  • First, images of the periphery of the vehicle are captured almost simultaneously by the cameras 101 and 102 (step S1).
  • Image signals obtained by being photographed by the cameras 101 and 102 are transmitted to the control unit 103 through signal lines.
  • The control unit 103 performs object recognition on the image captured by the camera 101 using an arbitrary method such as pattern matching and recognizes the object (step S2). Likewise, the control unit 103 performs object recognition by pattern matching or the like on the image captured by the camera 102 and recognizes the object (step S2).
  • the object recognition performed by the control unit 103 may be specific object recognition that recognizes a predetermined specific object such as a person or a pole. Note that the control unit 103 may execute an arbitrary correction process for correcting the mounting conditions and lens aberration of the cameras 101 and 102 before executing object recognition.
  • For example, the image 111 captured by the camera 101 contains an image of the object 105, indicated by reference numeral 113 in FIG. 4, and the image 112 captured by the camera 102 likewise contains an image of the object 105, indicated by reference numeral 114.
  • Using an arbitrary object recognition method such as pattern matching, the control unit 103 recognizes the image 113 in the image 111 as an object and recognizes the image 114 in the image 112 as an object. The control unit 103 then generates circumscribed rectangles 115 and 116 around the images recognized as objects in the images 111 and 112.
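  • A minimal sketch of step S2 follows, using OpenCV's HOG pedestrian detector as one concrete stand-in for the arbitrary pattern-matching recognizer and yielding the circumscribed rectangles (such as 115 and 116); the detector choice and its parameters are illustrative assumptions, not part of the patent.

```python
import cv2

# Pretrained HOG + linear-SVM pedestrian detector (one possible recognizer).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_objects(frame):
    """Return (x, y, w, h) circumscribed rectangles of recognized objects."""
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8))
    return [tuple(map(int, r)) for r in rects]

# rects_111 = detect_objects(image_111)   # rectangles such as 115
# rects_112 = detect_objects(image_112)   # rectangles such as 116
```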
  • the control unit 103 performs viewpoint conversion on the images 111 and 112 and converts them into bird's-eye images such as the images 121 and 122 shown in FIG. 5 (step S3).
  • the viewpoint conversion is an operation for converting a landscape seen from the camera position as if viewed from a certain point in the sky (perspective projection conversion).
  • the control unit 103 may use a lookup table in order to reduce the amount of viewpoint conversion calculation.
  • The lookup table for viewpoint conversion is created by calculating in advance, for each pixel captured by each camera, the coordinates it maps to after viewpoint conversion and storing the results in an array (table); the coordinate transformation can then be performed efficiently by referring to the table instead of repeating the calculation every time.
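  • A minimal sketch of such a lookup-table-based viewpoint conversion follows; the calibration points, output size, and OpenCV-based implementation are illustrative assumptions rather than the patent's actual parameters.

```python
import cv2
import numpy as np

def build_birdseye_lut(src_pts, dst_pts, out_size):
    """Precompute remap tables that send a camera image to a bird's-eye view."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    H_inv = np.linalg.inv(H)
    w, h = out_size
    # Destination pixel grid in homogeneous coordinates.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    dst = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T.astype(np.float64)
    src = H_inv @ dst
    src /= src[2]                                  # back to inhomogeneous coordinates
    map_x = src[0].reshape(h, w).astype(np.float32)
    map_y = src[1].reshape(h, w).astype(np.float32)
    return map_x, map_y                            # the lookup table (computed once)

def to_birdseye(frame, map_x, map_y):
    """Per-frame conversion reduces to a single table lookup (cv2.remap)."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```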
  • In a bird's-eye image, a three-dimensional object has the characteristic of appearing distorted so that it extends radially from the position of the camera. Here, 127 is the position of the camera 101, 128 is the position of the camera 102, 125 is the circumscribed rectangle of the object image 123, and 126 is the circumscribed rectangle of the object image 124.
  • control unit 103 aligns the two bird's-eye images 121 and 122 using parameters obtained from the installation position, height, and angle of the cameras 101 and 102 (step S4).
  • the control unit 103 may use a lookup table for registration in order to reduce the amount of calculation for registration.
  • FIG. 6 shows examples of bird's-eye images 121 and 122 that have been aligned.
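  • A minimal sketch of placing each bird's-eye image on a common, vehicle-centred canvas for this alignment is given below; it assumes the bird's-eye image and the canvas share the same metres-per-pixel scale, that the bird's-eye image origin corresponds to the camera's ground position, and that each camera's installation position and yaw in the vehicle frame are known. All of these are illustrative assumptions.

```python
import cv2
import numpy as np

def place_on_vehicle_canvas(bev, cam_xy_m, cam_yaw_rad, m_per_pix, canvas_size):
    """Warp one camera's bird's-eye image into the shared vehicle-centred frame."""
    w, h = canvas_size
    px_per_m = 1.0 / m_per_pix
    c, s = np.cos(cam_yaw_rad), np.sin(cam_yaw_rad)
    # 2-D rigid transform: rotate by the camera yaw, then translate so the
    # camera's ground position lands at its place on the canvas, whose centre
    # is taken as the vehicle reference point.
    M = np.float32([[c, -s, w / 2 + cam_xy_m[0] * px_per_m],
                    [s,  c, h / 2 + cam_xy_m[1] * px_per_m]])
    return cv2.warpAffine(bev, M, (w, h))

# Images registered this way can be compared pixel-to-pixel in the overlap.
```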
  • The control unit 103 determines whether a plurality of objects (that is, the images 123 and 124, or their circumscribed rectangles 125 and 126) overlap in the overlapping region of the aligned bird's-eye images 121 and 122, and whether each object extends in a direction radiating from its camera position (step S5).
  • If a plurality of objects overlap and each object extends in a direction radiating from its camera position, the control unit 103 recognizes them as one three-dimensional object (step S5). In the example shown in FIG. 6, the object specified by the image 123 or the circumscribed rectangle 125 and the object specified by the image 124 or the circumscribed rectangle 126 overlap each other.
  • Moreover, the object specified by the image 123 or the circumscribed rectangle 125 extends in a direction radiating from the position 127 of the camera 101, and the object specified by the image 124 or the circumscribed rectangle 126 extends in a direction radiating from the position 128 of the camera 102. The control unit 103 therefore recognizes that these two objects are one and the same three-dimensional object.
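  • A minimal sketch of this overlap-and-radial-direction test follows; it assumes each object is available as a binary mask on the aligned bird's-eye canvas and that the camera positions are given in the same pixel coordinates, with an illustrative alignment tolerance.

```python
import numpy as np

def blob_axis_and_centroid(mask):
    """Dominant elongation direction and centroid of a binary blob (via SVD)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return vt[0], centroid

def extends_radially(mask, cam_pos, cos_tol=0.9):
    """True if the blob's long axis roughly follows the ray from the camera."""
    axis, centroid = blob_axis_and_centroid(mask)
    ray = centroid - np.asarray(cam_pos, dtype=np.float64)
    ray /= np.linalg.norm(ray)
    return abs(float(np.dot(axis, ray))) > cos_tol

def same_solid_object(mask1, cam1, mask2, cam2):
    """Step S5: the blobs overlap and each elongates along its camera's ray."""
    overlaps = np.logical_and(mask1, mask2).any()
    return overlaps and extends_radially(mask1, cam1) and extends_radially(mask2, cam2)
```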
  • The control unit 103 then estimates, as the ground contact point of the three-dimensional object, that is, as the position of the three-dimensional object, the intersection 133 of the straight line 131, which passes through the camera position 127 of the bird's-eye image 121 and coincides with the extension direction of the object specified by the image 123 or the circumscribed rectangle 125, and the straight line 132, which passes through the camera position 128 of the bird's-eye image 122 and coincides with the extension direction of the object specified by the image 124 or the circumscribed rectangle 126 (step S6).
  • the straight line 131 and the straight line 132 are virtual lines.
  • Step S6 is also referred to as a first ground point detection unit.
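  • A minimal sketch of the ground-contact-point estimate in step S6 follows: it intersects two rays, each starting at a camera position and running along the corresponding object's extension direction in the bird's-eye frame. The coordinates in the usage comment are made up for illustration.

```python
import numpy as np

def ground_contact_point(cam1, dir1, cam2, dir2):
    """Intersect cam1 + t*dir1 with cam2 + s*dir2; returns None if (nearly) parallel."""
    p1, d1 = np.asarray(cam1, float), np.asarray(dir1, float)
    p2, d2 = np.asarray(cam2, float), np.asarray(dir2, float)
    A = np.column_stack([d1, -d2])            # solve p1 + t*d1 = p2 + s*d2
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1                        # the intersection point (e.g. 133)

# ground_contact_point((0, 0), (1, 1), (100, 0), (-1, 1)) -> array([50., 50.])
```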
  • The control unit 103 displays the object recognition result, the three-dimensional object recognition result, and the estimated position on the display 104 (step S7). The control unit 103 also uses the bird's-eye images to display on the display 104 a view looking down from above (step S7), so that the driver can objectively grasp the positional relationship between the host vehicle and the three-dimensional object.
  • In this way, the present embodiment extracts objects by object recognition from the images captured by a plurality of cameras, converts the images into bird's-eye images, superimposes them, recognizes whether each object is a three-dimensional object, and estimates the ground contact point of the three-dimensional object. By limiting the three-dimensional object determination to objects already found by object recognition, the calculation cost required to detect a three-dimensional object can be reduced.
  • In addition, because each recognized object can be checked for being three-dimensional, misrecognitions can be rejected. For example, when a pole is the recognition target and a line drawn on the road surface is mistakenly recognized as a pole, the three-dimensional object recognition result can be used to judge it as a misrecognition. Conversely, when a white line or road marking drawn on a flat surface is the recognition target, an object recognized as a three-dimensional object can be rejected as a false detection.
  • FIG. 7 is a flowchart showing an example of the operation of the three-dimensional object detection device 200 according to the present embodiment.
  • the operation shown in FIG. 7 can be realized by causing the processor constituting the control unit 103 to execute a program stored in the memory.
  • the operation of the present embodiment will be described with reference to FIG.
  • the control unit 103 performs processing similar to steps S1 to S6 in FIG. 3 in the first embodiment (steps S11 to S16). In addition, the control unit 103 executes the following processing in parallel with the processing in steps S13 to S16.
  • If an object detected in step S12 is included in the non-overlapping imaging region (denoted as the first non-overlapping imaging region) of the image 111 captured by the camera 101, that is, the region that does not overlap with the imaging region of the camera 102, the control unit 103 detects the lower end of the circumscribed rectangle of that object as the ground point of the object (step S18).
  • For example, in the image 111 shown in FIG. 8, an object image 201 and its circumscribed rectangle 203 lie in the first non-overlapping imaging region.
  • the control unit 103 detects the lower end of the outermost circumscribed rectangle 203 as the ground point of the object.
  • Similarly, if an object detected in step S12 is included in the non-overlapping imaging region (denoted as the second non-overlapping imaging region) of the image 112 captured by the camera 102, that is, the region that does not overlap with the imaging region of the camera 101, the control unit 103 detects the lower end of the circumscribed rectangle of that object as the ground point of the object (step S18). For example, in the image 112 shown in FIG. 8, the object image 202 and its circumscribed rectangle 204 lie in the second non-overlapping imaging region, and the control unit 103 detects the lower end of the circumscribed rectangle 204 as the ground point of the object.
  • The control unit 103 then calculates the position of the object and the distance to the object from the coordinates of the object's ground point on the image and from the installation height, angle, and angle of view of the cameras 101 and 102 (step S19). Steps S18 and S19 are also referred to as a second ground point detection unit.
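  • A minimal sketch of such a distance estimate from the ground point's pixel row is shown below, assuming a pinhole camera at a known height, pitched down at a known angle, over flat ground; the focal length, principal point, and the numbers in the usage comment are illustrative, not values from the patent.

```python
import math

def ground_distance(v_ground_px, cam_height_m, tilt_rad, fy_px, cy_px):
    """Horizontal distance to the point where the object touches the ground."""
    # Angle of the viewing ray below the optical axis, from the pixel row offset.
    ray_below_axis = math.atan2(v_ground_px - cy_px, fy_px)
    depression = tilt_rad + ray_below_axis        # angle below the horizon
    if depression <= 0:
        return float("inf")                       # ray never reaches the ground
    return cam_height_m / math.tan(depression)

# Camera 1.0 m high, tilted 20 degrees down, fy = 800 px, cy = 400 px,
# ground point at image row 600:
# ground_distance(600, 1.0, math.radians(20), 800, 400) -> about 1.48 m
```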
  • control unit 103 outputs the result of object recognition, the result of solid object recognition, and the estimated position to the display 104.
  • the control unit 103 displays a figure looking down from the sky using the bird's-eye view image on the display 104. Accordingly, the driver can objectively grasp the positional relationship between the host vehicle and the three-dimensional object.
  • In this way, the position and distance of an object in the overlapping area of the images taken by the plurality of cameras are detected by the same method as in the first embodiment, while objects in the non-overlapping areas are detected from the image of a single camera. This makes it possible to estimate the positions and distances of objects in all areas photographed by the plurality of cameras.
  • The reason the position and distance of an object in the overlapping area are detected by the same method as in the first embodiment, rather than from a single camera's image as in the non-overlapping areas, is that object recognition by pattern matching or the like may not accurately capture the boundary between the recognition target, such as a pedestrian or a pole, and the road surface.
  • FIG. 9 is a flowchart showing an example of the operation of the three-dimensional object detection device 300 according to the present embodiment.
  • the operation shown in FIG. 9 can be realized by causing the processor constituting the control unit 103 to execute a program stored in the memory. The operation of this embodiment will be described below with reference to FIG.
  • In steps S21 to S27, processing similar to steps S1 to S7 of FIG. 3 in the first embodiment is performed.
  • At the next imaging timing t1 (t1 is also referred to as a second time), the control unit 103 images the surroundings of the vehicle simultaneously with the cameras 101 and 102 (step S28).
  • Next, the control unit 103 generates a difference image between the image (referred to as G11) of the area overlapping with the camera 102 in the image captured by the camera 101 at timing t1 and the image (referred to as G10) of the same overlapping area in the image captured by the camera 101 at the immediately preceding imaging timing t0 (t0 is also referred to as a first time) (step S29).
  • Based on the difference image, the control unit 103 detects whether the image in the overlapping area of the imaging areas of the cameras 101 and 102 has changed between timing t0 and timing t1 (step S30). If the image has changed (YES in step S31), the control unit 103 returns to the object recognition process in step S22 and repeats the same processing as described above. If the image has not changed (NO in step S31), steps S22 to S27 are skipped and the process returns to step S28 to wait for the next imaging timing.
  • Step S29 is also referred to as a difference image detection unit. Steps S30 and S31 are also referred to as a control unit.
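  • A minimal sketch of this difference-image check is shown below; the pixel threshold and changed-area ratio are illustrative assumptions.

```python
import cv2
import numpy as np

def overlap_changed(g10, g11, pixel_thresh=25, changed_ratio=0.005):
    """True if the overlapping-region crop changed between timings t0 and t1."""
    diff = cv2.absdiff(g10, g11)                  # per-pixel difference image (S29)
    if diff.ndim == 3:
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    return np.count_nonzero(mask) > changed_ratio * mask.size

# If overlap_changed(G10, G11) is False, steps S22 to S27 can be skipped (S30, S31).
```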
  • the three-dimensional object detection device 400 includes an object detection unit 401, a viewpoint conversion unit 402, and a three-dimensional object recognition unit 403.
  • The object detection unit 401 is configured to detect an object as a first object from a first image obtained by imaging a first monitoring area with the first camera 410. The object detection unit 401 is also configured to detect an object as a second object from a second image obtained by imaging a second monitoring area with the second camera 411, whose imaging area overlaps that of the first camera 410 in an overlapping imaging area.
  • the object detector 401 is also called a detector.
  • the viewpoint conversion unit 402 is configured to perform viewpoint conversion on the first image and the second image to create a first bird's-eye image and a second bird's-eye image.
  • the viewpoint conversion unit 402 is also referred to as a converter.
  • The three-dimensional object recognition unit 403 is configured to recognize a three-dimensional object from the overlapping imaging area based on the first object and the second object that appear in the overlapping imaging area in the first bird's-eye image and the second bird's-eye image.
  • the three-dimensional object recognition unit 403 is also called a recognizer.
  • The three-dimensional object detection device 400 configured as described above operates as follows. First, the object detection unit 401 detects an object as the first object from the first image obtained by imaging the first monitoring area with the first camera 410, and detects an object as the second object from the second image obtained by imaging the second monitoring area with the second camera 411. Next, the viewpoint conversion unit 402 performs viewpoint conversion on the first image and the second image to create a first bird's-eye image and a second bird's-eye image. Finally, the three-dimensional object recognition unit 403 recognizes a three-dimensional object from the overlapping imaging area based on the first object and the second object that appear in the overlapping imaging area in the first bird's-eye image and the second bird's-eye image.
  • According to the present embodiment, the calculation cost required to detect a three-dimensional object can be suppressed because the determination of whether an object is three-dimensional is limited to objects already recognized by object recognition.
  • the three-dimensional object detection device 500 includes a computer 701 and a program 704.
  • the computer 701 includes a processor 702 configured with a microprocessor and the like, and a memory 703 configured with RAM, ROM, and the like.
  • The program 704 is read into the memory 703 of the computer 701 when, for example, the computer 701 starts up, and controls the operation of the computer 701 so that the computer 701 functions as the detector, converter, recognizer, and other units of the three-dimensional object detection devices according to the first to fourth embodiments described above.
  • a plurality of cameras may be installed so as to capture the entire periphery of the vehicle.
  • For example, four cameras, namely a front camera 501, a rear camera 502, a left camera 503, and a right camera 504, are attached to the vehicle body.
  • the front camera 501 is attached to the front of the vehicle body and acquires an image obtained by imaging the front of the vehicle.
  • the rear camera 502 is attached to the rear of the vehicle body and acquires an image obtained by imaging the rear of the vehicle.
  • the left camera 503 is attached to the left side of the vehicle body and acquires an image obtained by imaging the left side of the vehicle.
  • the right camera 504 is attached to the right side of the vehicle body and acquires an image obtained by capturing the right side of the vehicle.
  • a three-dimensional object can be recognized in a region 505 where the fields of view overlap between adjacent cameras.
  • Multiple cameras whose fields of view overlap may also be provided for a single camera. For example, a right rear camera 512 having a field of view that overlaps the left region of the field of view of a rear camera 511 and a left rear camera 513 having a field of view that overlaps the right region of the field of view of the rear camera 511 are attached to the vehicle body.
  • In the second embodiment described above, the lower end of the circumscribed rectangle surrounding an object detected in a non-overlapping imaging area of the images of the cameras 101 and 102 before viewpoint conversion is detected as the ground point of the object. In contrast, in the three-dimensional object detection apparatus 500 according to the fifth embodiment of the present invention, as shown in the flowchart of FIG. 13, the lower end of the circumscribed rectangle surrounding an object detected in a non-overlapping imaging region of the bird's-eye images of the cameras 101 and 102 after viewpoint conversion may be detected as the ground point of the object. That is, FIG. 13 differs from FIG. 7 in that steps S18 and S19 are replaced with steps S18A and S19A, and is otherwise the same as FIG. 7.
  • In step S18A of FIG. 13, if an object detected in step S12 is included in the non-overlapping region of the bird's-eye image 121 of the camera 101 that does not overlap with the bird's-eye image 122 of the camera 102, the control unit 103 detects the lower end of the circumscribed rectangle of that object as the ground point of the object. Similarly, if an object detected in step S12 is included in the non-overlapping region of the bird's-eye image 122 of the camera 102 that does not overlap with the bird's-eye image 121 of the camera 101, the control unit 103 detects the lower end of the circumscribed rectangle of that object as the ground point of the object.
  • In step S19A of FIG. 13, the control unit 103 calculates the position of the object and the distance to the object from the coordinates of the object's ground point on the image and from the installation height, angle, and angle of view of the cameras 101 and 102 (hereinafter, this method is referred to as the first distance calculation method).
  • Alternatively, in step S19A of FIG. 13, the control unit 103 may obtain in advance, from the installation height, angle, and angle of view of the cameras 101 and 102, how many millimeters in the real world one pixel of the bird's-eye image corresponds to, and may then obtain the distance to the object from that correspondence (hereinafter, this method is referred to as the second distance calculation method).
  • FIG. 14 is an explanatory diagram of the second distance calculation method.
  • In this example, the coefficient is calculated as 0.5 (cm/pix) and stored, and the coordinates of the lower end of the circumscribed rectangle of the object on the bird's-eye image are calculated as, for example, (200, 100).
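  • A minimal sketch of the second distance calculation method follows; it reuses the illustrative coefficient of 0.5 cm/pix and the ground point (200, 100) from the text, while the vehicle reference pixel is an assumption introduced only for the example.

```python
import math

CM_PER_PIX = 0.5                       # precomputed from camera height, angle, and FOV

def distance_from_birdseye(ground_pt_px, vehicle_ref_px, cm_per_pix=CM_PER_PIX):
    """Distance in metres from the vehicle reference pixel to the object's ground point."""
    dx = ground_pt_px[0] - vehicle_ref_px[0]
    dy = ground_pt_px[1] - vehicle_ref_px[1]
    return math.hypot(dx, dy) * cm_per_pix / 100.0

# With a hypothetical vehicle reference at (200, 400):
# distance_from_birdseye((200, 100), (200, 400)) -> 300 px * 0.5 cm/pix = 1.5 m
```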
  • the second distance calculation method can also be used for the distance calculation in step S6 of FIG. 3 in the first embodiment.
  • In the embodiments described above, the present invention is applied to cameras that capture images of the periphery of a vehicle. However, the present invention is not limited to cameras attached to a vehicle. As long as a plurality of cameras with different viewpoints and overlapping fields of view are used, the present invention can also be applied to, for example, surveillance cameras installed in stores and on streets.
  • the present invention can be used in the field of detecting a three-dimensional object, and in particular, can be used in the field of detecting a three-dimensional object existing in an overlapping imaging region using a plurality of cameras.
  • DESCRIPTION OF SYMBOLS: 100 ... three-dimensional object detection apparatus; 101, 102 ... camera; 103 ... control unit; 104 ... display; 105 ... object; 111, 112 ... image; 113, 114 ... object image; 115, 116 ... outermost rectangle; 121, 122 ... bird's-eye view image; 123, 124 ... object image; 125, 126 ... outermost rectangle; 127, 128 ... camera position; 131, 132 ... straight line that passes through the camera position and coincides with the extension direction of the object

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a three-dimensional object detection device that comprises a detector, a converter, and a recognizer. The detector detects, as a first object, an object from a first image obtained by imaging a first monitoring area with a first camera, and detects, as a second object, an object from a second image obtained by imaging a second monitoring area with a second camera that has an overlapping imaging area overlapping an imaging area of the first camera. The converter applies viewpoint conversion to the first image and the second image to produce a first bird's-eye image and a second bird's-eye image. The recognizer recognizes a three-dimensional object from the overlapping imaging area on the basis of the first object and the second object appearing in the overlapping imaging area in the first bird's-eye image and the second bird's-eye image.
PCT/JP2018/000666 2017-02-07 2018-01-12 Dispositif de détection d'objet tridimensionnel WO2018146997A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2018566805A JPWO2018146997A1 (ja) 2017-02-07 2018-01-12 立体物検出装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-020292 2017-02-07
JP2017020292 2017-02-07

Publications (1)

Publication Number Publication Date
WO2018146997A1 true WO2018146997A1 (fr) 2018-08-16

Family

ID=63108197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/000666 WO2018146997A1 (fr) 2017-02-07 2018-01-12 Dispositif de détection d'objet tridimensionnel

Country Status (2)

Country Link
JP (1) JPWO2018146997A1 (fr)
WO (1) WO2018146997A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022102515A (ja) * 2020-12-25 2022-07-07 エヌ・ティ・ティ・コムウェア株式会社 物体検出装置、物体検出方法、およびプログラム
JP2023117203A (ja) * 2022-02-10 2023-08-23 本田技研工業株式会社 移動体制御装置、移動体制御方法、学習装置、学習方法、およびプログラム

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006339960A (ja) * 2005-06-01 2006-12-14 Nissan Motor Co Ltd 物体検出装置、および物体検出方法
JP2007172501A (ja) * 2005-12-26 2007-07-05 Alpine Electronics Inc 車両運転支援装置
JP2010109452A (ja) * 2008-10-28 2010-05-13 Panasonic Corp 車両周囲監視装置及び車両周囲監視方法
JP2010147523A (ja) * 2008-12-16 2010-07-01 Panasonic Corp 車両周囲の俯瞰画像生成装置
JP2012147149A (ja) * 2011-01-11 2012-08-02 Aisin Seiki Co Ltd 画像生成装置
JP2015192198A (ja) * 2014-03-27 2015-11-02 クラリオン株式会社 映像表示装置および映像表示システム
JP2016052867A (ja) * 2014-09-04 2016-04-14 株式会社デンソー 運転支援装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4677737B2 (ja) * 2004-06-01 2011-04-27 沖電気工業株式会社 防犯支援システム
JP5752631B2 (ja) * 2012-03-27 2015-07-22 住友重機械工業株式会社 画像生成方法、画像生成装置、及び操作支援システム

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006339960A (ja) * 2005-06-01 2006-12-14 Nissan Motor Co Ltd 物体検出装置、および物体検出方法
JP2007172501A (ja) * 2005-12-26 2007-07-05 Alpine Electronics Inc 車両運転支援装置
JP2010109452A (ja) * 2008-10-28 2010-05-13 Panasonic Corp 車両周囲監視装置及び車両周囲監視方法
JP2010147523A (ja) * 2008-12-16 2010-07-01 Panasonic Corp 車両周囲の俯瞰画像生成装置
JP2012147149A (ja) * 2011-01-11 2012-08-02 Aisin Seiki Co Ltd 画像生成装置
JP2015192198A (ja) * 2014-03-27 2015-11-02 クラリオン株式会社 映像表示装置および映像表示システム
JP2016052867A (ja) * 2014-09-04 2016-04-14 株式会社デンソー 運転支援装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022102515A (ja) * 2020-12-25 2022-07-07 エヌ・ティ・ティ・コムウェア株式会社 物体検出装置、物体検出方法、およびプログラム
JP7138157B2 (ja) 2020-12-25 2022-09-15 エヌ・ティ・ティ・コムウェア株式会社 物体検出装置、物体検出方法、およびプログラム
JP2023117203A (ja) * 2022-02-10 2023-08-23 本田技研工業株式会社 移動体制御装置、移動体制御方法、学習装置、学習方法、およびプログラム
JP7450654B2 (ja) 2022-02-10 2024-03-15 本田技研工業株式会社 移動体制御装置、移動体制御方法、学習装置、学習方法、およびプログラム

Also Published As

Publication number Publication date
JPWO2018146997A1 (ja) 2019-11-14

Similar Documents

Publication Publication Date Title
JP6118096B2 (ja) Av画像基盤の駐車位置設定装置及びその方法
US8005266B2 (en) Vehicle surroundings monitoring apparatus
JP4930046B2 (ja) 路面判別方法および路面判別装置
EP3641298B1 (fr) Procédé et dispositif de capture d'un objet cible et dispositif de surveillance vidéo
JP5687702B2 (ja) 車両周辺監視装置
WO2020244414A1 (fr) Procédé de détection d'obstacle, dispositif, support de stockage et robot mobile
KR20160023409A (ko) 차선 이탈 경보 시스템의 동작방법
CN107950023B (zh) 车辆用显示装置以及车辆用显示方法
EP3203725A1 (fr) Dispositif de reconnaissance d'image embarqué sur un véhicule
US20120212615A1 (en) Far-infrared pedestrian detection device
US20160180158A1 (en) Vehicle vision system with pedestrian detection
JP5178454B2 (ja) 車両周囲監視装置及び車両周囲監視方法
JP2011070593A (ja) 車両周辺監視装置
JP2013137767A (ja) 障害物検出方法及び運転者支援システム
JP2001216520A (ja) 車両用周辺監視装置
JP5539250B2 (ja) 接近物体検知装置及び接近物体検知方法
Khalifa et al. A hyperbola-pair based lane detection system for vehicle guidance
JP6617150B2 (ja) 物体検出方法及び物体検出装置
JP5951785B2 (ja) 画像処理装置及び車両前方監視装置
WO2018146997A1 (fr) Dispositif de détection d'objet tridimensionnel
JP2000293693A (ja) 障害物検出方法および装置
JP6949090B2 (ja) 障害物検知装置及び障害物検知方法
KR20160126254A (ko) 도로 영역 검출 시스템
EP3163533A1 (fr) Projection de vue adaptative pour un système de caméra d'un véhicule
JP2010165299A (ja) 白線検出装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18751051

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018566805

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18751051

Country of ref document: EP

Kind code of ref document: A1