WO2018088263A1 - Parking frame recognition device - Google Patents

Parking frame recognition device

Info

Publication number
WO2018088263A1
WO2018088263A1 (PCT/JP2017/039141)
Authority
WO
WIPO (PCT)
Prior art keywords
parking frame
line
parking
same
shape
Prior art date
Application number
PCT/JP2017/039141
Other languages
French (fr)
Japanese (ja)
Inventor
大輔 杉浦
博彦 柳川
Original Assignee
株式会社デンソー
Priority date
Filing date
Publication date
Application filed by 株式会社デンソー filed Critical 株式会社デンソー
Publication of WO2018088263A1 publication Critical patent/WO2018088263A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/14Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This disclosure relates to a technique for recognizing a parkable frame, i.e., a parking frame in which the host vehicle can park.
  • As a technique for recognizing a parkable frame, Patent Document 1 proposes determining that a parking frame is available for parking when, among the white lines constituting a parking frame observable from the host vehicle, the white line farther from the host vehicle is longer than the white line closer to it.
  • One aspect of the present disclosure is to provide a technique that enables a parking frame recognition device, which recognizes a parkable frame representing a parking frame in which the host vehicle can park, to recognize the parkable frame more accurately.
  • The parking frame recognition device includes an image acquisition unit, a line extraction unit, and a possible frame recognition unit.
  • The image acquisition unit is configured to acquire a plurality of captured images taken at different positions by an imaging device arranged in the host vehicle.
  • The line extraction unit is configured to extract a parking frame from the plurality of captured images and to extract a target line, i.e., the line that defines the end of the parking frame on the side farther from the host vehicle.
  • The possible frame recognition unit is configured to determine, for the same parking frame extracted from the plurality of captured images, whether the shape of the target line is the same, and to recognize a parking frame whose target line is determined to have the same shape as a parkable frame.
  • The device exploits the property that, when a parked vehicle occupies the parking frame, the target line is partly hidden by that vehicle, so the same target line is observed with different shapes when imaged from different positions.
  • Whether the shapes of the target line are the same can be judged by how well comparison parameters such as the number of line corners, the line length, the aspect ratio, and the edge curvature match.
  • The degree of matching can be determined, for example, by comparing the ratio of the comparison parameters with a threshold value.
  • Because the device judges whether a frame is parkable from whether the same target line imaged at different positions has the same shape, the parkable frame can be recognized accurately.
  • The imaging system 1 of this embodiment is a system mounted on a vehicle such as a passenger car and includes a control unit 10. It may further include a front camera 4F, a rear camera 4B, a display unit 30, a vehicle control unit 32, and the like. The vehicle on which the imaging system 1 is mounted is also referred to as the host vehicle.
  • The front camera 4F and the rear camera 4B image the road ahead of and behind the host vehicle, respectively, and are attached to the front and rear of the host vehicle.
  • The control unit 10 generates, from the images captured by the cameras 4F and 4B, a bird's-eye image looking down on the road around the vehicle, and displays the generated bird's-eye image on the display unit 30, which is composed of a liquid crystal display or the like and arranged in the vehicle interior.
  • A parking frame means the area between at least one line whose width is within a preset range and an object having a side or line parallel to that line at a position separated from it by a distance corresponding to the width of a vehicle.
  • A parkable frame means a parking frame in which the host vehicle can park.
  • "Parallel" here includes substantially parallel.
  • The "object having a side or line parallel to the line" includes, for example, a curbstone, another line, a wall, a tree, and a guardrail.
  • The control unit 10 includes an imaging signal input unit 12, a detection signal input unit 14, a memory 16, a display control unit 18, and an image processing unit 20.
  • The imaging signal input unit 12 has a function of capturing imaging signals from the front camera 4F and the rear camera 4B and inputting them to the image processing unit 20 as captured image data.
  • The detection signal input unit 14 has a function of taking in detection signals from the wheel speed sensor 6, which detects the rotational speed of each wheel of the host vehicle, and from the steering angle sensor 8, which detects the steering angle, converting them into wheel speed data and steering angle data, and inputting them to the image processing unit 20.
  • The image processing unit 20 is built around a known microcomputer having a CPU 18 and semiconductor memory (hereinafter, memory 16) such as RAM, ROM, and flash memory.
  • The various functions of the image processing unit 20 are realized by the CPU 18 executing a program stored in a non-transitory tangible recording medium.
  • The memory 16 corresponds to the non-transitory tangible recording medium that stores the program.
  • The term "non-transitory tangible recording medium" excludes electromagnetic waves from the category of recording media.
  • The number of microcomputers constituting the image processing unit 20 may be one or more.
  • The image processing unit 20 includes, as functional blocks realized by the CPU 18 executing the program, a line detection unit 22, a position correspondence unit 24, a frame estimation unit 26, and a tracking unit 28.
  • The method of implementing these elements of the image processing unit 20 is not limited to software; some or all of them may be implemented with one or more pieces of hardware.
  • When a function is realized by an electronic circuit, that circuit may be a digital circuit containing many logic circuits, an analog circuit, or a combination of these.
  • The line detection unit 22 detects lines such as white lines and yellow lines by applying known image processing such as the Hough transform to each captured image.
  • Correspondence information, in which the imaging range shared with other cameras and the corresponding coordinates are set for each camera, is stored in the memory 16 and is used to determine whether lines in the shared range are the same line.
  • The frame estimation unit 26 estimates one or more parking frames present in the captured images and estimates parkable frames among them.
  • The tracking unit 28 tracks objects in the captured images by associating the amount of movement of the host vehicle with the amount of movement of the objects in the images.
  • In particular, the parking frame is tracked and its position is recognized; information on the position of the parkable frame is sent to the vehicle control unit 32.
  • The tracking unit 28 also generates an image showing the parking frame while taking the movement of the host vehicle into account, and outputs this image to the display control unit 18.
  • The display control unit 18 converts the image sent from the image processing unit 20 into a video signal that can be displayed on the display unit 30 and sends it to the display unit 30.
  • The vehicle control unit 32 receives the position of the parkable frame, generates a trajectory for parking the host vehicle in the parkable frame, and controls the acceleration/deceleration, steering angle, and so on of the host vehicle to move it along that trajectory.
  • The parkable frame recognition process is started, for example, when the host vehicle is powered on, and is then executed repeatedly.
  • The parkable frame recognition process may be started at every imaging cycle of the camera.
  • In S110, an image captured by the front camera 4F at the current position of the host vehicle and an image captured a preset distance before the current position are acquired.
  • As the earlier image, for example, an image captured several meters or so before the current position may be acquired.
  • A captured image such as that illustrated in FIG. 3 is obtained as the image captured from the earlier position, and a captured image such as that illustrated in FIG. 4 is obtained as the image captured at the current position.
  • Alternatively, an image captured a preset time before the present may be acquired as the earlier image.
  • The preset time may be set to several seconds or so, for example.
  • In S120, lines such as white lines and yellow lines are detected in each acquired captured image.
  • Edges, i.e., boundaries of luminance or color, are detected among the many pixels making up the captured image, and processing such as the well-known Hough transform is applied to the edges to detect all lines present in the captured image.
  • A line here has a width and includes, for example, paint on the road surface.
  • A detected line is called a detection line.
  • In the examples shown in FIGS. 3 and 4, all of the lines 41, 42, and 43 among the white lines 40 indicating the parking frames are detection lines.
  • In S130, it is determined whether a plurality of detection lines exist in each captured image.
  • Because a region sandwiched between two lines is recognized as a parking frame, it is recognized that no parking frame exists when two or more lines are not detected; S130 can thus be said to determine whether a parking frame may exist.
  • If a plurality of detection lines do not exist, the parkable frame recognition process is terminated. If a plurality of detection lines exist, then in S140 the entire captured image, or at least the detection lines in it, are converted into a planar coordinate system. This conversion may be realized by a well-known process for generating a bird's-eye image that looks down on the captured image from the vertical direction.
  • In S150, parking frame estimation is performed.
  • In this step, parking frames are extracted from the plurality of captured images.
  • As a parking frame, a region sandwiched between at least two lines is extracted in which the spacing between the two lines falls within a preset range set based on the width of a vehicle.
  • All parking frames present in the captured image are extracted.
  • In the examples shown in FIGS. 3 and 4, the region between the white lines 41 and 42 and the region between the white lines 42 and 43 are extracted as parking frames.
  • Although the region between the white lines 41 and 43 could also be extracted as a parking frame, this step recognizes as parking frames only regions in which the distance between the white lines is within the preset range, so the region between the white lines 41 and 43 is excluded.
  • In this step, a target line is also extracted, i.e., the line that defines the end of the parking frame on the side farther from the host vehicle.
  • In the examples of FIGS. 3 and 4, the white line 42 is the target line for the parking frame between the white lines 41 and 42, and the white line 43 is the target line for the parking frame between the white lines 42 and 43.
  • A parking frame candidate includes parkable frames but is not limited to them; it refers to parking frames in general, including parking frames in which another vehicle is already parked.
  • If no parking frame candidate exists, the process proceeds to S240 described above. If a parking frame candidate exists, it is determined in S210 whether the shape of the target line is the same in the imaging region common to the plurality of captured images.
  • The same parking frame observed from different positions is extracted from the plurality of captured images, and the shapes of the same target line constituting that parking frame are compared.
  • The same parking frame is identified by specifying, from the conditions under which the plurality of captured images were obtained (for example, the travel distance of the front camera 4F), a common region in which the same objects appear in the plurality of captured images, and then extracting the parking frame that lies within that region and corresponds to the parking frame in a given captured image.
  • When comparing the shapes of the target lines, for example, the lengths of the target lines are compared.
  • The length of a line here means its length in the longitudinal direction.
  • The longitudinal direction is the direction orthogonal to the width direction of the line.
  • If no vehicle is parked in the parking frame, a target line such as the white line 43 is observed over a relatively wide range without being hidden by a vehicle, so the difference in the observed lengths of the target line is small.
  • If the difference in the lengths of the target line is within a preset threshold, the shapes are recognized as being the same. If the shape of the target line is determined to be the same in S210, the parking frame is recognized in S220 as a parkable frame. If the shape of the target line is not the same, the parking frame is recognized in S230 as not being a parkable frame.
  • The image processing unit 20 is configured to acquire a plurality of captured images taken at different positions by an imaging device disposed in the host vehicle. The image processing unit 20 extracts a parking frame from the plurality of captured images and extracts the target line that defines the end of the parking frame on the side farther from the host vehicle. It then determines, for the same parking frame extracted from the plurality of captured images, whether the shape of the target line is the same, and recognizes a parking frame whose target line is determined to have the same shape as a parkable frame.
  • Because whether a frame is parkable is judged from whether the same target line imaged at different positions has the same shape, the parkable frame is recognized accurately.
  • The image processing unit 20 determines, as the shape of the target line, whether the length of the target line is the same. Because the shape is judged by comparing the lengths of the target lines, the parkable frame can be recognized with a simpler configuration than one that judges by the shape of the entire line.
  • The image processing unit 20 extracts, as a parking frame, a region sandwiched between at least two lines.
  • Because such a region is extracted as a parking frame, the parking frame can be extracted using known white line recognition techniques, and the parkable frame can therefore be recognized with a simpler configuration.
  • The image processing unit 20 converts each captured image into a planar coordinate system, determines, for the same parking frame extracted from the captured images converted into the planar coordinate system, whether the shape of the target line is the same, and recognizes a parking frame whose target line is determined to have the same shape as a parkable frame.
  • In the embodiment, whether the shape of the target line is the same is determined from the length of the target line, but the determination is not limited to this.
  • As the shape of the target line, it may instead be determined whether the overall shape of the target line, the shape of its end on the side far from the host vehicle, or the like is the same.
  • In that case, the shapes of the end of the white line 42 far from the vehicle, that is, the shapes of the white line 42 within the region [A] and the region [B], are compared.
  • When a vehicle is present in the parking frame, the end of the white line 42 far from the vehicle is hidden by that vehicle, and which part of the white line 42 is hidden changes with the position of the camera.
  • As a result, the observed shape of the end far from the vehicle changes.
  • When the white line 42 is not hidden, the shape of its end far from the vehicle does not change. It can therefore be determined whether the parking frame is a parkable frame from whether the shape of the end of the target line on the side far from the host vehicle is the same.
  • Because the shape of the target line is judged by comparing the shape of its end far from the host vehicle, the parkable frame can be recognized with a simpler configuration than one that judges by the shape of the entire line.
  • When determining whether a parking frame is a parkable frame, the imaging system 1 may use only the parkable frame recognition process described above, or may combine it with a well-known process. For example, parkable frames may be provisionally extracted by a known process and the parkable frame recognition process used for the final determination, or a parking frame may be determined to be parkable when at least one of, or a majority of, the known process and the parkable frame recognition process determines that it is parkable.
  • A plurality of functions of one component in the embodiment may be realized by a plurality of components, and one function of one component may be realized by a plurality of components. Conversely, a plurality of functions of a plurality of components may be realized by one component, and one function realized by a plurality of components may be realized by one component. Part of the configuration of the embodiment may also be omitted.
  • The present disclosure can also be realized in various forms, such as a parking frame recognition device that is a component of the imaging system 1, a program for causing a computer to function as the imaging system 1, a non-transitory tangible recording medium such as a semiconductor memory storing the program, and a parking frame recognition method.
  • The image processing unit 20 of the imaging system 1 corresponds to the parking frame recognition device in the present disclosure.
  • Among the processes executed by the image processing unit 20, the process of S110 corresponds to the image acquisition unit in the present disclosure, and the processes of S120 and S150 correspond to the line extraction unit.
  • The process of S140 corresponds to the coordinate conversion unit in the present disclosure.
  • The processes of S210 and S220 correspond to the possible frame recognition unit in the present disclosure.

Abstract

This parking frame recognition device (20) is provided with: an image acquisition unit (S110); a line extraction unit (S120, S150); and a possible frame recognition unit (S210, S220). The image acquisition unit is configured to acquire a plurality of captured images respectively captured at different positions by an imaging device provided on a host vehicle. The line extraction unit is configured to extract a parking frame from the plurality of captured images, and to extract a target line which defines the end of the parking frame on the side farther from the host vehicle. The possible frame recognition unit is configured to determine whether the shape of the target line is the same for the same parking frame extracted from the plurality of captured images, and to recognize, as a frame in which parking is possible, a parking frame for which the shape of the target line has been determined to be the same.

Description

Parking frame recognition device
Cross-reference of related applications
 This international application claims priority based on Japanese Patent Application No. 2016-219646 filed with the Japan Patent Office on November 10, 2016, the entire contents of which are incorporated herein by reference.
 This disclosure relates to a technique for recognizing a parkable frame, i.e., a parking frame in which the host vehicle can park.
 As a technique for recognizing a parkable frame, Patent Document 1 below proposes determining that a parking frame is available for parking when, among the white lines constituting a parking frame observable from the host vehicle, the white line farther from the host vehicle is longer than the white line closer to it.
JP 2016-016681 A
 As a result of the inventors' detailed examination, a problem was found with the technique of Patent Document 1: when a vehicle with a short body, such as a light (kei) car, is parked in a parking frame, the white line farther from the host vehicle may be observed as longer than the white line closer to it, so a parking frame in which another vehicle is parked is easily misrecognized as a parkable frame.
 One aspect of the present disclosure is to provide a technique that enables a parking frame recognition device, which recognizes a parkable frame representing a parking frame in which the host vehicle can park, to recognize the parkable frame more accurately.
 A parking frame recognition device according to one aspect of the present disclosure includes an image acquisition unit, a line extraction unit, and a possible frame recognition unit. The image acquisition unit is configured to acquire a plurality of captured images taken at different positions by an imaging device arranged in the host vehicle.
 The line extraction unit is configured to extract a parking frame from the plurality of captured images and to extract a target line, i.e., the line that defines the end of the parking frame on the side farther from the host vehicle. The possible frame recognition unit is configured to determine, for the same parking frame extracted from the plurality of captured images, whether the shape of the target line is the same, and to recognize a parking frame whose target line is determined to have the same shape as a parkable frame.
 That is, the parking frame recognition device of the present disclosure exploits the property that, when a parked vehicle is present in a parking frame, the target line is partly hidden by that vehicle, so the same target line is observed with different shapes when imaged from different positions. Whether the shapes are the same can be judged by how well comparison parameters such as the number of line corners, the line length, the aspect ratio, and the edge curvature match; the degree of matching can be determined, for example, by comparing the ratio of the comparison parameters with a threshold value.
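 The patent does not give a concrete formula for this comparison-parameter matching; the following is a minimal Python sketch of one way it could be done. The parameter names, the chosen set of parameters, and the 10% tolerance are illustrative assumptions, not values taken from the disclosure.

```python
# Sketch (assumed implementation): comparing two observations of the same target
# line by the ratio of their comparison parameters.
from typing import Dict

def same_shape(params_a: Dict[str, float],
               params_b: Dict[str, float],
               tolerance: float = 0.10) -> bool:
    """Return True when every comparison parameter agrees within the tolerance.

    Each parameter pair is compared by its ratio, so a tolerance of 0.10 means
    the values may differ by up to about 10 percent.
    """
    for key, value_a in params_a.items():
        value_b = params_b[key]
        larger = max(abs(value_a), abs(value_b))
        if larger == 0.0:
            continue  # both zero -> identical for this parameter
        ratio = min(abs(value_a), abs(value_b)) / larger
        if ratio < 1.0 - tolerance:
            return False
    return True

# Example: the same white line seen from two camera positions.
line_far = {"corners": 4, "length_m": 5.0, "aspect_ratio": 33.3, "edge_curvature": 0.01}
line_near = {"corners": 4, "length_m": 3.1, "aspect_ratio": 20.7, "edge_curvature": 0.01}
print(same_shape(line_far, line_near))  # False -> line partly hidden, frame likely occupied
```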
 With such a parking frame recognition device, whether a parking frame is a parkable frame is determined by whether the same target line imaged from different positions has the same shape, so the parkable frame can be recognized accurately.
 The reference signs in parentheses in the claims indicate the correspondence with specific means described in the embodiment below as one aspect, and do not limit the technical scope of the present disclosure.
FIG. 1 is a block diagram showing the configuration of the imaging system. FIG. 2 is a flowchart showing the parkable frame recognition process. FIG. 3 is an image showing an example of an image captured by the front camera at the current position. FIG. 4 is an image showing an example of an image captured by the front camera from a position farther back. FIG. 5 is a bird's-eye view showing an example of how the target line appears from the farther-back position. FIG. 6 is a bird's-eye view showing an example of how the target line appears at the current position.
 Embodiments of the present disclosure will be described below with reference to the drawings.
 [1. Embodiment]
 [1-1. Configuration]
 As shown in FIG. 1, the imaging system 1 of this embodiment is a system mounted on a vehicle such as a passenger car and includes a control unit 10. It may further include a front camera 4F, a rear camera 4B, a display unit 30, a vehicle control unit 32, and the like. The vehicle on which the imaging system 1 is mounted is also referred to as the host vehicle.
 The front camera 4F and the rear camera 4B image the road ahead of and behind the host vehicle, respectively, and are attached to the front and rear of the host vehicle.
 The control unit 10 generates, from the images captured by the cameras 4F and 4B, a bird's-eye image looking down on the road around the vehicle from the vertical direction, and displays the generated bird's-eye image on the display unit 30, which is composed of a liquid crystal display or the like and arranged in the vehicle interior.
 The control unit 10 also recognizes parkable frames from the images captured by the cameras 4F and 4B. Here, a parking frame means the area between at least one line whose width is within a preset range and an object that has a side or line parallel to that line at a position separated from it by a distance corresponding to the width of a vehicle.
 A parkable frame means a parking frame in which the host vehicle can park. "Parallel" here includes substantially parallel, and "an object having a side or line parallel to the line" includes, for example, a curbstone, another line, a wall, a tree, and a guardrail.
 The control unit 10 includes an imaging signal input unit 12, a detection signal input unit 14, a memory 16, a display control unit 18, and an image processing unit 20.
 The imaging signal input unit 12 has a function of capturing imaging signals from the front camera 4F and the rear camera 4B and inputting them to the image processing unit 20 as captured image data.
 The detection signal input unit 14 has a function of taking in detection signals from the wheel speed sensor 6, which detects the rotational speed of each wheel of the host vehicle, and from the steering angle sensor 8, which detects the steering angle, converting them into wheel speed data and steering angle data, and inputting them to the image processing unit 20.
 The image processing unit 20 is built around a known microcomputer having a CPU 18 and semiconductor memory (hereinafter, memory 16) such as RAM, ROM, and flash memory. The various functions of the image processing unit 20 are realized by the CPU 18 executing a program stored in a non-transitory tangible recording medium. In this example, the memory 16 corresponds to the non-transitory tangible recording medium storing the program.
 Executing this program also carries out the method corresponding to the program. The term "non-transitory tangible recording medium" excludes electromagnetic waves from the category of recording media. The number of microcomputers constituting the image processing unit 20 may be one or more.
 As shown in FIG. 1, the image processing unit 20 includes, as functional blocks realized by the CPU 18 executing the program, a line detection unit 22, a position correspondence unit 24, a frame estimation unit 26, and a tracking unit 28. The method of implementing these elements is not limited to software; some or all of them may be implemented with one or more pieces of hardware. For example, when a function is realized by an electronic circuit, that circuit may be a digital circuit containing many logic circuits, an analog circuit, or a combination of these.
 The line detection unit 22 detects lines such as white lines and yellow lines by applying known image processing such as the Hough transform to each captured image.
 The position correspondence unit 24 determines, for each detected line, whether lines present in the imaging range shared with another camera are the same line, and associates the ones that are. Correspondence information, in which the imaging range shared with other cameras and the corresponding coordinates are set in advance for each camera, is stored in the memory 16 and used for this determination.
 The frame estimation unit 26 estimates one or more parking frames present in the captured images and estimates parkable frames among them.
 The tracking unit 28 tracks objects in the captured images by associating the amount of movement of the host vehicle with the amount of movement of the objects in the images. In particular, in this embodiment it tracks the parking frames and recognizes their positions; information on the position of a parkable frame is sent to the vehicle control unit 32.
 The tracking unit 28 also generates an image showing the parking frame while taking the movement of the host vehicle into account, and outputs this image to the display control unit 18. The display control unit 18 converts the image sent from the image processing unit 20 into a video signal that can be displayed on the display unit 30 and sends it to the display unit 30.
 The vehicle control unit 32 receives the position of the parkable frame, generates a trajectory for parking the host vehicle in the parkable frame, and controls the acceleration/deceleration, steering angle, and so on of the host vehicle so as to move it along that trajectory.
 [1-2. Processing]
 Next, the parkable frame recognition process executed by the image processing unit 20 will be described with reference to the flowchart of FIG. 2. The parkable frame recognition process is started, for example, when the host vehicle is powered on and is then executed repeatedly. It may be started at every imaging cycle of the camera.
 As shown in FIG. 2, the parkable frame recognition process first acquires camera images in S110. In this step, images captured by the front camera 4F of the host vehicle at different positions are acquired.
 For example, an image captured by the front camera 4F at the current position of the host vehicle and an image captured a preset distance before the current position are acquired. As the earlier image, for example, an image captured several meters or so before the current position may be used. The earlier image is, for example, as shown in FIG. 3, and the image captured at the current position is, for example, as shown in FIG. 4.
 If the host vehicle is moving, an image captured a preset time before the present may instead be acquired as the earlier image. In this case, the preset time may be set to several seconds or so.
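 The disclosure does not describe how the earlier frame is stored and retrieved; the following is a minimal sketch of one plausible approach under stated assumptions. The odometry source (distance integrated from the wheel speed data), the 3 m look-back, the buffer size, and the class and method names are all assumptions for illustration.

```python
# Sketch (assumed implementation): keep a short history of frames indexed by
# traveled distance so the frame taken roughly `lookback_m` metres earlier can
# be paired with the current one in S110.
from collections import deque

class FrameHistory:
    def __init__(self, lookback_m: float = 3.0, maxlen: int = 200):
        self.lookback_m = lookback_m
        self.buffer = deque(maxlen=maxlen)  # (traveled_distance_m, image)

    def push(self, traveled_distance_m: float, image) -> None:
        self.buffer.append((traveled_distance_m, image))

    def earlier_frame(self, current_distance_m: float):
        """Return the stored image captured closest to `lookback_m` metres ago."""
        target = current_distance_m - self.lookback_m
        if not self.buffer or self.buffer[0][0] > target:
            return None  # not enough history yet
        return min(self.buffer, key=lambda item: abs(item[0] - target))[1]
```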
 Next, in S120, lines such as white lines and yellow lines are detected in each acquired image. In this step, edges, i.e., boundaries of luminance or color, are detected among the many pixels making up the captured image, and processing such as the well-known Hough transform is applied to the edges to detect all lines present in the image. A line here has a width and includes, for example, paint on the road surface. A detected line is called a detection line. In the examples shown in FIGS. 3 and 4, all of the lines 41, 42, and 43 among the white lines 40 indicating the parking frames are detection lines.
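 The edge-plus-Hough detection in S120 can be sketched with OpenCV as follows. The Canny thresholds and Hough parameters are illustrative assumptions; the disclosure only states that edges are detected and a Hough-type transform is applied.

```python
# Sketch of the S120 line detection using Canny edges and a probabilistic Hough
# transform. Thresholds are placeholder values, not values from the patent.
import cv2
import numpy as np

def detect_lines(image_bgr: np.ndarray) -> np.ndarray:
    """Return an array of detected line segments as rows of (x1, y1, x2, y2)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)            # luminance/colour boundaries
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=40, maxLineGap=10)
    if segments is None:
        return np.empty((0, 4), dtype=int)
    return segments.reshape(-1, 4)

# Usage: lines_now = detect_lines(cv2.imread("front_camera_current.png"))
```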
 Next, in S130, it is determined whether a plurality of detection lines exist in each captured image. Because this process recognizes a region sandwiched between two lines as a parking frame, it recognizes that no parking frame exists when two or more lines are not detected. S130 can thus be said to determine whether a parking frame may exist.
 If a plurality of detection lines do not exist, the fact that no usable parking frame exists is recorded in the memory 16 in S240, and the parkable frame recognition process is terminated. If a plurality of detection lines exist, then in S140 the entire captured image, or at least the detection lines in it, are converted into a planar coordinate system. This conversion may be realized by a well-known process for generating a bird's-eye image that looks down on the captured image from the vertical direction.
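 One common way to realize the S140 conversion is a fixed homography from calibrated point correspondences, sketched below. The four image/ground point pairs would come from the camera calibration; the numeric values and the 1 px = 1 cm scale are placeholders, not data from the patent.

```python
# Sketch of converting images and detection-line endpoints into a planar
# (bird's-eye) coordinate system with a fixed perspective transform.
import cv2
import numpy as np

# Hypothetical calibration: pixel corners of a known rectangle on the road ...
src_px = np.float32([[420, 520], [860, 520], [1100, 700], [180, 700]])
# ... and their positions in a road-plane frame (1 px = 1 cm here, by assumption).
dst_cm = np.float32([[0, 0], [250, 0], [250, 500], [0, 500]])

H = cv2.getPerspectiveTransform(src_px, dst_cm)

def to_plane(image: np.ndarray) -> np.ndarray:
    """Warp a camera image into the planar coordinate system (bird's-eye view)."""
    return cv2.warpPerspective(image, H, (250, 500))

def points_to_plane(points_px: np.ndarray) -> np.ndarray:
    """Convert detection-line endpoints (N x 2 pixel coordinates) to the plane."""
    pts = points_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```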
 Next, parking frame estimation is performed in S150. In this step, parking frames are extracted from the plurality of captured images. As a parking frame, a region sandwiched between at least two lines is extracted in which the spacing between the two lines falls within a preset range set based on the width of a vehicle. All parking frames present in the captured image are extracted. In the examples shown in FIGS. 3 and 4, the region between the white lines 41 and 42 and the region between the white lines 42 and 43 are extracted as parking frames. Although the region between the white lines 41 and 43 could also be extracted as a parking frame, this step recognizes as parking frames only regions in which the distance between the white lines is within the preset range, so the region between the white lines 41 and 43 is excluded.
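 The pairing rule of S150 can be sketched as follows: any two roughly parallel detection lines whose spacing lies within a vehicle-width-based range form a parking frame candidate. The 2.0-3.5 m range and the 10-degree parallelism test are assumptions; the patent only says the spacing range is preset based on the vehicle width.

```python
# Sketch (assumed parameters) of extracting parking frames from detection lines
# expressed in the planar coordinate system.
import numpy as np

def estimate_frames(lines_plane, min_gap_m=2.0, max_gap_m=3.5, max_angle_deg=10.0):
    """lines_plane: list of (p1, p2) endpoint pairs in metres on the road plane."""
    frames = []
    for i in range(len(lines_plane)):
        for j in range(i + 1, len(lines_plane)):
            d1 = np.subtract(*lines_plane[i])
            d2 = np.subtract(*lines_plane[j])
            cos = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
            if cos < np.cos(np.radians(max_angle_deg)):
                continue  # not (nearly) parallel
            # perpendicular distance from a point of line j to line i
            n = np.array([-d1[1], d1[0]]) / np.linalg.norm(d1)
            gap = abs(np.dot(np.subtract(lines_plane[j][0], lines_plane[i][0]), n))
            if min_gap_m <= gap <= max_gap_m:
                frames.append((i, j))  # region between lines i and j
    return frames
```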
 When extracting parking frames, only specific parking frames may be extracted, such as the parking frame nearest to the host vehicle or the parking frames located only on the right or left side of the host vehicle. In this step, a target line is also extracted, i.e., the line that defines the end of the parking frame on the side farther from the host vehicle. In the examples shown in FIGS. 3 and 4, the white line 42 is the target line for the parking frame between the white lines 41 and 42, and the white line 43 is the target line for the parking frame between the white lines 42 and 43.
 Next, in S160, it is determined whether a parking frame candidate exists. A parking frame candidate here includes parkable frames but is not limited to them; it refers to parking frames in general, including parking frames in which another vehicle is already parked.
 If no parking frame candidate exists, the process proceeds to S240 described above. If a parking frame candidate exists, it is determined in S210 whether the shape of the target line is the same in the imaging region common to the plurality of captured images.
 In this step, the same parking frame observed from different positions is first extracted from the plurality of captured images, and the shapes of the same target line constituting that parking frame are compared. The same parking frame is identified by specifying, from the conditions under which the plurality of captured images were obtained (for example, the travel distance of the front camera 4F), a common region in which the same objects appear in the plurality of captured images, and then extracting the parking frame that lies within that region and corresponds to the parking frame in a given captured image.
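 One simple realization of this same-frame association, under stated assumptions, is to shift the frame centre seen in the earlier image by the distance the camera has travelled and match it to the nearest frame centre in the current image. The straight-ahead motion model and the 0.5 m gate are illustrative assumptions; the patent only says the common region is determined from the imaging conditions such as the camera's travel distance.

```python
# Sketch (assumed implementation) of associating the "same" parking frame across
# two bird's-eye images after the camera has moved forward by `travel_m`.
import numpy as np

def match_same_frame(frame_centres_prev, frame_centres_now, travel_m, gate_m=0.5):
    """Return index pairs (i_prev, i_now) of frames judged to be the same frame.

    Centres are (x, y) in metres on the road plane, with y along the direction of
    travel, so a frame seen `travel_m` earlier appears `travel_m` closer now.
    """
    pairs = []
    for i, c_prev in enumerate(frame_centres_prev):
        predicted = np.array(c_prev) - np.array([0.0, travel_m])
        dists = [np.linalg.norm(predicted - np.array(c)) for c in frame_centres_now]
        if dists and min(dists) <= gate_m:
            pairs.append((i, int(np.argmin(dists))))
    return pairs
```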
 When comparing the shapes of the target lines, for example, the lengths of the target lines are compared. The length of a line here means its length in the longitudinal direction, i.e., the direction orthogonal to the width direction of the line. When comparing the lengths of the target lines, the lengths L1 and L2 of the same white line 42 are compared, as shown in FIGS. 3 and 4. The lengths of the white line are compared after coordinate conversion into the planar coordinate system.
 As shown in FIGS. 5 and 6, when a vehicle is parked in the parking frame, part of the white line 42 is hidden by that vehicle, and which part of the white line 42 is hidden differs with the imaging position of the camera. This produces a difference in the length of the white line 42 observed by the camera.
 On the other hand, if no vehicle is parked in the parking frame, a target line such as the white line 43 is observed over a relatively wide range without any of it being hidden by a vehicle. In this case, the difference in the observed lengths of the target line is small.
 Therefore, in this embodiment, if the difference in the lengths of the target line is within a preset threshold, the shapes are recognized as being the same. If the shape of the target line is determined to be the same in S210, the parking frame is recognized in S220 as a parkable frame. If the shape is not the same, the parking frame is recognized in S230 as not being a parkable frame.
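 The S210-S230 decision as described reduces to a length comparison against a threshold, sketched below. The 0.3 m threshold is an illustrative assumption; the patent only says the threshold is preset.

```python
# Sketch of the length-based same-shape test: the same target line's lengths in
# the two planar-coordinate images are compared against a preset threshold.
import numpy as np

def segment_length(p1, p2) -> float:
    return float(np.hypot(p2[0] - p1[0], p2[1] - p1[1]))

def is_parkable(target_line_prev, target_line_now, length_threshold_m=0.3) -> bool:
    """target_line_*: (p1, p2) endpoints of the same target line in plane coords."""
    l_prev = segment_length(*target_line_prev)   # e.g. L1 in FIG. 3
    l_now = segment_length(*target_line_now)     # e.g. L2 in FIG. 4
    return abs(l_prev - l_now) <= length_threshold_m  # same shape -> parkable (S220)
```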
 When this processing is complete, the parkable frame recognition process ends.
 [1-3. Effects]
 The embodiment described in detail above provides the following effects.
 (1a) In the imaging system 1 described above, the image processing unit 20 is configured to acquire a plurality of captured images taken at different positions by an imaging device disposed in the host vehicle. The image processing unit 20 extracts a parking frame from the plurality of captured images and extracts the target line that defines the end of the parking frame on the side farther from the host vehicle. It then determines, for the same parking frame extracted from the plurality of captured images, whether the shape of the target line is the same, and recognizes a parking frame whose target line is determined to have the same shape as a parkable frame.
 With such an imaging system 1, whether a parking frame is a parkable frame is determined by whether the same target line imaged at different positions has the same shape, so the parkable frame can be recognized accurately.
 (1b) In the imaging system 1 described above, the image processing unit 20 determines, as the shape of the target line, whether the length of the target line is the same.
 With such an imaging system 1, the shape of the target line is judged by comparing the lengths of the target lines, so the parkable frame can be recognized with a simpler configuration than one that judges by the shape of the entire line.
 (1c) In the imaging system 1 described above, the image processing unit 20 extracts, as a parking frame, a region sandwiched between at least two lines.
 With such an imaging system 1, a region sandwiched between at least two lines is extracted as a parking frame, so the parking frame can be extracted using known white line recognition techniques. The parkable frame can therefore be recognized with a simpler configuration.
 (1d) In the imaging system 1 described above, the image processing unit 20 converts each captured image into a planar coordinate system, determines, for the same parking frame extracted from the captured images converted into the planar coordinate system, whether the shape of the target line is the same, and recognizes a parking frame whose target line is determined to have the same shape as a parkable frame.
 With such an imaging system 1, whether the shape of the target line is the same is determined in the planar coordinate system, so the shape can be judged well even if the target line appears deformed differently because the captured images were obtained from different positions.
 [2. Other embodiments]
 Although an embodiment of the present disclosure has been described above, the present disclosure is not limited to the above embodiment and can be carried out with various modifications.
 (2a) In the above embodiment, whether the shape of the target line is the same is determined from the length of the target line, but the determination is not limited to this. For example, as the shape of the target line, it may be determined whether the overall shape of the target line, the shape of its end on the side far from the host vehicle, or the like is the same.
 Specifically, when determining whether the shape of the end of the target line far from the host vehicle is the same, the shapes of the end of the white line 42 far from the vehicle, that is, the shapes of the white line 42 within the region [A] and the region [B], are compared, as shown in FIGS. 3 and 4. When a vehicle is present in the parking frame, the end of the white line 42 far from the vehicle is hidden by that vehicle, and which part of the white line 42 is hidden changes with the position of the camera, so the observed shape of the end far from the vehicle changes. When the white line 42 is not hidden, the shape of its end far from the vehicle does not change. It can therefore be determined whether the parking frame is a parkable frame from whether the shape of the end of the target line on the side far from the host vehicle is the same.
 With such an imaging system, the shape of the target line is judged by comparing the shape of its end far from the host vehicle, so the parkable frame can be recognized with a simpler configuration than one that judges by the shape of the entire line.
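 A minimal sketch of this variant, under stated assumptions, reduces "shape of the far end" to the position of the far endpoint in a common road-fixed planar frame (i.e., after the camera's travel has been compensated). A richer comparison could instead match small image patches around the far end; the 0.2 m gate and the function names are illustrative assumptions.

```python
# Sketch (assumed implementation) of comparing only the far-side end of the
# target line. Both lines are assumed to be expressed in the same road-fixed
# planar coordinates, so the far end of an unoccluded line should not move.
import numpy as np

def far_endpoint(line, vehicle_xy=(0.0, 0.0)):
    """Return the endpoint of (p1, p2) farther from the host vehicle position."""
    p1, p2 = np.array(line[0]), np.array(line[1])
    v = np.array(vehicle_xy)
    return p1 if np.linalg.norm(p1 - v) >= np.linalg.norm(p2 - v) else p2

def far_end_same(line_prev, line_now, gate_m=0.2) -> bool:
    """True when the far end of the same target line stays put between images."""
    return np.linalg.norm(far_endpoint(line_prev) - far_endpoint(line_now)) <= gate_m
```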
 (2b) When the imaging system 1 determines whether a parking frame is a parkable frame, it may use only the parkable frame recognition process described above, or it may combine this process with a well-known process. For example, parkable frames may be provisionally extracted by a known process and the parkable frame recognition process used for the final determination. Alternatively, a parking frame may be determined to be a parkable frame when at least one of, or a majority of, the known process and the parkable frame recognition process determines that it is parkable.
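 The combination strategies mentioned in (2b) amount to simple boolean combiners over the individual determinations. The function names and the idea of passing plain booleans are assumptions for illustration.

```python
# Sketch of combining several parkable/not-parkable determinations.
from typing import Callable, List

def combine_any(votes: List[bool]) -> bool:
    """Parkable if at least one method says so."""
    return any(votes)

def combine_majority(votes: List[bool]) -> bool:
    """Parkable if a majority of methods say so."""
    return sum(votes) * 2 > len(votes)

def decide(frame, methods: List[Callable], combiner: Callable = combine_majority) -> bool:
    return combiner([method(frame) for method in methods])
```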
 (2c) A plurality of functions of one component in the above embodiment may be realized by a plurality of components, and one function of one component may be realized by a plurality of components. Conversely, a plurality of functions of a plurality of components may be realized by one component, and one function realized by a plurality of components may be realized by one component. Part of the configuration of the above embodiment may be omitted, and at least part of the configuration of the above embodiment may be added to or substituted for the configuration of another embodiment. All aspects included in the technical idea specified by the wording of the claims are embodiments of the present disclosure.
 (2d) Besides the imaging system 1 described above, the present disclosure can also be realized in various forms, such as a parking frame recognition device that is a component of the imaging system 1, a program for causing a computer to function as the imaging system 1, a non-transitory tangible recording medium such as a semiconductor memory storing the program, and a parking frame recognition method.
 [3. Correspondence between the configuration of the embodiment and the configuration of the present disclosure]
 In the above embodiment, the image processing unit 20 of the imaging system 1 corresponds to the parking frame recognition device in the present disclosure. Among the processes executed by the image processing unit 20, the process of S110 corresponds to the image acquisition unit in the present disclosure, the processes of S120 and S150 correspond to the line extraction unit, the process of S140 corresponds to the coordinate conversion unit, and the processes of S210 and S220 correspond to the possible frame recognition unit.

Claims (5)

  1.  A parking frame recognition device (20), comprising:
     an image acquisition unit (S110) configured to acquire a plurality of captured images taken at different positions by an imaging device disposed in a host vehicle;
     a line extraction unit (S120, S150) configured to extract a parking frame from the plurality of captured images and to extract a target line representing a line that defines an end of the parking frame on a side farther from the host vehicle; and
     a possible frame recognition unit (S210, S220) configured to determine, for the same parking frame extracted from the plurality of captured images, whether a shape of the target line is the same, and to recognize a parking frame for which the shape of the target line is determined to be the same as a parkable frame representing a parking frame in which the host vehicle can park.
  2.  The parking frame recognition device according to claim 1, wherein
     the possible frame recognition unit is configured to determine, as the shape of the target line, whether a length of the target line is the same.
  3.  The parking frame recognition device according to claim 1 or 2, wherein
     the possible frame recognition unit is configured to determine, as the shape of the target line, whether a shape of an end of the target line on a side far from the host vehicle is the same.
  4.  The parking frame recognition device according to any one of claims 1 to 3, wherein
     the line extraction unit is configured to extract, as the parking frame, a region sandwiched between at least two lines.
  5.  The parking frame recognition device according to any one of claims 1 to 4, further comprising
     a coordinate conversion unit (S140) configured to convert each of the captured images into a planar coordinate system, wherein
     the possible frame recognition unit is configured to determine, for the same parking frame extracted from the respective captured images converted into the planar coordinate system, whether the shape of the target line is the same, and to recognize a parking frame for which the shape of the target line is determined to be the same as the parkable frame.
PCT/JP2017/039141 2016-11-10 2017-10-30 Parking frame recognition device WO2018088263A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-219646 2016-11-10
JP2016219646A JP6677142B2 (en) 2016-11-10 2016-11-10 Parking frame recognition device

Publications (1)

Publication Number Publication Date
WO2018088263A1 true WO2018088263A1 (en) 2018-05-17

Family

ID=62110438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/039141 WO2018088263A1 (en) 2016-11-10 2017-10-30 Parking frame recognition device

Country Status (2)

Country Link
JP (1) JP6677142B2 (en)
WO (1) WO2018088263A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7245640B2 (en) * 2018-12-14 2023-03-24 株式会社デンソーテン Image processing device and image processing method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016016681A (en) * 2014-07-04 2016-02-01 クラリオン株式会社 Parking frame recognition device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4997191B2 (en) * 2008-07-02 2012-08-08 本田技研工業株式会社 Device for assisting parking
JP5287344B2 (en) * 2009-02-26 2013-09-11 日産自動車株式会社 Parking assistance device and obstacle detection method
JP5303356B2 (en) * 2009-05-21 2013-10-02 本田技研工業株式会社 Vehicle parking support system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016016681A (en) * 2014-07-04 2016-02-01 クラリオン株式会社 Parking frame recognition device

Also Published As

Publication number Publication date
JP6677142B2 (en) 2020-04-08
JP2018078479A (en) 2018-05-17

Similar Documents

Publication Publication Date Title
Wu et al. Lane-mark extraction for automobiles under complex conditions
JP6649738B2 (en) Parking lot recognition device, parking lot recognition method
JP7206583B2 (en) Information processing device, imaging device, device control system, moving object, information processing method and program
US10192444B2 (en) Forward collision warning system and method
Aytekin et al. Increasing driving safety with a multiple vehicle detection and tracking system using ongoing vehicle shadow information
JP2013190421A (en) Method for improving detection of traffic-object position in vehicle
WO2014002692A1 (en) Stereo camera
CN107787496B (en) Vanishing point correcting device and method
JP6617150B2 (en) Object detection method and object detection apparatus
KR102372296B1 (en) Apparatus and method for recognize driving lane on image
US9824449B2 (en) Object recognition and pedestrian alert apparatus for a vehicle
WO2018088262A1 (en) Parking frame recognition device
JP5097681B2 (en) Feature position recognition device
JP2011033594A (en) Distance calculation device for vehicle
WO2018088263A1 (en) Parking frame recognition device
JP5785515B2 (en) Pedestrian detection device and method, and vehicle collision determination device
WO2022009537A1 (en) Image processing device
US9030560B2 (en) Apparatus for monitoring surroundings of a vehicle
JP6263436B2 (en) Travel path recognition device
JP6729358B2 (en) Recognition device
US20180268228A1 (en) Obstacle detection device
WO2018097269A1 (en) Information processing device, imaging device, equipment control system, mobile object, information processing method, and computer-readable recording medium
JP7322651B2 (en) Obstacle recognition device
JP4876277B2 (en) VEHICLE IMAGE PROCESSING DEVICE, VEHICLE, AND VEHICLE IMAGE PROCESSING PROGRAM
JP6334773B2 (en) Stereo camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17868522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17868522

Country of ref document: EP

Kind code of ref document: A1