WO2024121880A1 - Information processing device, information processing method, and computer-readable medium - Google Patents

Information processing device, information processing method, and computer-readable medium

Info

Publication number
WO2024121880A1
WO2024121880A1 (PCT/JP2022/044666)
Authority
WO
WIPO (PCT)
Prior art keywords
road
area
vehicle
image
information processing
Prior art date
Application number
PCT/JP2022/044666
Other languages
French (fr)
Japanese (ja)
Inventor
匡孝 西田
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社
Priority to PCT/JP2022/044666
Publication of WO2024121880A1

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a non-transitory computer-readable medium on which a program is stored.
  • Patent Document 1 discloses a technology that, when there are multiple lane candidates that are line candidates and whose brightness is determined to be equal to or greater than a brightness threshold, recognizes the lane candidate that is closest to the vehicle among the multiple lane candidates as the lane marking line. This makes it possible to address the issue that, when multiple line candidates with different brightness such as white lines and yellow lines are detected, there is a risk that only the white lines will always be recognized.
  • the technology described in Patent Document 1 may not be able to properly detect lane markings depending on their shape, for example, when the lane marking (e.g., center line, lane boundary line) indicating one side of the lane in which the vehicle is traveling is dashed.
  • the objective of the present disclosure is to provide an information processing device, an information processing method, and a non-transitory computer-readable medium on which a program is stored that can properly detect lane markings.
  • an information processing device includes a recognition unit that recognizes an area of the road's dividing line from a first image of the road around a vehicle, a generation unit that generates a second image in which the recognized area of the road's dividing line is represented on a coordinate plane having the longitudinal and lateral directions of the vehicle as two axes, and an estimation unit that estimates linear figures representing the road's dividing line based on the second image.
  • a second aspect of the present disclosure provides an information processing method that recognizes an area of the road's dividing line from a first image of the road around a vehicle, generates a second image that represents the recognized area of the road's dividing line on a coordinate plane having the longitudinal and lateral directions of the vehicle as two axes, and estimates a linear figure representing the road's dividing line based on the second image.
  • a non-transitory computer-readable medium stores a program for causing a computer to execute a process of recognizing an area of road dividing lines from a first image of a road around a vehicle, generating a second image in which the recognized area of road dividing lines is represented on a coordinate plane having the longitudinal and lateral directions of the vehicle as two axes, and estimating linear figures representing the road dividing lines based on the second image.
  • lane markings can be properly detected.
  • FIG. 1 is a diagram illustrating an example of a configuration of an information processing device according to an embodiment.
  • FIG. 2 is a diagram illustrating an example of a hardware configuration of an information processing device according to an embodiment.
  • FIG. 3 is a flowchart illustrating an example of processing of the information processing device according to the embodiment.
  • FIG. 4 is a diagram showing an example of a captured image according to the embodiment.
  • FIG. 5 is a diagram showing an example of a captured image after threshold processing according to the embodiment.
  • FIG. 6 is a diagram illustrating an example of a binarized image in which a dashed line portion of a demarcation line is detected according to the embodiment.
  • FIG. 7 is a diagram showing an example of a dashed demarcation line on a bird's-eye view according to the embodiment.
  • FIG. 8 is a diagram illustrating an example of lanes on a bird's-eye view according to the embodiment.
  • FIG. 9 is a diagram illustrating an example of an output image according to the embodiment.
  • FIG. 1 is a diagram showing an example of the configuration of the information processing device 10 according to an embodiment.
  • the information processing device 10 has a recognition unit 11, a generation unit 12, and an estimation unit 13. Each of these units may be realized by cooperation between one or more programs installed in the information processing device 10 and hardware such as a processor and a memory of the information processing device 10.
  • the recognition unit 11 recognizes the area of the road dividing lines from a first image of the road around the vehicle.
  • the generation unit 12 generates a second image in which the area of the road dividing lines recognized by the recognition unit 11 is represented on a coordinate plane with the vehicle's vertical direction (e.g., forward direction) and horizontal direction as two axes.
  • the estimation unit 13 estimates linear figures representing the road dividing lines based on the second image.
  • FIG. 2 is a diagram showing an example of a hardware configuration of an information processing device 10 according to an embodiment.
  • the information processing device 10 (computer 100) includes a processor 101, a memory 102, and a communication interface 103. These units may be connected by a bus or the like.
  • the memory 102 stores at least a part of a program 104.
  • the communication interface 103 includes an interface required for communication with other network elements.
  • the memory 102 may be of any type.
  • the memory 102 may be, as a non-limiting example, a non-transitory computer-readable storage medium.
  • the memory 102 may also be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. Although only one memory 102 is shown in the computer 100, there may be several physically different memory modules in the computer 100.
  • the processor 101 may be of any type.
  • the processor 101 may include one or more of a general-purpose computer, a special-purpose computer, a microprocessor, a digital signal processor (DSP), and a processor based on a multi-core processor architecture, as a non-limiting example.
  • the computer 100 may have multiple processors, such as application-specific integrated circuit chips that are time-slaved to a clock that synchronizes the main processor.
  • Embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device.
  • the present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium.
  • the computer program product includes computer-executable instructions, such as instructions included in program modules, that execute on a target real or virtual processor device to perform the processes or methods of the present disclosure.
  • Program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or divided among program modules as desired in various embodiments.
  • the machine-executable instructions of the program modules may be executed in local or distributed devices. In a distributed device, the program modules may be located in both local and remote storage media.
  • Program codes for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes are provided to a processor or controller of a general purpose computer, a special purpose computer, or other programmable data processing apparatus. When the program code is executed by the processor or controller, the functions/operations in the flowcharts and/or implementing block diagrams are performed. The program code may be executed entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on a remote machine or server.
  • Non-transitory computer-readable media include various types of tangible recording media.
  • Examples of non-transitory computer-readable media include magnetic recording media, magneto-optical recording media, optical disk media, semiconductor memory, etc.
  • Magnetic recording media include, for example, flexible disks, magnetic tapes, hard disk drives, etc.
  • Magneto-optical recording media include, for example, magneto-optical disks, etc.
  • Optical disk media include, for example, Blu-ray disks, CD (Compact Disc)-ROM (Read Only Memory), CD-R (Recordable), CD-RW (ReWritable), etc.
  • Semiconductor memories include, for example, solid-state drives, mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (random access memory), etc.
  • the program may also be supplied to the computer by various types of temporary computer-readable media. Examples of temporary computer-readable media include electrical signals, optical signals, and electromagnetic waves.
  • the temporary computer-readable medium can provide the program to the computer via a wired communication path, such as an electric wire or optical fiber, or via a wireless communication path.
  • FIG. 3 is a flowchart showing an example of processing of the information processing device 10 according to the embodiment.
  • Fig. 4 is a diagram showing an example of a captured image according to the embodiment.
  • Fig. 5 is a diagram showing an example of a captured image after threshold processing according to the embodiment.
  • Fig. 6 is a diagram showing an example of a binarized image in which dashed line portions of a demarcation line according to the embodiment are detected.
  • Fig. 7 is a diagram showing an example of dashed lines of a demarcation line on a bird's-eye view according to the embodiment.
  • Fig. 8 is a diagram showing an example of lanes on a bird's-eye view according to the embodiment.
  • Fig. 9 is a diagram showing an example of an output image according to the embodiment.
  • the recognition unit 11 acquires a still image (first image) captured by a camera.
  • the recognition unit 11 may acquire, for example, one frame of a video captured by a monocular camera mounted on a vehicle.
  • the recognition unit 11 acquires a captured image 401 of the road in the forward direction (the direction of travel when traveling straight) of the vehicle (hereinafter also referred to as "own vehicle” as appropriate) on which the camera is mounted.
  • the recognition unit 11 recognizes, within the first image in which the road ahead of the vehicle is captured, dashed areas that are dividing lines (e.g., center line, lane boundary line) indicating one side of a lane of the road (step S102).
  • the recognition unit 11 may perform threshold processing based on the brightness of each pixel, for example.
  • the recognition unit 11 may convert pixel values in a specific range of brightness into a specific color, for example. This makes it possible to delete unnecessary background information and emphasize contours, for example.
  • the recognition unit 11 may then recognize dashed areas that are dividing lines, for example, based on the contours of areas in the image.
  • image 501 is generated by subjecting captured image 401 of Fig. 4 to threshold processing, in which the contours of the dashed line areas that are demarcation lines are emphasized.
  • a binarized image 601 is generated based on image 501 of Fig. 5, in which the dashed line areas of the demarcation lines are distinguished from other areas.
  • based on the recognition result from the recognition unit 11, the generation unit 12 generates a bird's-eye view (the "second image") in which the recognized dashed lines are mapped onto a coordinate plane whose two axes are the forward direction and the horizontal direction of the vehicle (step S103).
  • the generation unit 12 generates a bird's-eye view in which the values of the two axes correspond to the distance from the vehicle in each of the directions of the two axes in real space.
  • the bird's-eye view may be, for example, a view looking down vertically, or a view looking down diagonally from above the vehicle.
  • the generation unit 12 may generate the bird's-eye view by, for example, converting the coordinates in the captured image of the pixels in the dashed area of the demarcation line into coordinates in the bird's-eye view.
  • the camera may be attached so that the optical axis direction of the camera (center of the image) coincides with the forward direction of the vehicle.
  • information on the shooting conditions such as the camera's angle of view may be registered in the information processing device by the operator.
  • the generation unit 12 may determine a mathematical formula for converting the coordinates of each pixel in the captured image into the forward position and horizontal position of the vehicle in real space based on the information on the shooting conditions such as the camera's angle of view.
  • the forward and horizontal positions of the vehicle in real space may be registered for each of the coordinates of multiple points in the captured image.
  • the multiple points may be, for example, the four corner points of a trapezoid whose upper base is shorter than its lower base when the far side in the forward direction of the vehicle is taken as the upper base and the near side as the lower base.
  • the generation unit 12 may then determine a mathematical formula for converting the coordinates of each pixel in the captured image into the forward and horizontal positions of the vehicle in real space, based on information that associates the coordinates of each point in the captured image with the forward and horizontal positions of the vehicle in real space.
  • for example, when line segments of the dividing lines are detected using the coordinates of the captured image as shown in FIG. 6, a segment of a dividing line that is relatively far from the vehicle (e.g., line segment 611) appears as a shape close to a point, which can degrade its detection accuracy when lines are detected in those coordinates, for example, by a Hough transform.
  • in the present disclosure, the coordinates in the captured image are converted to coordinates in a bird's-eye view before the lines are detected by the Hough transform, so that, for example, the detection accuracy of dividing-line segments relatively far from the vehicle can be improved.
  • the estimation unit 13 divides the bird's-eye view into a plurality of areas according to the distance in the forward direction of the vehicle (step S104).
  • the estimation unit 13 may divide the bird's-eye view into, for example, a first area including a range less than a specific distance in the forward direction of the vehicle, and a second area including a range equal to or greater than the specific distance in the forward direction of the vehicle.
  • the estimation unit 13 divides the bird's-eye view 701 into areas 711, 712, 713, 714, and 715 in order of proximity to the vehicle, at specific distances (e.g., 10 m) in accordance with the distance ahead of the vehicle.
  • the generating unit 12 detects the line segments of the demarcation lines for each area of the bird's-eye view (step S105).
  • the generating unit 12 may detect the line segments of the demarcation lines for each area, for example, by performing a Hough transform for each area.
  • the generating unit 12 is not limited to using the Hough transform, and may detect the line segments using other known methods.
  • the estimation unit 13 estimates a linear figure representing the road dividing line based on the line segments detected in each area (step S106).
  • the estimation unit 13 may estimate, for example, a line passing through a position on a first line segment detected in the first area and a position on a second line segment detected in the second area as one side of a lane on the road. This can improve the detection accuracy of a linear figure representing a curved dividing line, for example, even when the road ahead of the vehicle is curved.
  • the estimation unit 13 may detect, for example, a straight line passing through a specific point on the line segment detected in each area as one side of the lane on the road.
  • the specific point may be, for example, a point on the line segment at a position closest to the midpoint in the forward direction of the vehicle in each area.
  • straight lines passing through points 721, 722, 723, 724, and 725 on the line segment at positions closest to the midpoint in the forward direction of the vehicle (vertical direction in FIG. 7) in each of areas 711, 712, 713, 714, and 715 are detected as one side of the lane on the road.
  • a left edge line 811 and a right edge line 812 of the lane in which the vehicle is traveling are detected, as shown in the bird's-eye view 801 of FIG. 8.
  • the estimation unit 13 may estimate a linear figure representing a lane marking of a road based on a line segment that satisfies a predetermined angle condition among the multiple detected line segments. In this case, the estimation unit 13 may determine, among the multiple line segments detected in the second area, a line segment that satisfies a predetermined angle condition from the direction in which the first line segment detected in the first area extends toward the second area, as a second line segment that forms one side of the same lane as the first line segment. This can improve the detection accuracy of the curved lane marking even when, for example, the road ahead of the vehicle is curved.
  • the estimation unit 13 may determine, among the multiple line segments detected in the second area, a line segment that exists within a specific angle (for example, 15°) centered on the direction in which the first line segment extends toward the second area from the end of the first line segment that is closer to the second area, as the second line segment.
  • for example, when another line segment satisfying the predetermined angle condition is detected for a specific line segment, the estimation unit 13 may determine the reliability of the estimation for that specific line segment to be higher.
  • when the width of the detected lane of the road is less than a threshold, the estimation unit 13 may detect (i.e., correct) the line on the other side of the lane as a line located a specific width away from the line on the side of the lane whose line segment was detected with the higher reliability by the Hough transform. This can improve the detection accuracy of lanes at positions relatively far from the vehicle, for example.
  • the estimation unit 13 may, for example, in the Hough transformation, first project (coordinate transformation) the coordinates (x, y) of each pixel on the line segment of the division line in the bird's-eye view to a point ( ⁇ , ⁇ ) in a two-dimensional polar coordinate space of distance ⁇ and angle ⁇ .
  • the distance ⁇ may be, for example, the length of a perpendicular line drawn from the origin to a straight line passing through the coordinates (x, y).
  • the angle ⁇ may be, for example, the angle between the x-axis and a perpendicular line drawn from the origin to a straight line passing through the coordinates (x, y).
  • the estimation unit 13 may then detect line segments in the xy coordinate system in the bird's-eye view based on the projected points ( ⁇ , ⁇ ), for example.
  • the estimation unit 13 may then determine that the greater the number of projected points ( ⁇ , ⁇ ), the higher the reliability of the line segments in the xy coordinate system in the bird's-eye view that are detected based on the points ( ⁇ , ⁇ ).
  • the estimation unit 13 may also calculate the reliability of the estimation for each linear figure representing an estimated lane marking of the road, and identify the linear figures representing the two lane markings that form both ends of the lane of the road based on the calculated reliability. For example, when a camera mounted on the vehicle continuously captures images, the estimation unit 13 may assign a higher reliability to the linear figure estimated in the current frame the closer it is to the linear figure representing the lane marking estimated in a past frame. The estimation unit 13 may also calculate the width between the linear figures representing the estimated lane markings of the road, and calculate the reliability based on a comparison between the calculated width and a predetermined reference value representing the width of a lane of the road.
  • in this case, for example, when another parallel line segment is detected at a position separated from a detected specific line segment by a distance equal to the predetermined reference value, the estimation unit 13 may determine the reliability of that specific line segment to be higher.
  • the estimation unit 13 may estimate the dividing line that forms the left end of the lane of the road based on the linear figure that has the highest reliability among the linear figures that are within a first lateral distance (e.g., within 3 m to the left of the center of the vehicle) in each area.
  • the estimation unit 13 may estimate the dividing line that forms the right end of the lane of the road based on the linear figure that has the highest reliability among the linear figures that are within a second lateral distance (e.g., within 3 m to the right of the center of the vehicle) in each area.
  • the estimation unit 13 outputs information based on the estimated road dividing lines (step S107).
  • the estimation unit 13 may output, for example, information indicating the distance from one end of the lane in which the vehicle is traveling to the vehicle itself.
  • the estimation unit 13 may also output, for example, an image in which the lines on both sides of the lane in which the vehicle is traveling are superimposed on the captured image, as shown in FIG. 9.
  • the left end line 811 and the right end line 812 of the lane in which the vehicle is traveling in FIG. 8 are converted from the coordinates of the bird's-eye view to the coordinates of the captured image, and lines 911 and 912 are superimposed on the captured image 401 in FIG. 4 to output an image 901.
  • This can, for example, assist in generating simulation data that reproduces the swaying of an actually driven vehicle for the development of an autonomous driving system or a driving assistance system. It can also, for example, assist in verifying the operation of a vehicle equipped with an autonomous driving system or a driving assistance system when the vehicle is actually driven, and in detecting in real time the lane in which such a vehicle is traveling while it is being driven.
  • the information processing device 10 may be a device contained in one housing, but the information processing device 10 of the present disclosure is not limited to this. Each unit of the information processing device 10 may be realized by cloud computing configured by one or more computers, for example. Such an information processing device 10 is also included in an example of the "information processing device" of the present disclosure.
  • (Appendix 1) An information processing device comprising: a recognition unit that recognizes an area of a lane marking of a road from a first image obtained by capturing a road around a vehicle; a generating unit that generates a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes that are the longitudinal direction and the lateral direction of the vehicle; and an estimation unit that estimates a linear figure representing a dividing line of the road based on the second image.
  • (Appendix 2) The information processing device, wherein the estimation unit detects a plurality of line segments based on an area of the lane markings of the road represented in the second image, and estimates a linear figure representing the lane markings of the road based on the detected plurality of line segments.
  • (Appendix 3) The information processing device according to Appendix 2, wherein the estimation unit divides the second image into a first area including a range less than a predetermined distance in the vertical direction of the vehicle and a second area including a range equal to or greater than the predetermined distance in the vertical direction of the vehicle, and estimates a linear figure representing a dividing line of the road based on a first line segment detected in the first area and a second line segment detected in the second area.
  • (Appendix 4) The information processing device according to Appendix 2 or 3, wherein the estimation unit estimates a linear figure representing a dividing line of the road based on a line segment, among the detected plurality of line segments, that satisfies a predetermined angle condition.
  • (Appendix 5) The information processing device according to Appendix 3, wherein the estimation unit determines, based on a direction in which the first line segment extends toward the second area, a second line segment that forms a linear figure representing the same lane marking as the first line segment, among the multiple line segments detected in the second area.
  • (Appendix 6) The information processing device, wherein the estimation unit calculates a reliability of estimation for a linear figure representing the estimated lane marking of the road, and identifies linear figures representing two lane markings forming both ends of a lane of the road based on the calculated reliability.
  • (Appendix 7) The information processing device according to Appendix 6, wherein the estimation unit calculates a width between linear figures representing the estimated lane lines of the road, and calculates the reliability based on a comparison between the calculated width and a predetermined reference value representing a width of a lane of the road.
  • (Appendix 9) A non-transitory computer-readable medium storing a program for causing a computer to execute a process comprising: recognizing an area of a road division line from a first image obtained by capturing a road around a vehicle; generating a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes corresponding to the longitudinal direction and the lateral direction of the vehicle; and estimating a linear figure representing a dividing line of the road based on the second image.

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Provided is an information processing device (10) comprising: a recognition unit (11) that recognizes an area of a lane line of a road from a first image in which the road around a vehicle is captured; a generation unit (12) that generates a second image in which the recognized area of the lane line of the road is represented on a coordinate plane having the longitudinal direction and horizontal direction of the vehicle as two axes; and an estimation unit (13) that estimates a linear figure representing the lane line of the road on the basis of the second image.

Description

情報処理装置、情報処理方法、及びコンピュータ可読媒体Information processing device, information processing method, and computer-readable medium
 本開示は、情報処理装置、情報処理方法、及びプログラムが格納された非一時的なコンピュータ可読媒体に関する。 The present disclosure relates to an information processing device, an information processing method, and a non-transitory computer-readable medium on which a program is stored.
 特許文献1には、線候補であって輝度が輝度閾値以上であると判断された複数の区画候補がある場合に、複数の区画候補のうち車両に最も近い区画候補走行区画線として認識する技術が開示されている。これにより、白線及び黄線等といった輝度の異なる複数の線候補が検出された場合に、常に白線のみが認識されるおそれがあるという課題に対応できることが開示されている。 Patent Document 1 discloses a technology that, when there are multiple lane candidates that are line candidates and whose brightness is determined to be equal to or greater than a brightness threshold, recognizes the lane candidate that is closest to the vehicle among the multiple lane candidates as the lane marking line. This makes it possible to address the issue that, when multiple line candidates with different brightness such as white lines and yellow lines are detected, there is a risk that only the white lines will always be recognized.
特開2019-20957号公報JP 2019-20957 A
 しかしながら、特許文献1に記載の技術では、例えば、車両が走行している車線の一方側を示す区画線(例えば、中央線、車線境界線)が破線である場合等、区画線の形状等によっては、区画線を適切に検出できないことがある。 However, the technology described in Patent Document 1 may not be able to properly detect lane markings depending on their shape, for example, when the lane marking (e.g., center line, lane boundary line) indicating one side of the lane in which the vehicle is traveling is dashed.
 本開示の目的は、上述した課題を鑑み、車線の区画線を適切に検出できる情報処理装置、情報処理方法、及びプログラムが格納された非一時的なコンピュータ可読媒体を提供することにある。 In view of the above-mentioned problems, the objective of the present disclosure is to provide an information processing device, an information processing method, and a non-transitory computer-readable medium on which a program is stored that can properly detect lane markings.
 本開示に係る第1の態様では、車両の周囲の道路が撮影された第1画像から前記道路の区画線の領域を認識する認識部と、認識された前記道路の区画線の領域を、前記車両の縦方向と横方向とを2軸とする座標面に表した第2画像を生成する生成部と、前記第2画像に基づいて、前記道路の区画線を表す線状の図形を推定する推定部と、を備える情報処理装置が提供される。 In a first aspect of the present disclosure, an information processing device is provided that includes a recognition unit that recognizes an area of the road's dividing line from a first image of the road around a vehicle, a generation unit that generates a second image in which the recognized area of the road's dividing line is represented on a coordinate plane having the longitudinal and lateral directions of the vehicle as two axes, and an estimation unit that estimates linear figures representing the road's dividing line based on the second image.
 また、本開示に係る第2の態様では、車両の周囲の道路が撮影された第1画像から前記道路の区画線の領域を認識し、認識した前記道路の区画線の領域を、前記車両の縦方向と横方向とを2軸とする座標面に表した第2画像を生成し、前記第2画像に基づいて、前記道路の区画線を表す線状の図形を推定する、情報処理方法が提供される。 In addition, a second aspect of the present disclosure provides an information processing method that recognizes an area of the road's dividing line from a first image of the road around a vehicle, generates a second image that represents the recognized area of the road's dividing line on a coordinate plane having the longitudinal and lateral directions of the vehicle as two axes, and estimates a linear figure representing the road's dividing line based on the second image.
 また、本開示に係る第3の態様では、車両の周囲の道路が撮影された第1画像から前記道路の区画線の領域を認識し、認識した前記道路の区画線の領域を、前記車両の縦方向と横方向とを2軸とする座標面に表した第2画像を生成し、前記第2画像に基づいて、前記道路の区画線を表す線状の図形を推定する、処理をコンピュータに実行させるプログラムが格納された非一時的なコンピュータ可読媒体が提供される。 In addition, in a third aspect of the present disclosure, a non-transitory computer-readable medium is provided that stores a program for causing a computer to execute a process of recognizing an area of road dividing lines from a first image of a road around a vehicle, generating a second image in which the recognized area of road dividing lines is represented on a coordinate plane having the longitudinal and lateral directions of the vehicle as two axes, and estimating linear figures representing the road dividing lines based on the second image.
 一側面によれば、車線の区画線を適切に検出できる。 In one aspect, lane markings can be properly detected.
実施形態に係る情報処理装置の構成の一例を示す図である。FIG. 1 is a diagram illustrating an example of a configuration of an information processing device according to an embodiment.
実施形態に係る情報処理装置のハードウェア構成例を示す図である。FIG. 2 is a diagram illustrating an example of a hardware configuration of an information processing device according to an embodiment.
実施形態に係る情報処理装置の処理の一例を示すフローチャートである。FIG. 3 is a flowchart illustrating an example of processing of the information processing device according to the embodiment.
実施形態に係る撮影画像の一例を示す図である。FIG. 4 is a diagram showing an example of a captured image according to the embodiment.
実施形態に係る閾値処理後の撮影画像の一例を示す図である。FIG. 5 is a diagram showing an example of a captured image after threshold processing according to the embodiment.
実施形態に係る区画線の破線部分が検出された二値化画像の一例を示す図である。FIG. 6 is a diagram illustrating an example of a binarized image in which a dashed line portion of a demarcation line is detected according to the embodiment.
実施形態に係る鳥瞰図上の区画線の破線の一例を示す図である。FIG. 7 is a diagram showing an example of a dashed demarcation line on a bird's-eye view according to the embodiment.
実施形態に係る鳥瞰図上の車線の一例を示す図である。FIG. 8 is a diagram illustrating an example of lanes on a bird's-eye view according to the embodiment.
実施形態に係る出力画像の一例を示す図である。FIG. 9 is a diagram illustrating an example of an output image according to the embodiment.
 本開示の原理は、いくつかの例示的な実施形態を参照して説明される。これらの実施形態は、例示のみを目的として記載されており、本開示の範囲に関する制限を示唆することなく、当業者が本開示を理解および実施するのを助けることを理解されたい。本明細書で説明される開示は、以下で説明されるもの以外の様々な方法で実装される。
 以下の説明および特許請求の範囲において、他に定義されない限り、本明細書で使用されるすべての技術用語および科学用語は、本開示が属する技術分野の当業者によって一般に理解されるのと同じ意味を有する。
 以下、図面を参照して、本開示の実施形態を説明する。
The principles of the present disclosure are described with reference to some exemplary embodiments. It should be understood that these embodiments are set forth for illustrative purposes only, to aid those skilled in the art in understanding and practicing the present disclosure, without implying any limitation on the scope of the present disclosure. The disclosure described herein may be implemented in various ways other than those described below.
In the following description and claims, unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
 (実施の形態1)
 <構成>
 図1を参照し、実施形態に係る情報処理装置10の構成について説明する。図1は、実施形態に係る情報処理装置10の構成の一例を示す図である。情報処理装置10は、認識部11、生成部12、及び推定部13を有する。これら各部は、情報処理装置10にインストールされた1以上のプログラムと、情報処理装置10のプロセッサ、及びメモリ等のハードウェアとの協働により実現されてもよい。
(Embodiment 1)
<Configuration>
The configuration of an information processing device 10 according to an embodiment will be described with reference to Fig. 1. Fig. 1 is a diagram showing an example of the configuration of the information processing device 10 according to an embodiment. The information processing device 10 has a recognition unit 11, a generation unit 12, and an estimation unit 13. Each of these units may be realized by cooperation between one or more programs installed in the information processing device 10 and hardware such as a processor and a memory of the information processing device 10.
 認識部11は、車両の周囲の道路が撮影された第1画像から道路の区画線の領域を認識する。生成部12は、認識部11により認識された道路の区画線の領域を、車両の縦方向(例えば、前方方向)と横方向とを2軸とする座標面に表した第2画像を生成する。推定部13は、第2画像に基づいて、道路の区画線を表す線状の図形を推定する。 The recognition unit 11 recognizes the area of the road dividing lines from a first image of the road around the vehicle. The generation unit 12 generates a second image in which the area of the road dividing lines recognized by the recognition unit 11 is represented on a coordinate plane with the vehicle's vertical direction (e.g., forward direction) and horizontal direction as two axes. The estimation unit 13 estimates linear figures representing the road dividing lines based on the second image.
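As a rough illustration of how the three units could be wired together, the following sketch (not part of the publication) assumes OpenCV is available; the class name, method names, and the brightness threshold of 180 are illustrative placeholders, not values taken from the disclosure.

```python
import cv2


class LaneLineEstimator:
    """Illustrative three-stage pipeline: recognize -> generate -> estimate."""

    def recognize(self, frame_bgr):
        # Recognition unit: return a binary mask of candidate marking pixels
        # from the captured image (first image).
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
        return mask

    def generate(self, mask, homography, bev_size):
        # Generation unit: map the recognized area onto a coordinate plane
        # whose axes are the vehicle's forward and lateral directions
        # (second image / bird's-eye view).
        return cv2.warpPerspective(mask, homography, bev_size)

    def estimate(self, bev_mask):
        # Estimation unit: detect line segments per area and fit the linear
        # figures representing the dividing lines (sketched further below).
        raise NotImplementedError

    def run(self, frame_bgr, homography, bev_size):
        mask = self.recognize(frame_bgr)
        bev = self.generate(mask, homography, bev_size)
        return self.estimate(bev)
```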
 (実施の形態2)
 <ハードウェア構成>
 図2は、実施形態に係る情報処理装置10のハードウェア構成例を示す図である。図2の例では、情報処理装置10(コンピュータ100)は、プロセッサ101、メモリ102、通信インターフェイス103を含む。これら各部は、バス等により接続されてもよい。メモリ102は、プログラム104の少なくとも一部を格納する。通信インターフェイス103は、他のネットワーク要素との通信に必要なインターフェイスを含む。
(Embodiment 2)
<Hardware Configuration>
Fig. 2 is a diagram showing an example of a hardware configuration of an information processing device 10 according to an embodiment. In the example of Fig. 2, the information processing device 10 (computer 100) includes a processor 101, a memory 102, and a communication interface 103. These units may be connected by a bus or the like. The memory 102 stores at least a part of a program 104. The communication interface 103 includes an interface required for communication with other network elements.
 プログラム104が、プロセッサ101及びメモリ102等の協働により実行されると、コンピュータ100により本開示の実施形態の少なくとも一部の処理が行われる。メモリ102は、任意のタイプのものであってもよい。メモリ102は、非限定的な例として、非一時的なコンピュータ可読記憶媒体でもよい。また、メモリ102は、半導体ベースのメモリデバイス、磁気メモリデバイスおよびシステム、光学メモリデバイスおよびシステム、固定メモリおよびリムーバブルメモリなどの任意の適切なデータストレージ技術を使用して実装されてもよい。コンピュータ100には1つのメモリ102のみが示されているが、コンピュータ100にはいくつかの物理的に異なるメモリモジュールが存在してもよい。プロセッサ101は、任意のタイプのものであってよい。プロセッサ101は、汎用コンピュータ、専用コンピュータ、マイクロプロセッサ、デジタル信号プロセッサ(DSP:Digital Signal Processor)、および非限定的な例としてマルチコアプロセッサアーキテクチャに基づくプロセッサの1つ以上を含んでよい。コンピュータ100は、メインプロセッサを同期させるクロックに時間的に従属する特定用途向け集積回路チップなどの複数のプロセッサを有してもよい。 When the program 104 is executed by the processor 101, the memory 102, etc. in cooperation with each other, the computer 100 performs at least some of the processing of the embodiments of the present disclosure. The memory 102 may be of any type. The memory 102 may be, as a non-limiting example, a non-transitory computer-readable storage medium. The memory 102 may also be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. Although only one memory 102 is shown in the computer 100, there may be several physically different memory modules in the computer 100. The processor 101 may be of any type. The processor 101 may include one or more of a general-purpose computer, a special-purpose computer, a microprocessor, a digital signal processor (DSP), and a processor based on a multi-core processor architecture, as a non-limiting example. The computer 100 may have multiple processors, such as application-specific integrated circuit chips that are time-slaved to a clock that synchronizes the main processor.
 本開示の実施形態は、ハードウェアまたは専用回路、ソフトウェア、ロジックまたはそれらの任意の組み合わせで実装され得る。いくつかの態様はハードウェアで実装されてもよく、一方、他の態様はコントローラ、マイクロプロセッサまたは他のコンピューティングデバイスによって実行され得るファームウェアまたはソフトウェアで実装されてもよい。 Embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device.
 本開示はまた、非一時的なコンピュータ可読記憶媒体に有形に記憶された少なくとも1つのコンピュータプログラム製品を提供する。コンピュータプログラム製品は、プログラムモジュールに含まれる命令などのコンピュータ実行可能命令を含み、対象の実プロセッサまたは仮想プロセッサ上のデバイスで実行され、本開示のプロセスまたは方法を実行する。プログラムモジュールには、特定のタスクを実行したり、特定の抽象データ型を実装したりするルーチン、プログラム、ライブラリ、オブジェクト、クラス、コンポーネント、データ構造などが含まれる。プログラムモジュールの機能は、様々な実施形態で望まれるようにプログラムモジュール間で結合または分割されてもよい。プログラムモジュールのマシン実行可能命令は、ローカルまたは分散デバイス内で実行できる。分散デバイスでは、プログラムモジュールはローカルとリモートの両方のストレージメディアに配置できる。 The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium. The computer program product includes computer-executable instructions, such as instructions included in program modules, that execute on a target real or virtual processor device to perform the processes or methods of the present disclosure. Program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or divided among program modules as desired in various embodiments. The machine-executable instructions of the program modules may be executed in local or distributed devices. In a distributed device, the program modules may be located in both local and remote storage media.
 本開示の方法を実行するためのプログラムコードは、1つ以上のプログラミング言語の任意の組み合わせで書かれてもよい。これらのプログラムコードは、汎用コンピュータ、専用コンピュータ、またはその他のプログラム可能なデータ処理装置のプロセッサまたはコントローラに提供される。プログラムコードがプロセッサまたはコントローラによって実行されると、フローチャートおよび/または実装するブロック図内の機能/動作が実行される。プログラムコードは、完全にマシン上で実行され、一部はマシン上で、スタンドアロンソフトウェアパッケージとして、一部はマシン上で、一部はリモートマシン上で、または完全にリモートマシンまたはサーバ上で実行される。 Program codes for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes are provided to a processor or controller of a general purpose computer, a special purpose computer, or other programmable data processing apparatus. When the program code is executed by the processor or controller, the functions/operations in the flowcharts and/or implementing block diagrams are performed. The program code may be executed entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on a remote machine or server.
 プログラムは、様々なタイプの非一時的なコンピュータ可読媒体を用いて格納され、コンピュータに供給することができる。非一時的なコンピュータ可読媒体は、様々なタイプの実体のある記録媒体を含む。非一時的なコンピュータ可読媒体の例には、磁気記録媒体、光磁気記録媒体、光ディスク媒体、半導体メモリ等が含まれる。磁気記録媒体には、例えば、フレキシブルディスク、磁気テープ、ハードディスクドライブ等が含まれる。光磁気記録媒体には、例えば、光磁気ディスク等が含まれる。光ディスク媒体には、例えば、ブルーレイディスク、CD(Compact Disc)-ROM(Read Only Memory)、CD-R(Recordable)、CD-RW(ReWritable)等が含まれる。半導体メモリには、例えば、ソリッドステートドライブ、マスクROM、PROM(Programmable ROM)、EPROM(Erasable PROM)、フラッシュROM、RAM(random access memory)等が含まれる。また、プログラムは、様々なタイプの一時的なコンピュータ可読媒体によってコンピュータに供給されてもよい。一時的なコンピュータ可読媒体の例は、電気信号、光信号、及び電磁波を含む。一時的なコンピュータ可読媒体は、電線及び光ファイバ等の有線通信路、又は無線通信路を介して、プログラムをコンピュータに供給できる。 The program may be stored and supplied to the computer using various types of non-transitory computer-readable media. Non-transitory computer-readable media include various types of tangible recording media. Examples of non-transitory computer-readable media include magnetic recording media, magneto-optical recording media, optical disk media, semiconductor memory, etc. Magnetic recording media include, for example, flexible disks, magnetic tapes, hard disk drives, etc. Magneto-optical recording media include, for example, magneto-optical disks, etc. Optical disk media include, for example, Blu-ray disks, CD (Compact Disc)-ROM (Read Only Memory), CD-R (Recordable), CD-RW (ReWritable), etc. Semiconductor memories include, for example, solid-state drives, mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (random access memory), etc. The program may also be supplied to the computer by various types of temporary computer-readable media. Examples of temporary computer-readable media include electrical signals, optical signals, and electromagnetic waves. The temporary computer-readable medium can provide the program to the computer via a wired communication path, such as an electric wire or optical fiber, or via a wireless communication path.
 <処理>
 次に、図3から図9を参照し、実施形態に係る情報処理装置10の処理の一例について説明する。図3は、実施形態に係る情報処理装置10の処理の一例を示すフローチャートである。図4は、実施形態に係る撮影画像の一例を示す図である。図5は、実施形態に係る閾値処理後の撮影画像の一例を示す図である。図6は、実施形態に係る区画線の破線部分が検出された二値化画像の一例を示す図である。図7は、実施形態に係る鳥瞰図上の区画線の破線の一例を示す図である。図8は、実施形態に係る鳥瞰図上の車線の一例を示す図である。図9は、実施形態に係る出力画像の一例を示す図である。
<Processing>
Next, an example of processing of the information processing device 10 according to the embodiment will be described with reference to Figs. 3 to 9. Fig. 3 is a flowchart showing an example of processing of the information processing device 10 according to the embodiment. Fig. 4 is a diagram showing an example of a captured image according to the embodiment. Fig. 5 is a diagram showing an example of a captured image after threshold processing according to the embodiment. Fig. 6 is a diagram showing an example of a binarized image in which dashed line portions of a demarcation line according to the embodiment are detected. Fig. 7 is a diagram showing an example of dashed lines of a demarcation line on a bird's-eye view according to the embodiment. Fig. 8 is a diagram showing an example of lanes on a bird's-eye view according to the embodiment. Fig. 9 is a diagram showing an example of an output image according to the embodiment.
 ステップS101において、認識部11は、カメラで撮影された静止画像(第1画像)を取得する。ここで、認識部11は、例えば、車両に搭載されている単眼カメラで撮影された動画の一のフレームを取得してもよい。図4の例では、認識部11は、カメラが搭載されている車両(以下で、適宜「自車両」とも称する)の前方方向(直進時の進行方向)の道路が撮影された撮影画像401を取得している。 In step S101, the recognition unit 11 acquires a still image (first image) captured by a camera. Here, the recognition unit 11 may acquire, for example, one frame of a video captured by a monocular camera mounted on a vehicle. In the example of FIG. 4, the recognition unit 11 acquires a captured image 401 of the road in the forward direction (the direction of travel when traveling straight) of the vehicle (hereinafter also referred to as "own vehicle" as appropriate) on which the camera is mounted.
 続いて、認識部11は、自車両の前方の道路が撮影された第1画像内の領域のうち道路の車線の一方側を示す区画線(例えば、中央線、車線境界線)である破線の領域を認識する(ステップS102)。ここで、認識部11は、例えば、各画素の輝度に基づいて閾値処理を行ってもよい。この場合、認識部11は、例えば、特定範囲の輝度の画素の値を、特定の色に変換してもよい。これにより、例えば、不要な背景の情報を削除し、輪郭を強調することができる。そして、認識部11は、例えば、画像中の領域の輪郭等に基づいて、区画線である破線の領域を認識してもよい。 Then, the recognition unit 11 recognizes dashed areas that are dividing lines (e.g., center lines, lane boundaries) that indicate one side of the road lanes within the area in the first image in which the road ahead of the vehicle is photographed (step S102). Here, the recognition unit 11 may perform threshold processing based on the brightness of each pixel, for example. In this case, the recognition unit 11 may convert pixel values in a specific range of brightness into a specific color, for example. This makes it possible to delete unnecessary background information and emphasize contours, for example. The recognition unit 11 may then recognize dashed areas that are dividing lines, for example, based on the contours of areas in the image.
 図5の例では、図4の撮影画像401が閾値処理により、区画線である破線の領域の輪郭が強調された画像501が生成されている。図6の例では、図5の画像501に基づいて、区画線の破線の領域と、他の領域とが区別された二値化画像601が生成されている。 In the example of Fig. 5, image 501 is generated by subjecting captured image 401 of Fig. 4 to threshold processing, in which the contours of the dashed line areas that are demarcation lines are emphasized. In the example of Fig. 6, a binarized image 601 is generated based on image 501 of Fig. 5, in which the dashed line areas of the demarcation lines are distinguished from other areas.
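As one possible reading of the brightness-based threshold processing and binarization of step S102 (images 501 and 601), the sketch below uses OpenCV thresholding followed by contour filtering; the brightness range and the minimum contour area are assumed values, not figures from the disclosure.

```python
import cv2
import numpy as np


def binarize_markings(frame_bgr, lo=180, hi=255, min_area=50):
    """Keep pixels whose brightness lies in [lo, hi] and suppress blobs too
    small to be dashed-line segments (cf. images 501 and 601)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mask = cv2.inRange(gray, lo, hi)  # threshold processing on brightness
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    clean = np.zeros_like(mask)
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            cv2.drawContours(clean, [c], -1, 255, thickness=cv2.FILLED)
    return clean
```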
 続いて、生成部12は、認識部11による認識結果に基づいて、自車両の前方方向と水平方向とを2軸として、認識した破線がマッピングされた鳥瞰図(鳥観図。「第2画像」)を生成する(ステップS103)。ここで、生成部12は、当該2軸のそれぞれの値を、実空間における当該2軸のそれぞれの方向での自車両からの距離に応じた値とした鳥瞰図を生成する。当該鳥瞰図は、例えば、鉛直下向きに見下ろした図でもよいし、車両の上空から斜めに見下ろした図でもよい。 Then, based on the recognition result by the recognition unit 11, the generation unit 12 generates a bird's-eye view (bird's-eye view; "second image") in which the recognized dashed lines are mapped along two axes, the forward direction and the horizontal direction of the vehicle (step S103). Here, the generation unit 12 generates a bird's-eye view in which the values of the two axes correspond to the distance from the vehicle in each of the directions of the two axes in real space. The bird's-eye view may be, for example, a view looking down vertically, or a view looking down diagonally from above the vehicle.
 ここで、生成部12は、例えば、区画線の破線の領域内の画素の撮影画像における座標を、鳥瞰図での座標に変換することにより、当該鳥瞰図を生成してもよい。この場合、例えば、カメラは、オペレータにより自車両に設置される際に、カメラの光軸方向(画像の中心)と自車両の前方方向とが一致するように取り付けられてもよい。また、カメラの画角等の撮影条件の情報が、オペレータにより情報処理装置に登録されてもよい。そして、生成部12は、例えば、カメラの画角等の撮影条件の情報に基づいて、撮影画像内の各画素の座標を、実空間における自車両の前方方向の位置及び水平方向の位置に変換する数式を決定してもよい。 Here, the generation unit 12 may generate the bird's-eye view by, for example, converting the coordinates in the captured image of the pixels in the dashed area of the demarcation line into coordinates in the bird's-eye view. In this case, for example, when the operator installs the camera on the vehicle, the camera may be attached so that the optical axis direction of the camera (center of the image) coincides with the forward direction of the vehicle. In addition, information on the shooting conditions such as the camera's angle of view may be registered in the information processing device by the operator. Then, the generation unit 12 may determine a mathematical formula for converting the coordinates of each pixel in the captured image into the forward position and horizontal position of the vehicle in real space based on the information on the shooting conditions such as the camera's angle of view.
 また、オペレータにより自車両に設置される際に、例えば、撮影画像内の複数点の座標毎に、実空間における自車両の前方方向の位置及び水平方向の位置が登録されてもよい。この場合、当該複数点は、例えば、車両の前方方向の遠方を上底、近方を下底とした場合に上底の方が下底よりも短い台形の4隅の点でもよい。そして、生成部12は、例えば、撮影画像内の各点の座標と、実空間における自車両の前方方向の位置及び水平方向の位置とが対応付けられた情報に基づいて、撮影画像内の各画素の座標を、実空間における自車両の前方方向の位置及び水平方向の位置に変換する数式を決定してもよい。 Furthermore, when the system is installed on the vehicle by the operator, the forward and horizontal positions of the vehicle in real space may be registered for each of the coordinates of multiple points in the captured image. In this case, the multiple points may be, for example, the four corner points of a trapezoid whose upper base is shorter than its lower base when the far side in the forward direction of the vehicle is taken as the upper base and the near side as the lower base. The generation unit 12 may then determine a mathematical formula for converting the coordinates of each pixel in the captured image into the forward and horizontal positions of the vehicle in real space, based on information that associates the coordinates of each point in the captured image with the forward and horizontal positions of the vehicle in real space.
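One conventional way to realize the coordinate conversion described above is a planar homography computed from the four registered point correspondences (the trapezoid corners). The sketch below assumes OpenCV; every numeric value (pixel coordinates, metre positions, canvas size, scale) is a placeholder for illustration only.

```python
import cv2
import numpy as np

# Four registered correspondences between captured-image pixels (trapezoid
# corners) and forward/lateral positions in metres; placeholder values.
IMG_PTS = np.float32([[420, 720], [860, 720], [760, 450], [520, 450]])
GROUND_PTS_M = np.float32([[-1.8, 5.0], [1.8, 5.0], [1.8, 30.0], [-1.8, 30.0]])

PX_PER_M, BEV_W, BEV_H = 20, 400, 600  # bird's-eye canvas, origin at bottom centre
BEV_PTS = np.float32([[BEV_W / 2 + x * PX_PER_M, BEV_H - y * PX_PER_M]
                      for x, y in GROUND_PTS_M])

# Homography that plays the role of the conversion formula: captured-image
# coordinates -> bird's-eye (vehicle-centred) coordinates.
H = cv2.getPerspectiveTransform(IMG_PTS, BEV_PTS)


def to_birds_eye(binary_mask):
    return cv2.warpPerspective(binary_mask, H, (BEV_W, BEV_H))
```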
 例えば、図6に示すような撮影画像での座標で区画線の線分を検出する場合、自車両から比較的遠い位置の区画線の線分(例えば、線分611)は、点に近い形状に写されている。そのため、撮影画像での座標で例えばハフ変換により線分を検出する場合は、自車両から比較的遠い位置の区画線の線分の検出精度に問題が生じる場合がある。一方、本開示によれば、撮影画像での座標を鳥瞰図での座標に変換してからハフ変換により線分を検出するため、例えば、自車両から比較的遠い位置の区画線の線分の検出精度を向上できる。 For example, when detecting a lane line segment using coordinates in a captured image as shown in FIG. 6, a lane line segment that is relatively far from the vehicle (e.g., line segment 611) is captured as a shape close to a point. Therefore, when detecting lines using coordinates in a captured image, for example, by a Hough transform, problems may arise with the detection accuracy of lane lines that are relatively far from the vehicle. On the other hand, according to the present disclosure, the coordinates in the captured image are converted to coordinates in a bird's-eye view, and then the lines are detected using a Hough transform, so that, for example, the detection accuracy of lane lines that are relatively far from the vehicle can be improved.
 続いて、推定部13は、前記車両の前方方向の距離に応じて、鳥瞰図を複数のエリアに分割する(ステップS104)。ここで、推定部13は、例えば、鳥瞰図を、自車両の前方方向の特定距離未満の範囲を含む第1エリアと、自車両の前方方向の当該特定距離以上の範囲を含む第2エリアとに分割してもよい Then, the estimation unit 13 divides the bird's-eye view into a plurality of areas according to the distance in the forward direction of the vehicle (step S104). Here, the estimation unit 13 may divide the bird's-eye view into, for example, a first area including a range less than a specific distance in the forward direction of the vehicle, and a second area including a range equal to or greater than the specific distance in the forward direction of the vehicle.
 図7の例では、推定部13は、自車両の前方方向での距離に応じて、特定距離(例えば、10m)毎に、自車両から近い順に各エリア711、712、713、714、715に鳥瞰図701を分割している。 In the example of FIG. 7, the estimation unit 13 divides the bird's-eye view 701 into areas 711, 712, 713, 714, and 715 in order of proximity to the vehicle, at specific distances (e.g., 10 m) in accordance with the distance ahead of the vehicle.
 続いて、生成部12は、鳥瞰図のエリア毎に、区画線の線分を検出する(ステップS105)。ここで、生成部12は、例えば、エリア毎にハフ(Hough)変換を実行することにより、エリア毎に区画線の線分を検出してもよい。なお、生成部12は、ハフ変換に限定されず、他の公知の手法を用いて線分を検出してもよい。 Then, the generating unit 12 detects the line segments of the demarcation lines for each area of the bird's-eye view (step S105). Here, the generating unit 12 may detect the line segments of the demarcation lines for each area, for example, by performing a Hough transform for each area. Note that the generating unit 12 is not limited to using the Hough transform, and may detect the line segments using other known methods.
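Steps S104 and S105 could be sketched as follows, assuming OpenCV: the bird's-eye mask is cut into bands of equal forward distance starting at the image bottom (nearest to the vehicle) and a probabilistic Hough transform is run per band. The band size in pixels and the Hough parameters are illustrative, not values from the publication.

```python
import cv2
import numpy as np


def segments_per_area(bev_mask, n_areas=5, band_px=200):
    """Split the bird's-eye image into bands of equal forward distance
    (e.g. 10 m each) and detect line segments per band."""
    h = bev_mask.shape[0]
    per_area = []
    for i in range(n_areas):
        bottom = h - i * band_px
        if bottom <= 0:
            break
        top = max(bottom - band_px, 0)
        band = bev_mask[top:bottom, :]
        lines = cv2.HoughLinesP(band, rho=1, theta=np.pi / 180, threshold=20,
                                minLineLength=15, maxLineGap=10)
        # Shift y back to full-image coordinates so areas stay comparable.
        segs = [] if lines is None else [(x1, y1 + top, x2, y2 + top)
                                         for x1, y1, x2, y2 in lines[:, 0]]
        per_area.append(segs)
    return per_area
```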
 続いて、推定部13は、各エリアで検出した線分に基づいて、道路の区画線を表す線状の図形を推定する(ステップS106)。ここで、推定部13は、例えば、第1エリアで検出した第1線分上の位置と第2エリアで検出した第2線分上の位置とを通る線を、道路における車線の一方側として推定してもよい。これにより、例えば、自車両の前方がカーブしているような場合でも、カーブしている区画線を表す線状の図形の検出精度を向上できる。 Then, the estimation unit 13 estimates a linear figure representing the road dividing line based on the line segments detected in each area (step S106). Here, the estimation unit 13 may estimate, for example, a line passing through a position on a first line segment detected in the first area and a position on a second line segment detected in the second area as one side of a lane on the road. This can improve the detection accuracy of a linear figure representing a curved dividing line, for example, even when the road ahead of the vehicle is curved.
 この場合、推定部13は、例えば、各エリアで検出した線分における特定の点を通る直線を、道路における車線の一方側として検出してもよい。当該特定の点は、例えば、各エリアでの自車両の前方方向の中間に最も近い位置での当該線分上の点でもよい。この場合、図7の例では、各エリア711、712、713、714、715の自車両の前方方向(図7の縦方向)の中間に最も近い位置での当該線分上の各点721、722、723、724、725を通る直線が、道路における車線の一方側として検出される。これにより、図8の鳥瞰図801に示すように、例えば、自車両の走行車線の左端の線811及び右端の線812が検出される。 In this case, the estimation unit 13 may detect, for example, a straight line passing through a specific point on the line segment detected in each area as one side of the lane on the road. The specific point may be, for example, a point on the line segment at a position closest to the midpoint in the forward direction of the vehicle in each area. In this case, in the example of FIG. 7, straight lines passing through points 721, 722, 723, 724, and 725 on the line segment at positions closest to the midpoint in the forward direction of the vehicle (vertical direction in FIG. 7) in each of areas 711, 712, 713, 714, and 715 are detected as one side of the lane on the road. As a result, for example, a left edge line 811 and a right edge line 812 of the lane in which the vehicle is traveling are detected, as shown in the bird's-eye view 801 of FIG. 8.
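One way to realize the "straight line through a specific point in each area" described above is to pick, per area, the segment endpoint nearest the band midpoint (cf. points 721 to 725) and fit a line through those points; the function names and the least-squares fit are this sketch's assumptions, applied separately to the left-side and right-side candidates.

```python
import numpy as np


def representative_points(per_area_segments, bev_h, band_px):
    """For each area, take the segment endpoint whose forward position is
    closest to the middle of that band (cf. points 721-725 in FIG. 7)."""
    points = []
    for i, segments in enumerate(per_area_segments):
        mid_y = bev_h - (i + 0.5) * band_px  # band midpoint in pixels
        best, best_d = None, float("inf")
        for x1, y1, x2, y2 in segments:
            for x, y in ((x1, y1), (x2, y2)):
                if abs(y - mid_y) < best_d:
                    best, best_d = (float(x), float(y)), abs(y - mid_y)
        if best is not None:
            points.append(best)
    return points


def fit_boundary(points):
    """Fit x = a*y + b through the representative points of one side of the
    lane; doing this per side yields lines such as 811 and 812 in FIG. 8."""
    xs = np.array([p[0] for p in points])
    ys = np.array([p[1] for p in points])
    a, b = np.polyfit(ys, xs, 1)
    return a, b
```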
 推定部13は、検出された複数の線分のうち所定の角度条件を満たす線分に基づいて、道路の区画線を表す線状の図形を推定してもよい。この場合、推定部13は、第2エリアで検出された複数の線分のうち、第1エリアで検出した第1線分が第2エリアに向かって延びる方向から所定の角度条件を満たす線分を、第1線分と同一の車線の一方側を形成する第2線分として決定してもよい。これにより、例えば、自車両の前方の道路がカーブしている等の場合でも、カーブしている区画線の線分の検出精度を向上できる。この場合、推定部13は、第2エリアで検出された複数の線分のうち、当該第1線分の両端部のうち第2エリアに近い方の端部から、当該第1線分が第2エリアに向かって延びる方向を中心とした特定角度(例えば、15°)以内に存在する線分を、当該第2線分として決定してもよい。また、推定部13は、例えば、特定の線分に対して、所定の角度条件を満たす他の線分が検出されている場合、当該特定の線分に対する推定の信頼度をより高く決定してもよい。 The estimation unit 13 may estimate a linear figure representing a lane marking of a road based on a line segment that satisfies a predetermined angle condition among the multiple detected line segments. In this case, the estimation unit 13 may determine, among the multiple line segments detected in the second area, a line segment that satisfies a predetermined angle condition from the direction in which the first line segment detected in the first area extends toward the second area, as a second line segment that forms one side of the same lane as the first line segment. This can improve the detection accuracy of the curved lane marking even when, for example, the road ahead of the vehicle is curved. In this case, the estimation unit 13 may determine, among the multiple line segments detected in the second area, a line segment that exists within a specific angle (for example, 15°) centered on the direction in which the first line segment extends toward the second area from the end of the first line segment that is closer to the second area, as the second line segment. In addition, for example, when another line segment that satisfies a predetermined angle condition is detected for a specific line segment, the estimation unit 13 may determine the reliability of the estimation for the specific line segment to be higher.
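A minimal sketch of the angle condition follows, using the 15° figure mentioned above as its default; how the candidate's near end is chosen is an assumption of this sketch, not something specified in the publication.

```python
import math


def satisfies_angle_condition(first_seg, candidate_seg, max_deg=15.0):
    """Return True when the candidate segment (detected in the far area) lies
    within `max_deg` of the direction in which the first segment (detected in
    the near area) extends toward the far area."""
    fx1, fy1, fx2, fy2 = first_seg
    if fy2 > fy1:  # make (fx2, fy2) the end closer to the far area (smaller y)
        fx1, fy1, fx2, fy2 = fx2, fy2, fx1, fy1
    cx1, cy1, cx2, cy2 = candidate_seg
    # Near end of the candidate, assumed to be the endpoint with the larger y.
    cx, cy = (cx1, cy1) if cy1 > cy2 else (cx2, cy2)
    extend_dir = math.atan2(fy2 - fy1, fx2 - fx1)
    toward_cand = math.atan2(cy - fy2, cx - fx2)
    diff = abs(math.degrees(toward_cand - extend_dir)) % 360.0
    return min(diff, 360.0 - diff) <= max_deg
```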
 また、推定部13は、検出した道路の車線の幅員が閾値未満である場合、当該車線の両側の線分のうち、ハフ(Hough)変換による線分の検出の信頼度が高い方の線分に基づく当該車線の一方側の線から特定の幅員の線を、当該車線の他方側の線として検出(補正、修正)してもよい。これにより、例えば、自車両から比較的遠い位置の車線の検出精度を向上できる。この場合、推定部13は、例えば、ハフ(Hough)変換において、まず、鳥瞰図における区画線の線分上の各画素の座標(x,y)を、距離ρと角度θの極座標二次元空間上の点(ρ,θ)に射影(座標変換)してもよい。ここで、距離ρは、例えば、座標(x,y)を通る直線に対し、原点から垂線を下ろしたときの長さでもよい。また、角度θは、例えば、座標(x,y)を通る直線に対し、原点から垂線を下ろしたときにx軸となす角度でもよい。そして、推定部13は、例えば、射影された点(ρ,θ)に基づいて、鳥瞰図でのxy座標系における線分を検出してもよい。そして、推定部13は、例えば、射影された点(ρ,θ)の数が多いほど、当該点(ρ,θ)に基づいて検出された、鳥瞰図でのxy座標系における線分の信頼度を高く決定してもよい。 In addition, when the width of the detected lane of the road is less than a threshold value, the estimation unit 13 may detect (correct, modify) a line of a specific width from one side of the lane based on the line segment on both sides of the lane that has a higher reliability of line segment detection by Hough transformation as the line on the other side of the lane. This can improve the detection accuracy of lanes at positions relatively far from the vehicle, for example. In this case, the estimation unit 13 may, for example, in the Hough transformation, first project (coordinate transformation) the coordinates (x, y) of each pixel on the line segment of the division line in the bird's-eye view to a point (ρ, θ) in a two-dimensional polar coordinate space of distance ρ and angle θ. Here, the distance ρ may be, for example, the length of a perpendicular line drawn from the origin to a straight line passing through the coordinates (x, y). In addition, the angle θ may be, for example, the angle between the x-axis and a perpendicular line drawn from the origin to a straight line passing through the coordinates (x, y). The estimation unit 13 may then detect line segments in the xy coordinate system in the bird's-eye view based on the projected points (ρ, θ), for example. The estimation unit 13 may then determine that the greater the number of projected points (ρ, θ), the higher the reliability of the line segments in the xy coordinate system in the bird's-eye view that are detected based on the points (ρ, θ).
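The ρ-θ projection and the vote-count-based reliability described above can be illustrated with a small accumulator, using the standard relation rho = x*cos(theta) + y*sin(theta); the resolution parameters are placeholders and the implementation is only a sketch of the idea.

```python
import numpy as np


def hough_votes(points_xy, n_theta=180, rho_res=1.0):
    """Project each pixel (x, y) of a dividing-line area to (rho, theta),
    accumulate votes, and use the vote count of the strongest cell as the
    detection reliability of the corresponding line segment."""
    pts = np.asarray(points_xy, dtype=float)
    thetas = np.deg2rad(np.arange(n_theta))
    max_rho = float(np.ceil(np.hypot(pts[:, 0], pts[:, 1]).max())) + 1.0
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in pts:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    best_cell = np.unravel_index(np.argmax(acc), acc.shape)
    reliability = int(acc[best_cell])  # more voting points -> higher reliability
    rho = best_cell[0] * rho_res - max_rho
    theta = thetas[best_cell[1]]
    return (rho, theta), reliability
```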
 The estimation unit 13 may also calculate an estimation reliability for each linear figure representing an estimated lane marking of the road, and identify, based on the calculated reliability, the linear figures representing the two lane markings that form both edges of the lane of the road. For example, when a camera mounted on the vehicle captures images continuously, the estimation unit 13 may determine the reliability of a linear figure estimated in the current frame to be higher the closer it is to the linear figure representing the lane marking estimated in a past frame. The estimation unit 13 may also calculate the width between the linear figures representing the estimated lane markings of the road, and calculate the reliability based on a comparison between the calculated width and a predetermined reference value representing the width of a lane of the road. In this case, for example, when another, parallel line segment is detected at a position separated from a detected specific line segment by a distance equal to the predetermined reference value, the estimation unit 13 may determine the reliability of that specific line segment to be higher. The estimation unit 13 may then, in each area, estimate the dividing line forming the left edge of the lane of the road based on the linear figure with the highest reliability among the linear figures within a first lateral distance (e.g., within 3 m to the left of the center of the vehicle). Similarly, the estimation unit 13 may, in each area, estimate the dividing line forming the right edge of the lane of the road based on the linear figure with the highest reliability among the linear figures within a second lateral distance (e.g., within 3 m to the right of the center of the vehicle).
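 For illustration, picking the left and right lane edges from such reliability scores might look like the sketch below; the candidate representation (a signed lateral offset in metres from the vehicle center, negative to the left, plus a precomputed reliability) and the 3 m limits are assumptions taken from the example values above.

    def pick_lane_edges(candidates, max_left_m=3.0, max_right_m=3.0):
        """candidates: list of dicts with keys "offset_m" (signed lateral offset,
        negative = left of the vehicle center) and "reliability" (estimation
        reliability computed as described above)."""
        left = [c for c in candidates if -max_left_m <= c["offset_m"] < 0.0]
        right = [c for c in candidates if 0.0 <= c["offset_m"] <= max_right_m]
        left_edge = max(left, key=lambda c: c["reliability"], default=None)
        right_edge = max(right, key=lambda c: c["reliability"], default=None)
        return left_edge, right_edge  # either may be None if no candidate qualifies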
 Then, the estimation unit 13 outputs information based on the estimated road dividing lines (step S107). Here, the estimation unit 13 may output, for example, information indicating the distance from one side of the lane in which the vehicle is traveling to the vehicle itself. The estimation unit 13 may also output, for example, an image in which the lines on both sides of the lane in which the vehicle is traveling are superimposed on the captured image, as shown in FIG. 9. In the example of FIG. 9, an image 901 is output in which lines 911 and 912, obtained by converting the left-end line 811 and the right-end line 812 of the travel lane of the vehicle in FIG. 8 from the coordinates of the bird's-eye view into the coordinates of the captured image, are superimposed on the captured image 401 of FIG. 4.
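 One possible way to produce such an overlay is sketched below; this is an assumption for illustration rather than the implementation described here. The 3x3 homography H_bev_to_img mapping bird's-eye-view coordinates to captured-image pixels, and the use of OpenCV for drawing, are assumptions of this example.

    import cv2
    import numpy as np

    def overlay_lane_lines(image, bev_lines, H_bev_to_img):
        """bev_lines: iterable of ((x1, y1), (x2, y2)) endpoints in bird's-eye-view
        coordinates; H_bev_to_img: assumed 3x3 homography to image pixels."""
        out = image.copy()
        for p1, p2 in bev_lines:
            pts = np.float32([p1, p2]).reshape(-1, 1, 2)
            img_pts = cv2.perspectiveTransform(pts, H_bev_to_img).reshape(-1, 2)
            q1 = tuple(int(v) for v in img_pts[0])
            q2 = tuple(int(v) for v in img_pts[1])
            cv2.line(out, q1, q2, color=(0, 255, 0), thickness=3)  # draw lane line
        return out

    # e.g., image_901 = overlay_lane_lines(captured_image_401, [line_811, line_812], H_bev_to_img)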
 This can, for example, assist in generating simulation data that mimics the wandering of the vehicle as it was actually driven, for use in developing an autonomous driving system or a driving assistance system. It can also, for example, assist in verifying the operation of a vehicle equipped with an autonomous driving system or a driving assistance system when the vehicle is actually driven. Furthermore, for example, while a vehicle equipped with an autonomous driving system or a driving assistance system is actually being driven, the lane in which the vehicle is traveling can be detected in real time.
 <Modification>
 The information processing device 10 may be a device contained in a single housing, but the information processing device 10 of the present disclosure is not limited to this. Each unit of the information processing device 10 may be realized by cloud computing configured by one or more computers, for example. Such an information processing device 10 is also included among examples of the "information processing device" of the present disclosure.
 Note that this disclosure is not limited to the above-described embodiment, and can be modified as appropriate without departing from the spirit and scope of the disclosure.
 A part or all of the above-described embodiments can also be described as, but are not limited to, the following supplementary notes.
 (Appendix 1)
 An information processing device comprising:
 a recognition unit that recognizes an area of a lane marking of a road from a first image obtained by capturing a road around a vehicle;
 a generating unit that generates a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes that are the longitudinal direction and the lateral direction of the vehicle; and
 an estimation unit that estimates a linear figure representing a dividing line of the road based on the second image.
 (Appendix 2)
 The information processing device according to Appendix 1, wherein the estimation unit detects a plurality of line segments based on an area of the lane markings of the road represented in the second image, and estimates a linear figure representing the lane markings of the road based on the detected plurality of line segments.
 (Appendix 3)
 The information processing device according to Appendix 2, wherein the estimation unit divides the second image into a first area including a range less than a predetermined distance in the longitudinal direction of the vehicle and a second area including a range equal to or greater than the predetermined distance in the longitudinal direction of the vehicle, and estimates a linear figure representing a dividing line of the road based on a first line segment detected in the first area and a second line segment detected in the second area.
 (Appendix 4)
 The information processing device according to Appendix 2 or 3, wherein the estimation unit estimates a linear figure representing a lane marking of the road based on a line segment that satisfies a predetermined angle condition among the plurality of detected line segments.
 (Appendix 5)
 The information processing device according to Appendix 3, wherein the estimation unit determines, based on a direction in which the first line segment extends toward the second area, a second line segment that forms a linear figure representing the same lane marking as the first line segment, among the plurality of line segments detected in the second area.
 (Appendix 6)
 The information processing device according to Appendix 2 or 3, wherein the estimation unit calculates a reliability of estimation for a linear figure representing the estimated lane marking of the road, and identifies, based on the calculated reliability, linear figures representing two lane markings forming both edges of a lane of the road.
 (Appendix 7)
 The information processing device according to Appendix 6, wherein the estimation unit calculates a width between linear figures representing the estimated lane lines of the road, and calculates the reliability based on a comparison between the calculated width and a predetermined reference value representing a width of a lane of the road.
 (Appendix 8)
 An information processing method comprising:
 recognizing an area of a road division line from a first image obtained by capturing a road around a vehicle;
 generating a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes corresponding to the longitudinal direction and the lateral direction of the vehicle; and
 estimating a linear figure representing a dividing line of the road based on the second image.
 (Appendix 9)
 A non-transitory computer-readable medium storing a program for causing a computer to execute a process comprising:
 recognizing an area of a road division line from a first image obtained by capturing a road around a vehicle;
 generating a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes corresponding to the longitudinal direction and the lateral direction of the vehicle; and
 estimating a linear figure representing a dividing line of the road based on the second image.
1 Monitoring system
10 Information processing device
11 Recognition unit
12 Generation unit
13 Estimation unit

Claims (9)

  1.  An information processing device comprising:
      a recognition unit that recognizes an area of a lane marking of a road from a first image obtained by capturing a road around a vehicle;
      a generating unit that generates a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes that are the longitudinal direction and the lateral direction of the vehicle; and
      an estimation unit that estimates a linear figure representing a dividing line of the road based on the second image.
  2.  The information processing device according to claim 1, wherein the estimation unit detects a plurality of line segments based on an area of the lane markings of the road represented in the second image, and estimates a linear figure representing the lane markings of the road based on the detected plurality of line segments.
  3.  The information processing device according to claim 2, wherein the estimation unit divides the second image into a first area including a range less than a predetermined distance in the longitudinal direction of the vehicle and a second area including a range equal to or greater than the predetermined distance in the longitudinal direction of the vehicle, and estimates a linear figure representing a dividing line of the road based on a first line segment detected in the first area and a second line segment detected in the second area.
  4.  The information processing device according to claim 2 or 3, wherein the estimation unit estimates a linear figure representing a lane marking of the road based on a line segment that satisfies a predetermined angle condition among the plurality of detected line segments.
  5.  The information processing device according to claim 3, wherein the estimation unit determines, based on a direction in which the first line segment extends toward the second area, a second line segment that forms a linear figure representing the same lane marking as the first line segment, among the plurality of line segments detected in the second area.
  6.  The information processing device according to claim 2 or 3, wherein the estimation unit calculates a reliability of estimation for a linear figure representing the estimated lane marking of the road, and identifies, based on the calculated reliability, linear figures representing two lane markings forming both edges of a lane of the road.
  7.  The information processing device according to claim 6, wherein the estimation unit calculates a width between linear figures representing the estimated lane lines of the road, and calculates the reliability based on a comparison between the calculated width and a predetermined reference value representing a width of a lane of the road.
  8.  An information processing method comprising:
      recognizing an area of a road division line from a first image obtained by capturing a road around a vehicle;
      generating a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes corresponding to the longitudinal direction and the lateral direction of the vehicle; and
      estimating a linear figure representing a dividing line of the road based on the second image.
  9.  A non-transitory computer-readable medium storing a program for causing a computer to execute a process comprising:
      recognizing an area of a road division line from a first image obtained by capturing a road around a vehicle;
      generating a second image in which the recognized lane marking area of the road is represented on a coordinate plane having two axes corresponding to the longitudinal direction and the lateral direction of the vehicle; and
      estimating a linear figure representing a dividing line of the road based on the second image.
PCT/JP2022/044666 2022-12-05 2022-12-05 Information processing device, information processing method, and computer-readable medium WO2024121880A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/044666 WO2024121880A1 (en) 2022-12-05 2022-12-05 Information processing device, information processing method, and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/044666 WO2024121880A1 (en) 2022-12-05 2022-12-05 Information processing device, information processing method, and computer-readable medium

Publications (1)

Publication Number Publication Date
WO2024121880A1 true WO2024121880A1 (en) 2024-06-13

Family

ID=91378758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/044666 WO2024121880A1 (en) 2022-12-05 2022-12-05 Information processing device, information processing method, and computer-readable medium

Country Status (1)

Country Link
WO (1) WO2024121880A1 (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005346197A (en) * 2004-05-31 2005-12-15 Toyota Motor Corp Method and device for detecting lane boundary line, and method and device for controlling lane keeping
JP2009169510A (en) * 2008-01-11 2009-07-30 Nec Corp Lane recognition device, lane recognition method, and lane recognition program
JP2012175483A (en) * 2011-02-23 2012-09-10 Renesas Electronics Corp Device and method for traffic lane recognition
JP5466342B1 (en) * 2012-08-13 2014-04-09 本田技研工業株式会社 Road environment recognition device
JP2016081361A (en) * 2014-10-20 2016-05-16 株式会社日本自動車部品総合研究所 Travel compartment line recognition apparatus
JP2018097782A (en) * 2016-12-16 2018-06-21 クラリオン株式会社 Section line recognition device
WO2018216177A1 (en) * 2017-05-25 2018-11-29 本田技研工業株式会社 Vehicle control device
