US20170330043A1 - Method and System for Synthesizing a Lane Image - Google Patents
- Publication number
- US20170330043A1 (application US 15/152,222)
- Authority
- US
- United States
- Prior art keywords
- image
- lane
- frames
- vehicle
- synthesizing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G06K9/00798—
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D15/00—Steering not otherwise provided for
- B62D15/02—Steering position indicators ; Steering position determination; Steering aids
- B62D15/029—Steering assistants using warnings or proposing actions to the driver without influencing the steering system
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2625—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2625—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
- H04N5/2627—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect for providing spin image effect, 3D stop motion effect or temporal freeze effect
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
Definitions
- the present invention is related to synthesizing a lane image and, more specifically, to a method of handling lane detection in scenarios involving dashed lane lines.
- a Lane Departure Warning (LDW) system is a set of active safety-assistance systems for a vehicle, in which a video image capture device shoots scenes of the road and then detects the locations of the lane lines to determine whether the driver is letting the vehicle drift from the center of the lane. Whenever the vehicle is judged to be drifting from the center of the lane, the LDW system pops up a warning message and suggests that the driver steer back to the center of the lane.
- LDW Lane Departure Warning
- FIG. 1 illustrates two frames 13 and 15 taken from a video image capture device 10 installed on a moving vehicle 11 , which moves in an upward direction from the lower location P 11 to the upper location P 12 shown in this figure.
- when the moving vehicle 11 is located at the lower location P 11 , frame 13 is captured; and when the moving vehicle 11 is located at the upper location P 12 , frame 15 is captured.
- frame 14 includes the same portion of the shot as frame 13 ; frame 16 includes the same portion of the shot as frame 15 ; and frames 13 - 16 together illustrate the LDW system applied in the scenario of gaps between dashes of lanes 17 - 19 .
- FIG. 2 illustrates 12 frames (Frames 2 A- 2 L) retrieved from a video image capture device in a given time under the condition that the 12 frames are captured with a frame rate of 30 frames per second (FPS) and the velocity of a vehicle is equal to 85 kilometers/hour (km/hr).
- FPS frames per second
- the lane departure warning system includes an image sensing unit, an edge extracting unit, a lane recognizing unit, a lane type determining unit, a lane color detecting unit, a lane pattern generating unit, and a lane departure determining unit.
- the image sensing unit senses a plurality of images.
- the edge extracting unit emphasizes the edge components necessary for lane recognition.
- the lane recognizing unit detects straight line components.
- the lane type determining unit determines a type of the lane.
- the lane color detecting unit detects a color of the lane from an image signal value.
- the lane pattern generating unit generates a lane pattern.
- the lane departure determining unit determines lane departure in consideration of the type and the color of the lane and a state of a turn signal lamp.
- although Kim et al. can determine whether a lane line is a dashed line, the method only applies under the condition that the lane line features occupy most of the area in the frame; that is to say, there need to be more than two dashes in the ROI.
- the scope of the lane lines may be too narrow in the image to detect more than two dashes under some settings, such as the LDW system being installed on the front part of the vehicle with the hood blocking essential information on the road. This could cause a misjudgment of the kind of lane, or even a complete failure to detect it.
- Masato Imai et al. propose a lane departure warning apparatus capable of preventing false warnings and the absence of a warning regarding lane departure which is attributed to special road geometries such as junctions and tollgates.
- the lane departure warning apparatus, which outputs a warning signal upon determining the departure of a vehicle from a lane, performs the steps of: when one dividing line in a vehicle width direction of the vehicle is not detected, estimating a position of that dividing line based on a position of the other dividing line as a first estimated dividing line; estimating a position of the non-detected dividing line based on its position prior to non-detection as a second estimated dividing line; and comparing the first estimated dividing line with the second estimated dividing line to determine lane departure.
- Masato Imai et al. must determine whether the estimated dividing line is correct before the next detection occurs, and cannot judge any displacement within the interval between two detections.
- this algorithm fails when the lines on both sides near the vehicle are dashed. If there is noise in the spacing of the dashes, it would impact the assessment of the estimated dividing line, and the problem of dashed lines would remain unsolved.
- this technology cannot be used in the case of a driving recorder mounted on the front part of the vehicle, and it increases the system's load.
- Kazuyuki Sakurai (U.S. Pat. No. 8,655,081) discloses a method to address the problem of lane line detection, especially dashed line detection. This method improves the lane recognition accuracy by suppressing noises that are likely to be generated in an original image and in a bird's-eye image, respectively.
- the lane recognition system recognizes a lane based on an image.
- the system includes: a synthesized bird's-eye image creation module which creates a synthesized bird's-eye image by connecting a plurality of bird's-eye images that are obtained by transforming respective partial regions of original images picked up at a plurality of different times into bird's-eye images; a lane line candidate extraction module which detects a lane line candidate by using information of the original images or the bird's-eye images created from the original images, and the synthesized bird's-eye image; and a lane line position estimation module which estimates a lane line position based on information of the lane line candidate.
- Kazuyuki Sakurai installs the system on the back part of the vehicle.
- this requires a more sophisticated technique, such as a bird's-eye transformation algorithm.
- it also needs to process two frames simultaneously to perform lane detection and judgment, which likewise increases the system's load.
- the present invention is related to a method for synthesizing a lane image.
- the method includes steps of: retrieving M continuous image frames at a frame rate from a video image capture device; determining a quantity N for image mapping based on a dash length of a dashed line and a distance between two dashes of the dashed lines; determining a frame interval for mapping image frames based on the dash length, the distance, a velocity of a vehicle, and the frame rate; fetching at least N image frames from the M continuous image frames at the frame interval; and synthesizing the at least N image frames to obtain the lane image using an image synthesizing device.
- a method for real-time image synthesis from a video image capture device installed on a vehicle includes steps of: retrieving M continuous image frames at a frame rate from the video image capture device built on the vehicle; determining a frame interval for mapping image frames based on a dash length of a dashed line, a distance between two dashes of the dashed lines, a real-time velocity v of the vehicle and the frame rate; determining a quantity N for image mapping at least based on the dash length and the distance; fetching at least N image frames from the M continuous image frames at the frame interval; and synthesizing the at least N image frames to obtain a lane image by an image synthesizing device.
- a lane image synthesizing system for a vehicle includes a database, and an image mapping module.
- the database contains a plurality of images.
- the image mapping module is configured to: determine a quantity N for image mapping; determine an interval based on parameters including at least one of a velocity of the vehicle and a sampling rate of the plurality of images; fetch at least N images from the plurality of images according to the interval; and synthesize the at least N images into a lane image.
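The fetch-and-synthesize steps above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: it assumes binary frames stored as nested lists, newest frame last in the buffer, and a pixel-wise maximum (the union, for binary frames) as the synthesizing operation; the function name is illustrative.

```python
def fetch_and_synthesize(buffer, N, m):
    # Fetch N frames spaced m apart, newest first: frame numbers
    # F, F-m, ..., F-(N-1)*m, where F is the newest frame in the buffer.
    picked = [buffer[-1 - i * m] for i in range(N)]
    # Synthesize by pixel-wise maximum (the union, for binary frames).
    h, w = len(picked[0]), len(picked[0][0])
    return [[max(f[y][x] for f in picked) for x in range(w)]
            for y in range(h)]

# Eleven buffered 1x3 binary frames; with N = 3 and m = 5 the dash
# fragments in frames 0, 5 and 10 combine into one unbroken line:
buf = [[[0, 0, 0]] for _ in range(11)]
buf[0], buf[5], buf[10] = [[1, 0, 0]], [[0, 1, 0]], [[0, 0, 1]]
print(fetch_and_synthesize(buf, 3, 5))  # [[1, 1, 1]]
```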
- FIG. 1 illustrates two shots retrieved from a video image capture device and the results of lane detection
- FIG. 2 illustrates 12 frames retrieved from a video image capture device under the condition of 30 frames per second and 85 kilometers/hour as a velocity of a vehicle;
- FIG. 3 illustrates a scheme for synthesizing a lane image with three frames retrieved from a video image capture device according to the embodiments of the present invention
- FIG. 4 illustrates a dash length of a dashed line, a distance between two dashes of the dashed lines according to regulations for lane lines, as well as a moving distance of a vehicle between two frames retrieved from a video image capture device.
- FIG. 5 illustrates a plot regarding a velocity of a vehicle and a frame interval for mapping frames
- FIG. 6 illustrates a diagram of synthesizing a lane image with four frames to form a lane image according to the embodiments of the present invention
- FIG. 7 illustrates a flowchart of a method for synthesizing a lane image according to the embodiments of the present invention
- FIG. 8 illustrates a diagram of synthesizing a lane image with three binary frames to form a lane image, in which the frame interval for mapping frames equals 5;
- FIG. 9 illustrates a diagram of a lane image synthesizing system consisting of an image mapping module, an image processing module and a prompting module according to the embodiments of the present invention
- FIG. 10 illustrates a diagram of synthesizing a lane image with three gray scaled frames to form a lane image, in which the frame interval for mapping frames equals 5;
- FIG. 11 illustrates a diagram of a lane image synthesizing system consisting of an image processing module, an image mapping module, and a prompting module according to the embodiments of the present invention
- FIGS. 12(A)-12(C) illustrate the conditions of 50 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames equal to 7 on the street in the daytime, as the scenario of one embodiment of the present invention;
- FIGS. 13(A)-13(C) illustrate the conditions of 56 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames equal to 6 on a curve of the street at night, as the scenario of one embodiment of the present invention.
- the invention is related to synthesizing images shot at different times based on a velocity of a vehicle and the regulations for lane lines to obtain an optimal image synthesizing condition as well as a stable lane detection.
- FIG. 3 illustrates the synthesizing scheme of the present invention, drawn from a bird's-eye viewpoint.
- the method includes referring to a velocity of a vehicle 36 moving from left to right and locating at a left location P 31 , a middle location P 32 and a right location P 33 .
- the velocity can be measured by a global positioning system (GPS).
- GPS global positioning system
- the vehicle 36 passes the lanes 301 - 304 , and a video image capture device 361 built or installed on the front part of the vehicle shoots three frames, each with 45 degrees of view in the ROI and a depth of field illustrated by the vertical dashed lines 351 - 352 , 353 - 354 and 355 - 356 , in a time sequence.
- the scenes within the depth of field are shot by the video image capture device 361 built on the vehicle 36 .
- Variables d 1 and d 2 both demonstrate the differences of depths of field between frames 1 - 2 and frames 2 - 3 relative to the lane 301 shot in individual frames.
- the video image capture device 361 built on the vehicle 36 at the location P 31 shoots nearly two complete lanes 303 - 304 with 45 degrees of view in the ROI in a depth of field of frame 1 , ranged between dashed lines 351 - 352 .
- the video image capture device 361 can only shoot a portion of the lane 303 within 45 degrees of view in the ROI in a depth of field of frame 2 , which is also illustrated by d 1 of lane 303 ′.
- the method further includes steps of: computing an interval and a quantity for mapping images by referring to a dash length L of a dashed line and a distance S between two dashes of the dashed lines; retrieving ROIs of previous images from a video image capture device, such as frame number 1 - 3 shot in the upper part of this figure; and composing a number of images into a lane image as shown in the lower part of the figure.
- the present invention effectively improves the success rate for later lane detection.
- FIG. 4 illustrates a dash length L of a dashed line and a distance S between two dashes of the dashed lines according to the regulations for lane lines, as well as moving distances d 41 and d 42 of a vehicle, referred to lane 403 , between every two frames retrieved from a video image capture device, wherein the vehicle is passing through lanes 401 - 404 from left to right.
- L = 4 meters and S = 6 meters based on the regulations for lane lines in Taiwan;
- L = 3 meters and S = 9 meters for the corresponding standard in the United States.
- ceil(x) is a function which maps x to the least integer greater than or equal to x
- N least represents a least quantity for image mapping
- L represents a dash length of a dashed line
- S represents a distance between two dashes of the dashed lines; the necessary count N shall be no less than the least quantity for image mapping N least , as in formula II:
- a moving distance d of the vehicle between two frames should fall within the range between S/(N−1) and L, as in formula III:
- formula IV can be further formatted as formula V:
- floor(x) is a function which maps x to the greatest integer less than or equal to x
- N least is the minimal integer among all of the necessary count N for image mapping.
- the necessary count N and the frame interval m both satisfy formula II and inequality VI.
- smaller values of the necessary count N and the frame interval m are preferred in the embodiments.
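Since formulas I-VI are not reproduced in this text, the sketch below reconstructs them from the stated constraint that the per-step displacement d = m·v/f must lie between S/(N−1) and L. This reconstruction is an assumption, but it yields N = 3 for Taiwan's 4 m/6 m markings, N = 4 for the U.S. 3 m/9 m standard, and the frame intervals m = 7 at 50 km/h and m = 6 at 56 km/h reported for FIGS. 12 and 13.

```python
import math

def least_frame_count(L, S):
    # To bridge a gap S using dashes of length L, the N fetched frames
    # must satisfy (N - 1) * d >= S with a per-step displacement d <= L,
    # which forces N >= S / L + 1.
    return math.ceil(S / L + 1)

def frame_interval(L, S, N, v_kmh, fps):
    # Smallest integer frame interval m such that the displacement
    # between fetched frames, d = m * v / f, lies in [S / (N - 1), L].
    v = v_kmh * 1000.0 / 3600.0            # km/h -> m/s
    m_lo = math.ceil(S * fps / ((N - 1) * v))
    m_hi = math.floor(L * fps / v)
    return m_lo if m_lo <= m_hi else None  # None: no feasible interval

print(least_frame_count(4, 6))             # 3 (Taiwan regulations)
print(frame_interval(4, 6, 3, 85, 30))     # 4 (85 km/h, 30 FPS)
```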
- FIG. 5 illustrates a plot regarding a relationship between a velocity of a vehicle and a frame interval for mapping frames.
- the frame interval can be calculated at least based on the individual velocity of the vehicle.
- At least N image frames retrieved from certain continuous image frames at the frame interval m are fetched. If each of the image frames belongs to a binary image, one should take the union of the at least N image frames to form the lane image. If each of the image frames belongs to a gray scale image or a color image, one should consider a Max function or an addition algorithm for said image frames to form the lane image.
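As a minimal sketch of this synthesis step, a single pixel-wise maximum covers both cases: on binary frames it equals the union of lane pixels, and on gray scale frames it acts as the Max function. Frames are represented as nested lists here; the function name is illustrative, not from the patent.

```python
def synthesize(frames):
    # Pixel-wise maximum over the fetched frames: for binary frames this
    # equals the union of lane pixels; for gray-scale frames it keeps the
    # brightest (lane-marking) value at each position.
    h, w = len(frames[0]), len(frames[0][0])
    return [[max(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

# Three binary frames, each holding a different fragment of one dashed line:
f1 = [[1, 1, 0, 0, 0, 0]]
f2 = [[0, 0, 1, 1, 0, 0]]
f3 = [[0, 0, 0, 0, 1, 1]]
print(synthesize([f1, f2, f3]))  # [[1, 1, 1, 1, 1, 1]]
```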
- FIG. 6 illustrates a diagram of synthesizing a lane image with four frames to form a lane image according to the embodiments of the present invention.
- a vehicle 66 is passing through lanes 601 - 603 from left to right, wherein lane 603 is a solid line and lanes 601 - 602 are dashed lines.
- a video image capture device in the vehicle 66 shoots frames within a depth of field defined by two vertical dashed lines. For example, the video image capture device shoots a complete lane 603 and a fragment of lane 602 in a ROI of the depth of field of the frame whose number equals F.
- the lanes 601 - 602 shot in the individual frames (frame numbers F, F−m, F−2m and F−3m) are all superimposed and rendered in this figure by referring to the relative position among the lanes 601 - 603 , the vehicle 66 and a sun 600 in the sky.
- FIG. 7 illustrates a flowchart of a method for synthesizing a lane image according to the embodiments of the present invention.
- a video image capture device shoots the scenes of the road as a source image (step S 701 ) and stores each image frame in a memory buffer (step S 702 ), wherein each image frame is one selected from the group consisting of a binary image, a gray scale image and a color image, depending on the type of video image capture device.
- an optimal calculator for image mapping and another optimal calculator for the frame interval used in mapping image frames are applied to generate a quantity N for image mapping and a frame interval m for mapping image frames according to the regulations for lane lines, a frame rate f and a real-time velocity v of a vehicle (steps S 703 -S 705 ).
- At least N image frames are fetched from a number of image frames retrieved from the memory buffer; and the at least N image frames are used to obtain the lane image using an image synthesizing device (step S 706 ).
- for binary images, a further step of taking the union of the at least N image frames to form the lane image is performed.
- for gray scale or color images, a Max function over said image frames is chosen to form the lane image.
- other image operators could be used in said image frames with gray scale pixels, such as a Sobel filter.
- the image synthesizing device can be built on an embedded system or any other portable information platform.
- portable information platforms such as mobile phones, PDAs, pagers, etc.
- portable information platforms are typically based on an embedded controller that integrates a microprocessor and a set of system and application programs in the same device.
- a virtual machine such as Java Virtual Machine (JVM) or Microsoft Virtual Machine (MVM) is integrated to the embedded system as a cross-platform foundation for the running of application programs on the information platform.
- JVM Java Virtual Machine
- MVM Microsoft Virtual Machine
- in step S 707 , once the lane image is completed, image processing or prompting can be conducted based on the well-defined lane image as a destination image.
- FIG. 8 illustrates a diagram of synthesizing a lane image with three binary frames (frame number F, F- 5 , and F- 10 ) to form a lane image as a mapping result, and the frame interval for mapping frames equals 5. It can be seen that fragments 81 - 83 of a lane can be combined into a complete lane 84 shown in this figure.
- FIG. 9 illustrates a diagram of a lane image synthesizing system according to the embodiments of the present invention.
- the lane image synthesizing system includes an image mapping module 901 , an image processing module 902 , a prompting module 903 and a message generation module 904 , wherein a lane image is formed before applying the image processing module 902 and the prompting module 903 .
- the idea of the image mapping module 901 is similar to the embodiment in FIG. 7 , which can be viewed as another image synthesizing device used in step S 706 .
- the image mapping module 901 includes an image mapping calculator 9011 , a frame interval calculator 9012 , an image register 9013 and an image composer module 9014 .
- the image register 9013 is responsible for storing a plurality of images 900 from a video image capture device and the image register 9013 plays the role as an image database.
- a lane image can be composed by referring to the necessary count for image mapping, a frame interval and a specific velocity. This velocity v can be measured from a global positioning system, a radar speed measuring device (RSMD), a laser speed measuring device (LSMD), an Average Speed Calculator (ASC) or any other speed measuring device.
- RSMD radar speed measuring device
- LSMD laser speed measuring device
- ASC Average Speed Calculator
- a necessary count for image mapping and a frame interval corresponding to a specific velocity of a vehicle are calculated via the image mapping calculator 9011 and the frame interval calculator 9012 .
- the image mapping calculator 9011 determines a least quantity N least for image mapping, while the frame interval calculator 9012 determines a quantity N and the frame interval based on parameters including at least one of the velocity of the vehicle and a sampling rate of the plurality of images 900 , e.g. 30 frames per second for these continuous images.
- the image mapping module 901 can obtain a velocity value, a length value, a distance value and a sampling rate value.
- the velocity value, the length value, the distance value and the sampling rate value respectively represent the velocity v, the dash length L, the distance S and the sampling rate (or frame rate).
- the frame interval is determined based on the velocity value, the length value, the distance value and the sampling rate value.
- the plurality of images 900 could be stored as frames. However, the plurality of images 900 could also be viewed as a stream and be stored in a multidimensional way.
- the parameters used in the frame interval calculator 9012 may further include a length of a dashed line and a distance between two dashes of the dashed lines.
- the image mapping module 901 in FIG. 9 then fetches at least N images from the plurality of images according to the frame interval.
- An image composer module 9014 finally synthesizes the at least N images into a lane image by means of a max filter.
- the image processing module 902 includes a ROI cropping and scaling module 9021 , a contrast enhancement module 9022 , an edge extraction module 9023 and a noise reduction module 9024 .
- the image processing module 902 is configured to perform at least a procedure selected from a group consisting of regions of interest (ROI) cropping and scaling implemented by the ROI cropping and scaling module 9021 , a contrast enhancement implemented by the contrast enhancement module 9022 , an edge extraction implemented by the edge extraction module 9023 , a noise reduction implemented by the noise reduction module 9024 and a combination thereof for producing the lane image.
- ROI regions of interest
- the ROI cropping and scaling module 9021 can change the image shape, while scaling maintains the morphology of the objects in the image without otherwise altering the image content.
- the contrast enhancement module 9022 changes the image value distribution to cover a wide range for the ease of human vision.
- an edge extraction technique extracts the skeleton of the objects in the image, such as the lines of the lane.
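The patent does not specify the contrast enhancement algorithm used by module 9022; as one plausible sketch, a linear min-max stretch remaps the value distribution to cover the full range. The function name and parameter are illustrative assumptions.

```python
def stretch_contrast(img, out_max=255):
    # Linear min-max stretch: remap pixel values so the darkest pixel
    # becomes 0 and the brightest becomes out_max.
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1  # avoid dividing by zero on flat images
    return [[(p - lo) * out_max // span for p in row] for row in img]

# A dim 2x2 gray-scale patch spread to the full 0..255 range:
print(stretch_contrast([[100, 150], [150, 200]]))  # [[0, 127], [127, 255]]
```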
- the prompting module 903 includes a line detection module 9031 , a lane determinant module 9032 and a lane departure determinant module 9033 .
- the prompting module 903 is configured to perform: a line detection to generate a set of candidate lines, implemented by the line detection module 9031 ; and a lane determination based on a characteristic of each of the candidate lines, such as the distribution of the lines in the image, to identify the two lane lines of the lane, implemented by the lane determinant module 9032 .
- the prompting module 903 can further perform a lane departure determination based on a reference line of the vehicle and the two lane lines, implemented by the lane departure determinant module 9033 .
- the message generation module 904 can pop up a warning message when the vehicle deviates from the reference line or the lane.
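The departure test itself is not spelled out in this text. A minimal sketch, assuming the two identified lane lines and the vehicle's reference line are compared as x positions on the bottom image row, with a hypothetical margin_ratio tuning parameter:

```python
def lane_departure(left_x, right_x, ref_x, margin_ratio=0.2):
    # Warn when the vehicle's reference line drifts to within a fraction
    # (margin_ratio, a hypothetical tuning parameter) of the lane width
    # from either detected lane line; positions are bottom-row x pixels.
    margin = (right_x - left_x) * margin_ratio
    if ref_x < left_x + margin:
        return "left"
    if ref_x > right_x - margin:
        return "right"
    return None

print(lane_departure(100, 300, 200))  # None (centered, no warning)
print(lane_departure(100, 300, 130))  # left
```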
- the image mapping module 901 , the image processing module 902 and the prompting module 903 can be implemented on an embedded system or another kind of electronic device if necessary.
- in FIG. 10 , fragments of lanes 1001 - 1003 are composed into a lane image 1004 .
- the image processing module 902 and the prompting module 903 could be conducted, so that a well-defined lane image is formed.
- FIG. 11 illustrates a diagram of a lane image synthesizing system according to the embodiments of the present invention.
- the lane image synthesizing system includes an image processing module 1101 taking a source image 1100 as the input, an image mapping module 1102 , and a prompting module 1103 used to generate a warning message 1104 .
- the image processing module 1101 includes a ROI cropping and scaling module 11011 , a contrast enhancement module 11012 , an edge extraction module 11013 and a noise reduction module 11014 .
- the prompting module 1103 includes a lane detection module 11031 , a lane determinant module 11032 and a lane departure determinant module 11033 .
- the image processing module 1101 and the prompting module 1103 can be also implemented as the image processing module 902 and the prompting module 903 respectively.
- the image mapping module 1102 utilizes the output of the image processing module 1101 as the input image stored in an image register 11023 .
- a frame interval calculator 11022 is responsible for maintaining a table NLUT and a table mLUT corresponding to different velocities of a vehicle.
- the table NLUT includes a list of possible quantities for image mapping.
- the table mLUT is established according to a plurality of velocity values, a quantity N for image mapping and a plurality of intervals for mapping images, wherein the plurality of intervals are calculated based on the quantity N, a dash length L of a dashed line, a distance S between two dashes of the dashed line and the plurality of velocity values.
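The NLUT/mLUT construction can be sketched as below. Since formulas I-VI are not reproduced in this text, the sketch assumes the constraint stated earlier, namely that the per-step displacement m·v/f must lie between S/(N−1) and L; the function name is illustrative.

```python
import math

def build_luts(L, S, fps, speeds_kmh):
    # For each velocity, store the frame count N (NLUT) and the smallest
    # feasible frame interval m (mLUT), so that the run-time path is a
    # table lookup rather than a per-frame computation.
    n_least = math.ceil(S / L + 1)
    nlut, mlut = {}, {}
    for kmh in speeds_kmh:
        v = kmh * 1000.0 / 3600.0       # km/h -> m/s
        m_hi = math.floor(L * fps / v)  # displacement per step must stay <= L
        if m_hi < 1:
            raise ValueError("vehicle too fast for this frame rate")
        n = n_least
        while True:                     # grow N until an integer m fits
            m_lo = math.ceil(S * fps / ((n - 1) * v))
            if m_lo <= m_hi:
                nlut[kmh], mlut[kmh] = n, m_lo
                break
            n += 1
    return nlut, mlut

# Taiwan markings (L = 4 m, S = 6 m) at 30 FPS:
print(build_luts(4, 6, 30, [50, 56, 85])[1])  # {50: 7, 56: 6, 85: 4}
```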
- the at least N image frames, spaced by the frame interval on the time scale, are used to obtain a lane image using an image composer 11024 .
- FIGS. 12(A)-12(C) illustrate the conditions of 50 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames equal to 7 on the street in the daytime, as the scenario of one embodiment of the present invention.
- FIG. 12(A) illustrates the scene on the street in the daytime.
- FIG. 12(B) renders the lane image after using an image synthesizing device.
- FIG. 12(C) is the result of a superimposed image combining FIG. 12(A) and FIG. 12(B) , to evaluate the accuracy of the image synthesizing device by the naked eye.
- FIGS. 13(A)-13(C) illustrate the conditions of 56 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames equal to 6 on a curve of the street at night, as the scenario of one embodiment of the present invention.
- FIG. 13(A) illustrates the scene of the curve of the street at night.
- FIG. 13(B) renders the lane image after using an image synthesizing device.
- FIG. 13(C) is the result of a superimposed image combining FIG. 13(A) and FIG. 13(B) , to evaluate the accuracy of the image synthesizing device by the naked eye.
- the present invention is related to a process of connecting dashed lines with a number of image frames separated by a frame interval.
- dashed lane lines can be connected before lane detection is performed, which in particular addresses the problem of dashed lines.
- this invention can be applied to a driving recorder whose images are captured from the front part of the vehicle. It is simple and more reliable, since it needs no complex algorithms, and it does not require a substantial amount of system memory.
Abstract
A method for synthesizing a lane image is proposed in the present application. This method includes the following steps. M continuous image frames are retrieved at a frame rate f from a video image capture device. A quantity N for image mapping is determined based on a dash length L of a dashed line and a distance S between two dashes of the dashed lines. A frame interval for mapping image frames is determined based on the dash length L, the distance S, a velocity v of the vehicle, and the frame rate f. At least N image frames are fetched from the M continuous image frames at the frame interval. The at least N image frames are synthesized to obtain the lane image using an image synthesizing device.
Description
- The present invention is related to synthesizing a lane image and, more specifically, to deal with lane detection in a scenario of dashed lines of a lane.
- Traditionally, a Lane Departure Warning (LDW) system is an active safety-assistance system for a vehicle, in which a video image capture device shoots the scenes of roads and then detects the locations of the lane lines to determine whether the driver is shifting the vehicle away from the center of the lane. Whenever the vehicle is judged to be shifting from the center of the lane, the LDW system will pop up a warning message and suggest that the driver steer back to the center of the lane.
- But in most road scenes, there are often dashed lane lines in an image. In addition, if the LDW system is installed on a driving recorder for the vehicle, the hood often blocks a part of the road in the image, which makes the lane lines render more narrowly in a frame. In these cases, the gaps between the dashes would cause a poor success rate for lane detection.
- However, many lane lines in the road are not completely straight lines or solid lines. Please refer to
FIG. 1 , which illustrates two frames retrieved from a video image capture device 10 installed on a moving vehicle 11, which moves in an upward direction from the lower location P11 to the upper location P12 shown in this figure. When the moving vehicle 11 is located at the lower location P11, frame 13 is captured; and when the moving vehicle 11 is located at the upper location P12, frame 15 is captured. - Please focus on the region of interest (ROI) within the rectangles surrounded by pairs of dashed lines shown in frames 13-16, which are all labeled with two dashed lines. In addition,
frame 14 includes a portion the same as that in frame 13 of the shot; frame 16 includes a portion the same as that in frame 15 of the shot; and frames 13-16 together illustrate the LDW system applied in the scenario of gaps between dashed lines of lanes 17-19. Obviously, it would be more difficult to detect a lane line 19′ in the ROI of frame 16 than to detect a lane line 17′ in the ROI of frame 14. - Please refer to
FIG. 2 , which illustrates 12 frames (Frames 2A-2L) retrieved from a video image capture device in a given time under the condition that the 12 frames are captured with a frame rate of 30 frames per second (FPS) and the velocity of a vehicle is equal to 85 kilometers/hour (km/hr). It can be seen that there are spaces between the dashed lines shown in individual frames during the period. For example, white spaces lie between a solid line 21 and a dashed line 22, and between the dashed line 22 and another solid line 23, in Frame 2A. Thus there is less than a 33% possibility to detect a well-defined lane among these frames (referring to Frames 2A-2L). - By referring to the lane detection in the prior art, Kim et al. (US patent application No. 20120154588) disclose a method for detecting different kinds of lanes, say a solid line, a dashed line and the colors of the lane, and then determining whether to prompt a warning or not based on the lane type and its color. More specifically, the lane departure warning system includes an image sensing unit, an edge extracting unit, a lane recognizing unit, a lane type determining unit, a lane color detecting unit, a lane pattern generating unit, and a lane departure determining unit. The image sensing unit senses a plurality of images. The edge extracting unit emphasizes the edge components necessary for lane recognition. The lane recognizing unit detects straight line components. The lane type determining unit determines a type of the lane. The lane color detecting unit detects a color of the lane from an image signal value. The lane pattern generating unit generates a lane pattern. The lane departure determining unit determines lane departure in consideration of the type and the color of the lane and a state of a turn signal lamp.
- Although Kim et al. can determine whether a lane line is a dashed line, the method applies only under the condition that the lane line features occupy most of the area in the frame; that is to say, there need to be more than two dashed lines in the ROI.
- Although one can still determine the kind of the lane, the scope of the lane lines may be too narrow in the image to detect more than two dashed lines under some settings, such as the LDW system being installed on the front part of the vehicle with the hood blocking essential information on the road. This could cause a misjudgment of the kind of the lane, or even a failure to detect it at all.
- Masato Imai et al. (US patent application No. 20120212612) propose a lane departure warning apparatus capable of preventing false warnings and the absence of a warning regarding lane departure attributed to special road geometries such as junctions and tollgates. The lane departure warning apparatus outputs a warning signal when determining the departure of a vehicle from a lane, performing the steps of: when one dividing line in a vehicle width direction of the vehicle is non-detected, estimating a position of the one dividing line based on a position of the other dividing line as a first estimated dividing line; estimating a position of the non-detected dividing line based on a position of the one dividing line prior to non-detection as a second estimated dividing line; and comparing the first estimated dividing line with the second estimated dividing line to determine lane departure.
- However, Masato Imai et al. must determine whether the estimated dividing line is correct before the next detection happens, and one cannot judge any displacement between the intervals of two detections. In addition, this algorithm will fail in cases where the lines on both sides near the vehicle are dashed. If there is noise in the gaps of the dashed lines, it would impact the assessment of the estimated dividing line, and the problem of dashed lines would not be solved. In addition, this technology cannot be used in the case of a driving recorder configured on the front part of the vehicle, and it will increase the system's loading.
- Kazuyuki Sakurai (U.S. Pat. No. 8,655,081) discloses a method to improve the problem of lane line detection, especially for dashed line detection. This method can improve the lane recognition accuracy by suppressing noises that are likely to be generated respectively in an original image and a bird's-eye image. The lane recognition system recognizes a lane based on an image. The system includes: a synthesized bird's-eye image creation module which creates a synthesized bird's-eye image by connecting a plurality of bird's-eye images that are obtained by transforming respective partial regions of original images picked up at a plurality of different times into bird's-eye images; a lane line candidate extraction module which detects a lane line candidate by using information of the original images or the bird's-eye images created from the original images, and the synthesized bird's-eye image; and a lane line position estimation module which estimates a lane line position based on information of the lane line candidate.
- Kazuyuki Sakurai installs the system on the back part of the vehicle. However, this approach requires a more sophisticated technique, such as a bird's-eye transformation algorithm. In addition, it needs to detect two frames simultaneously to perform lane detection and judgment, which also increases the system's loading.
- The present invention is related to a method for synthesizing a lane image. The method includes steps of: retrieving M continuous image frames at a frame rate from a video image capture device; determining a quantity N for image mapping based on a dash length of a dashed line and a distance between two dashes of the dashed lines; determining a frame interval for mapping image frames based on the dash length, the distance, a velocity of a vehicle, and the frame rate; fetching at least N image frames from the M continuous image frames at the frame interval; and synthesizing the at least N image frames to obtain the lane image using an image synthesizing device.
- In accordance with one aspect of the present invention, a method for real-time image synthesis from a video image capture device installed on a vehicle is disclosed. The method includes steps of: retrieving M continuous image frames at a frame rate from the video image capture device built on the vehicle; determining a frame interval for mapping image frames based on a dash length of a dashed line, a distance between two dashes of the dashed lines, a real-time velocity v of the vehicle and the frame rate; determining a quantity N for image mapping at least based on the dash length and the distance; fetching at least N image frames from the M continuous image frames at the frame interval; and synthesizing the at least N image frames to obtain a lane image by an image synthesizing device.
- In accordance with one aspect of the present invention, a lane image synthesizing system for a vehicle is disclosed. The system includes a database, and an image mapping module. The database contains a plurality of images. The image mapping module is configured to: determine a quantity N for image mapping; determine an interval based on parameters including at least one of a velocity of the vehicle and a sampling rate of the plurality of images; fetch at least N images from the plurality of images according to the interval; and synthesize the at least N images into a lane image.
- The above objectives and advantages of the present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed descriptions and accompanying drawings, in which:
- FIG. 1 illustrates two shots retrieved from a video image capture device and the results of lane detection;
- FIG. 2 illustrates 12 frames retrieved from a video image capture device under the condition of 30 frames per second and 85 kilometers/hour as a velocity of a vehicle;
- FIG. 3 illustrates a scheme for synthesizing a lane image with three frames retrieved from a video image capture device according to the embodiments of the present invention;
- FIG. 4 illustrates a dash length of a dashed line and a distance between two dashes of the dashed lines according to regulations for lane lines, as well as a moving distance of a vehicle between two frames retrieved from a video image capture device;
- FIG. 5 illustrates a plot regarding a velocity of a vehicle and a frame interval for mapping frames;
- FIG. 6 illustrates a diagram of synthesizing a lane image with four frames to form a lane image according to the embodiments of the present invention;
- FIG. 7 illustrates a flowchart of a method for synthesizing a lane image according to the embodiments of the present invention;
- FIG. 8 illustrates a diagram of synthesizing a lane image with three binary frames to form a lane image, in which the frame interval for mapping frames equals 5;
- FIG. 9 illustrates a diagram of a lane image synthesizing system consisting of an image mapping module, an image processing module and a prompting module according to the embodiments of the present invention;
- FIG. 10 illustrates a diagram of synthesizing a lane image with three gray-scaled frames to form a lane image, in which the frame interval for mapping frames equals 5;
- FIG. 11 illustrates a diagram of a lane image synthesizing system consisting of an image processing module, an image mapping module, and a prompting module according to the embodiments of the present invention;
- FIGS. 12(A)-12(C) illustrate the conditions of 50 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames of 7 on the street in the daytime as the scenario of one embodiment of the present invention;
FIGS. 13(A)-13(C) illustrate the conditions of 56 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames of 6 on a curve of the street at night as the scenario of one embodiment of the present invention. - The present invention will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of preferred embodiments of this invention are presented herein for the purposes of illustration and description only; they are not intended to be exhaustive or to be limited to the precise form disclosed.
- The invention is related to synthesizing images shot at different times based on a velocity of a vehicle and the regulations for lane lines to obtain an optimal image synthesizing condition as well as a stable lane detection.
- Please refer to
FIG. 3 , which illustrates a synthesizing scheme of the present invention by using an image processing method from the bird's-eye view. The method includes referring to a velocity of a vehicle 36 moving from left to right and located at a left location P31, a middle location P32 and a right location P33. The velocity can be measured by a global positioning system (GPS). As one can observe from the figure, the vehicle 36 passes the lanes 301-304, and a video image capture device 361 built or installed on the front part of the vehicle shoots 3 frames with 45 degrees of view in the ROI and a depth of field illustrated by vertical dashed lines 351-352, 353-354 and 355-356 in a time sequence. The scenes within the depth of field are shot by the video image capture device 361 built on the vehicle 36. - Variables d1 and d2 both demonstrate the differences of depths of field between frames 1-2 and frames 2-3 relative to the
lane 301 shot in individual frames. - One could easily find that the video
image capture device 361 built on the vehicle 36 at the location P31 shoots nearly two complete lanes 303-304 with 45 degrees of view in the ROI in a depth of field of frame 1, ranged between dashed lines 351-352. However, as the vehicle 36 moves to the location P32, the video image capture device 361 can only shoot a portion of the lane 303 within 45 degrees of view in the ROI in a depth of field of frame 2, which is also illustrated by d1 of lane 303′. - The method further includes steps of: computing an interval and a quantity for mapping images by referring to a dash length L of a dashed line and a distance S between two dashes of the dashed lines; retrieving ROIs of previous images from a video image capture device, such as frame numbers 1-3 shot in the upper part of this figure; and composing a number of images into a lane image as shown in the lower part of the figure. With the composed lane image, the present invention effectively improves the success rate for later lane detection.
- Calculating a necessary count N for image mapping:
- Please refer to
FIG. 4 . It illustrates a dash length L of a dashed line, a distance S between two dashes of the dashed lines according to the regulations for lane lines, as well as moving distances d41 and d42 of a vehicle by referring to lane 403 between every two frames retrieved from a video image capture device, wherein the vehicle is passing through lanes 401-404 from left to right. For example, L=4 meters and S=6 meters based on regulations for lane lines in Taiwan, and L=3 meters and S=9 meters for the corresponding standard in the United States. By synthesizing ROIs of images at different positions on the time scale, one can fill the moving distance d between two dashes of the dashed lines in a current frame with a dash length of the dashed line in a prior frame. Therefore one can calculate a necessary count N to fill the distance S between two dashes of the dashed lines, such as in formula I:
Nleast=ceil(S/L)+1 (formula I) - where ceil(x) is a function which maps x to the least integer that is greater than or equal to x, Nleast represents a least quantity for image mapping, L represents a dash length of a dashed line, and S represents a distance between two dashes of the dashed lines. The necessary count N shall be no less than the least quantity for image mapping Nleast, such as in formula II:
N≧Nleast (formula II) - Calculating a frame interval for mapping image frames:
- In order to compose the dashed lines cropped from the ROIs of the frames into a straight line, a moving distance d of the vehicle between two frames should be within the range between S/(N−1) and L, as in formula III:
S/(N−1)≦d≦L (formula III)
- In addition, it is found that there is a relationship among the time t, the velocity v of the vehicle, the distance d between dashed lines of the lane, the frame interval m for mapping image frames and a frame rate (sampling rate) f for a number of continuous image frames, such as frame 1,
frame 2, frame 3 . . . , as in formula IV: -
d=v·t=v·(m/f) (formula IV)
-
m=(f/v)·d, and therefore (f/v)(S/(N−1))≦m≦(f/v)L (formula V)
-
ceil((f/v)(S/(N−1)))≦m≦floor((f/v)L) (inequality VI)
- Please note that there may be a variety of combination as the necessary count N and the frame interval m both satisfy formula II and inequality VI. However for the sake of reduced noise in the further steps for image mapping, the necessary count N and the frame interval m with less values are preferred in the embodiments.
- Please refer to
FIG. 5 , which illustrates a plot regarding a relationship between a velocity of a vehicle and a frame interval for mapping frames. The frame interval can be calculated at least based on the individual velocity of the vehicle. - Lane image synthesis:
- After the necessary count N and frame interval m for mapping image frames are calculated, at least N image frames retrieved from certain continuous image frames at the frame interval m are fetched. If each of the image frames belongs to a binary image, one should take the union of the at least N image frames to form the lane image. If each of the image frames belongs to a gray scale image or a color image, one should consider a Max function or an addition algorithm for said image frames to form the lane image.
- Please refer to
FIG. 6 , which illustrates a diagram of synthesizing a lane image with four frames to form a lane image according to the embodiments of the present invention. A vehicle 66 is passing through lanes 601-603 from left to right, wherein the lane 603 is a solid line and the lanes 601-602 are dashed lines. A video image capture device in the vehicle 66 shoots frames within a depth of field defined by two vertical dashed lines. For example, the video image capture device shoots a complete lane 603 and a fragment of the lane 602 in a ROI of the depth of field of the frame number equaling F. - Thus a lane image synthesizing system implemented with this invention would fetch at least N=4 images from the plurality of frames according to the frame interval m, say frame numbers F, F-m, F-2m and F-3m. The lanes 601-602 shot in the individual frames (frame numbers F, F-m, F-2m and F-3m) are all superimposed and rendered in this figure by referring to the relative position among the lanes 601-603, the
vehicle 66 and a sun 600 in the sky. - Afterwards, the video image capture device synthesizes the at least N=4 images illustrated in the left four squares in the lower part of the figure. And then the fragments of
lanes - More specifically, whenever the
vehicle 66 deviates from one of the reference line and the lane, there will be a warning message pop-up for the driver. - Please refer to
FIG. 7 , which illustrates a flowchart of a method for synthesizing a lane image according to the embodiments of the present invention. - A video image capture device shoots the scenes of the road as a source image (step S701), and stores each image frame in a memory buffer (step S702), wherein each image frame has an image being selected from one of the group consisting of a binary image, a gray scale image and a color image depending on the type of video image capture device.
- Afterwards, an optimal calculator for image mapping and another optimal calculator for a frame interval using in mapping image frames are applied to generate a quantity N for image mapping and a frame interval m for mapping image frames according to regulations for lane lines, a frame rate f and a real-time velocity v of a vehicle (step S703-S705).
- Afterwards, at least N image frames are fetched from a number of image frames retrieved from the memory buffer; and the at least N image frames are used to obtain the lane image using an image synthesizing device (step S706). Whenever the source image belongs to a binary image, a further step of taking the union of the at least N image frames to form the lane image will be performed. In another example, if the source image belongs to a gray scale image or a color image, a Max function for said image frames to form the lane image would be chosen. In addition, other image operators could be used in said image frames with gray scale pixels, such as a Sobel filter.
- The image synthesizing device can be built on an embedded system or any other portable information platform. These portable information platforms, such as mobile phones, PDAs, pagers, etc., are typically based on an embedded controller that integrates a microprocessor and a set of system and application programs in the same device. Presently, a virtual machine, such as Java Virtual Machine (JVM) or Microsoft Virtual Machine (MVM) is integrated to the embedded system as a cross-platform foundation for the running of application programs on the information platform.
- In step S707, once the lane image is completed, an image processing or prompting can be conducted based on a well-defined lane image as a destination image.
- Please refer to
FIG. 8 , which illustrates a diagram of synthesizing a lane image with three binary frames (frame numbers F, F-5, and F-10) to form a lane image as a mapping result, in which the frame interval for mapping frames equals 5. It can be seen that fragments 81-83 of a lane can be combined into a complete lane 84 shown in this figure. - Please refer to
FIG. 9 , which illustrates a diagram of a lane image synthesizing system according to the embodiments of the present invention. The lane image synthesizing system includes an image mapping module 901, an image processing module 902, a prompting module 903 and a message generation module 904, wherein a lane image is formed before applying the image processing module 902 and the prompting module 903. - The idea of the
image mapping module 901 is similar to the embodiment inFIG. 7 , which can be viewed as another image synthesizing device used in step S706. Theimage mapping module 901 includes animage mapping calculator 9011, aframe interval calculator 9012, animage register 9013 and animage composer module 9014, and In this example, theimage register 9013 is responsible for storing a plurality ofimages 900 from a video image capture device and theimage register 9013 plays the role as an image database. A lane image can be composed by referring to the necessary count for image mapping, a frame interval and a specific velocity. This velocity v can be measured from a global positioning system, a radar speed measuring device (RSMD), a laser speed measuring device (LSMD), an Average Speed Calculator (ASC) or any other speed measuring device. - A necessary count for image mapping and a frame interval corresponding to a specific velocity of a vehicle are calculated via the
image mapping calculator 9011 and theframe interval calculator 9012. Thus theimage mapping calculator 9011 determines a least quantity Nleast for image mapping while theframe interval calculator 9012 determines a quantity Nand the frame interval based on parameters including at least one of the velocity of the vehicle and a sampling rate of the plurality ofimages 900, said 30 frames per second among these continuous images. Theimage mapping module 901 can obtain a velocity value, a length value, a distance value and a sampling rate value. The velocity value, the length value, the distance value and the sampling rate value respectively represent the velocity v, the dash length L, and the distance S and the sampling rate (or a frame rate). For example, the frame interval is determined based on the velocity value, the length value, the distance value and the sampling rate value. - In this example, the plurality of
images 900 could be stored in frames. However the plurality ofimages 900 could also be viewed as a stream and be stored in a multidimensional way. - In another example, the parameters used in the
frame interval calculator 9012 may further include a length of a dashed line and a distance between two dashes of the dashed lines. - The
image mapping module 901 inFIG. 9 then fetches at least N images from the plurality of images according to the frame interval. Animage composer module 9014 finally synthesizes the at least N images into a lane image by means of a max filter. - The
image processing module 902 includes a ROI cropping andscaling module 9021, acontrast enhancement module 9022, anedge extraction module 9023 and anoise reduction module 9024. Theimage processing module 902 is configured to perform at least a procedure selected from a group consisting of regions of interest (ROI) cropping and scaling implemented by the ROI cropping andscaling module 9021, a contrast enhancement implemented by thecontrast enhancement module 9022, an edge extraction implemented by theedge extraction module 9023, a noise reduction implemented by thenoise reduction module 9024 and a combination thereof for producing the lane image. - For example, the ROI cropping and
scaling module 9021 can change the image shape while scaling maintains the morphology of the object in the image and does not change the image pixels in any way. Thecontrast enhancement module 9022 changes the image value distribution to cover a wide range for the ease of human vision. An edge extraction technique is to extract the skeleton of the object in the image, such as the lines of the lane. - The prompting
module 903 includes aline detection module 9031, alane determinant module 9032 and a lane departure determinant module 9033. The promptingmodule 903 is configured to perform: a line detection to generate a set of candidate lines implemented by the linedetection module module 9031; a lane determinant based on a characteristic of each of the candidate lines to identify two lane lines of the lane, such as the distribution of the lines in the image implemented by thelane determinant module 9032. - The prompting
module 903 can further take a lane departure determinant based on a reference line of the vehicle and the two lane lines implemented by the lane departure determinant module 9033. - The
message generation module 904 can pop up a warning message when the vehicle deviates from one of the reference line and the lane. - The
image mapping module 901, theimage processing module 902 and the promptingmodule 903 can be implemented by an embedded system or another kind of electron device if necessary. - Please refer to
FIG. 10 , which illustrates a diagram of synthesizing a lane image with three gray scale frames (frame number=F, F-5 and F-10) with a Max function to form a lane image F′ as a mapping result, in which the frame interval for mapping frames equals 5 according to the embodiment shown inFIG. 9 . As one can see that fragments of lanes 1001-1003 are composed into alane image 1004. - The
image processing module 902 and the promptingmodule 903 could be conducted, so that a well-defined lane image is formed. - Please refer to
FIG. 9 and FIG. 11 . FIG. 11 illustrates a diagram of a lane image synthesizing system according to the embodiments of the present invention. The lane image synthesizing system includes an image processing module 1101 taking a source image 1100 as the input, an image mapping module 1102, and a prompting module 1103 used to generate a warning message 1104. The image processing module 1101 includes a ROI cropping and scaling module 11011, a contrast enhancement module 11012, an edge extraction module 11013 and a noise reduction module 11014. The prompting module 1103 includes a lane detection module 11031, a lane determinant module 11032 and a lane departure determinant module 11033. The image processing module 1101 and the prompting module 1103 can also be implemented in the same way as the image processing module 902 and the prompting module 903, respectively. In addition, the image mapping module 1102 utilizes the output of the image processing module 1101 as the input image stored in an image register 11023. - In the
image mapping module 1102, there is a process to calculate a least quantity Nleast for image mapping using an image mapping calculator 11021, and a frame interval calculator 11022 is responsible for another process for a table NLUT and a table mLUT corresponding to different velocities of a vehicle. The table NLUT includes a list of possible quantities for image mapping. The table mLUT is established according to a plurality of velocity values, a quantity N for image mapping and a plurality of intervals for mapping images, wherein the plurality of intervals are calculated based on the quantity N, a dash length L of a dashed line, a distance S between two dashes of the dashed lines and the plurality of velocity values. And the at least N image frames, with an interval between two continuous frames on the time scale, are used to obtain a lane image using an image composer 11024. These two processes can be conducted only once and calculated in advance, which increases the efficiency of the present invention. - Please refer to
FIGS. 12(A)-12(C) , which illustrate the conditions of 50 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames of 7 on the street in the daytime, as the scenario of one embodiment of the present invention.
FIG. 12(A) illustrates the scene on the street in the daytime. FIG. 12(B) renders the lane image after using an image synthesizing device. FIG. 12(C) is the result of a superimposed image combining FIG. 12(A) and FIG. 12(B) to evaluate the accuracy of the image synthesizing device by naked eyes. - Please refer to
FIGS. 13(A)-13(C) , which illustrate the conditions of 56 kilometers/hour as a velocity of a vehicle and a frame interval for mapping frames of 6 on a curve of the street at night, as the scenario of one embodiment of the present invention.
FIG. 13(A) illustrates the scene of the curve of the street at night. FIG. 13(B) renders the lane image after using an image synthesizing device. FIG. 13(C) is the result of a superimposed image combining FIG. 13(A) and FIG. 13(B) to evaluate the accuracy of the image synthesizing device by naked eyes. - In short, the present invention is related to a process of connecting dashed lines with a number of image frames separated by a frame interval. Thus dashed lane lines can be connected, followed by lane detection, especially when dealing with the problem of dashed lines.
- In order to effectively detect the dashed lines, the information about a velocity of a vehicle, a quantity for image mapping and a frame interval for mapping image frames is needed. In contrast to the prior art, this invention can be applied in a driving recorder, and it is applicable to images captured at the front part of the vehicle. It is simple and more reliable without complex algorithms, and it will not require a substantial amount of system memory.
Claims (20)
1. A method for synthesizing a lane image, comprising:
retrieving M continuous image frames at a frame rate f from a video image capture device;
determining a quantity N for image mapping based on a dash length L of a dashed line and a distance S between two dashes of the dashed lines;
determining a frame interval for mapping image frames based on the dash length L, the distance S, a velocity v of a vehicle, and the frame rate f;
fetching at least N image frames from the M continuous image frames at the frame interval; and
synthesizing the at least N image frames to obtain the lane image by an image synthesizing device.
2. The method as claimed in claim 1 , wherein N=ceil(S/L)+1.
3. The method as claimed in claim 1 , wherein the frame interval has a value ranged between ceil((f/v)(S/(N−1))) and floor((f/v)L).
4. The method as claimed in claim 1 , wherein the step of synthesizing the at least N image frames to obtain the lane image includes: using an image addition algorithm to form the lane image.
5. The method as claimed in claim 1 , wherein the M continuous image frames are configured to be saved in a memory buffer built in an embedded system.
6. The method as claimed in claim 1 , wherein each of the M continuous image frames has an image being selected from one of the group consisting of a binary image, a gray scale image and a color image.
7. The method as claimed in claim 6 , further comprising a step of:
taking the union of the at least N image frames to form the lane image.
8. The method as claimed in claim 6 , further comprising a step of:
processing each of the at least N image frames with a max filter to form the lane image.
9. A method for real-time image synthesis from a video image capture device built on a vehicle, comprising:
retrieving M continuous image frames at a frame rate f from the video image capture device built on the vehicle;
determining a frame interval for mapping image frames based on a dash length L of a dashed line, a distance S between two dashes of the dashed lines, a real-time velocity v of the vehicle and the frame rate f;
determining a quantity N for image mapping at least based on the dash length L and the distance S;
fetching at least N image frames from the M continuous image frames at the frame interval; and
synthesizing the at least N image frames to obtain a lane image by an image synthesizing device.
10. The method as claimed in claim 9 , wherein N=ceil(S/L)+1.
11. The method as claimed in claim 9 , wherein the frame interval has a value ranged between ceil((f/v)(S/(N−1))) and floor((f/v)L).
12. The method as claimed in claim 9 , wherein each of the M continuous image frames has an image being selected from one of the group consisting of a binary image, a gray scale image and a color image.
13. The method as claimed in claim 12 , further comprising a step of:
taking the union of the at least N image frames to form the lane image.
14. The method as claimed in claim 12 , further comprising a step of:
processing each of the at least N image frames with a max filter to form the lane image.
15. A lane image synthesizing system of a vehicle, comprising:
a database containing a plurality of images; and
an image mapping module configured to:
determine a quantity N for image mapping;
determine an interval based on parameters including at least one of a velocity of the vehicle and a sampling rate of the plurality of images;
fetch at least N images from the plurality of images according to the interval; and
synthesize the at least N images into a lane image.
16. The lane image synthesizing system as claimed in claim 15 , wherein the images are stored in frames.
17. The lane image synthesizing system as claimed in claim 15 , wherein the parameters include a length of a dashed line and a distance between two dashes of the dashed lines.
18. The lane image synthesizing system as claimed in claim 15 , further comprising an image processing module configured to take a procedure selected from a group consisting of regions of interest cropping and scaling, a contrast enhancement, an edge extraction, a noise reduction and a combination thereof for producing the lane image.
19. The lane image synthesizing system as claimed in claim 18 , further comprising a prompting module configured to proceed:
a line detection to generate a set of candidate lines;
a lane determinant based on a characteristic of each of the candidate lines to identify two lane lines of the lane;
a lane departure detection based on a reference line of the vehicle and the two lane lines; and
popping up a warning message when the vehicle deviates from one of the reference line and the lane.
20. The lane image synthesizing system as claimed in claim 19 , wherein the reference line is a side of a central area of the lane in which the center line of the vehicle is located.
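The synthesis step recited in claims 7-8 and 13-14 reduces to a per-pixel union (for binary images) or a max filter across frames (for gray-scale images). Below is a minimal pure-Python sketch, assuming each frame is a same-sized 2-D list of pixel intensities; the function name is illustrative.

```python
def synthesize_lane_image(frames):
    """Combine the N fetched frames into one lane image by taking the
    pixel-wise maximum: for binary masks this is the set union
    (claims 7/13); for gray-scale frames it acts as a max filter
    across frames (claims 8/14), so dash segments captured at
    different instants accumulate into continuous lane lines."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [
        [max(frame[r][c] for frame in frames) for c in range(cols)]
        for r in range(rows)
    ]

# Three binary frames, each capturing a different dash segment of the
# same lane line; their union recovers a continuous line.
f1 = [[1, 1, 0, 0, 0, 0]]
f2 = [[0, 0, 1, 1, 0, 0]]
f3 = [[0, 0, 0, 0, 1, 1]]
print(synthesize_lane_image([f1, f2, f3]))  # [[1, 1, 1, 1, 1, 1]]
```

Because the combination is a simple per-pixel maximum, it needs only the N fetched frames in memory, which matches the stated goal of avoiding complex algorithms and large memory buffers.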
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/152,222 US20170330043A1 (en) | 2016-05-11 | 2016-05-11 | Method and System for Synthesizing a Lane Image |
US16/560,861 US10970567B2 (en) | 2016-05-11 | 2019-09-04 | Method and system for synthesizing a lane image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/152,222 US20170330043A1 (en) | 2016-05-11 | 2016-05-11 | Method and System for Synthesizing a Lane Image |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/560,861 Continuation US10970567B2 (en) | 2016-05-11 | 2019-09-04 | Method and system for synthesizing a lane image |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170330043A1 true US20170330043A1 (en) | 2017-11-16 |
Family
ID=60294831
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/152,222 Abandoned US20170330043A1 (en) | 2016-05-11 | 2016-05-11 | Method and System for Synthesizing a Lane Image |
US16/560,861 Active 2036-08-17 US10970567B2 (en) | 2016-05-11 | 2019-09-04 | Method and system for synthesizing a lane image |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/560,861 Active 2036-08-17 US10970567B2 (en) | 2016-05-11 | 2019-09-04 | Method and system for synthesizing a lane image |
Country Status (1)
Country | Link |
---|---|
US (2) | US20170330043A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI759931B (en) * | 2020-10-30 | 2022-04-01 | 朝陽科技大學 | High-speed photogrammetry system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100266161A1 (en) * | 2007-11-16 | 2010-10-21 | Marcin Michal Kmiecik | Method and apparatus for producing lane information |
US20130028473A1 (en) * | 2011-07-27 | 2013-01-31 | Hilldore Benjamin B | System and method for periodic lane marker identification and tracking |
US20130293717A1 (en) * | 2012-05-02 | 2013-11-07 | GM Global Technology Operations LLC | Full speed lane sensing with a surrounding view system |
US20140300743A1 (en) * | 2011-11-24 | 2014-10-09 | Toyota Jidosha Kabushiki Kaisha | Vehicle surroundings monitoring apparatus and vehicle surroundings monitoring method |
US20150302257A1 (en) * | 2012-11-27 | 2015-10-22 | Clarion Co., Ltd. | On-Vehicle Control Device |
US20150354976A1 (en) * | 2014-06-10 | 2015-12-10 | Mobileye Vision Technologies Ltd. | Top-down refinement in lane marking navigation |
US20160307054A1 (en) * | 2013-11-14 | 2016-10-20 | Clarion Co., Ltd | Surrounding Environment Recognition Device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101163940B (en) * | 2005-04-25 | 2013-07-24 | 株式会社吉奥技术研究所 | Imaging position analyzing method |
JP5281664B2 (en) * | 2011-02-23 | 2013-09-04 | クラリオン株式会社 | Lane departure warning device and lane departure warning system |
US9256791B2 (en) * | 2012-12-04 | 2016-02-09 | Mobileye Vision Technologies Ltd. | Road vertical contour detection |
US10686976B2 (en) * | 2014-08-18 | 2020-06-16 | Trimble Inc. | System and method for modifying onboard event detection and/or image capture strategy using external source data |
JP6537876B2 (en) * | 2015-04-23 | 2019-07-03 | 本田技研工業株式会社 | Driving support system and driving support method |
- 2016-05-11: US 15/152,222 filed; published as US20170330043A1; status: abandoned
- 2019-09-04: US 16/560,861 filed; published as US10970567B2; status: active
Non-Patent Citations (1)
Title |
---|
Wirth, "Image Processing II," Computing and Information Science, Image Processing Group, 2004. * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10685241B2 (en) * | 2016-11-07 | 2020-06-16 | Samsung Electronics Co., Ltd. | Method and apparatus for indicating lane |
US11068724B2 (en) * | 2018-10-11 | 2021-07-20 | Baidu Usa Llc | Deep learning continuous lane lines detection system for autonomous vehicles |
US20220292846A1 (en) * | 2019-08-28 | 2022-09-15 | Toyota Motor Europe | Method and system for processing a plurality of images so as to detect lanes on a road |
US11900696B2 (en) * | 2019-08-28 | 2024-02-13 | Toyota Motor Europe | Method and system for processing a plurality of images so as to detect lanes on a road |
CN111460072A (en) * | 2020-04-01 | 2020-07-28 | 北京百度网讯科技有限公司 | Lane line detection method, apparatus, device, and storage medium |
CN113066106A (en) * | 2021-04-16 | 2021-07-02 | 西北工业大学 | Vehicle speed measuring method based on aerial robot mobile vision |
CN113095283A (en) * | 2021-04-30 | 2021-07-09 | 南京工程学院 | Lane line extraction method based on dynamic ROI and improved firefly algorithm |
CN113449629A (en) * | 2021-06-25 | 2021-09-28 | 重庆卡佐科技有限公司 | Lane line false and true identification device, method, equipment and medium based on driving video |
CN113591565A (en) * | 2021-06-25 | 2021-11-02 | 江苏理工学院 | Machine vision-based lane line detection method, detection system and detection device |
Also Published As
Publication number | Publication date |
---|---|
US20190392227A1 (en) | 2019-12-26 |
US10970567B2 (en) | 2021-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10970567B2 (en) | Method and system for synthesizing a lane image | |
US8175806B2 (en) | Car navigation system | |
US7764808B2 (en) | System and method for vehicle detection and tracking | |
US8050459B2 (en) | System and method for detecting pedestrians | |
Tae-Hyun et al. | Detection of traffic lights for vision-based car navigation system | |
CN107953828B (en) | Pedestrian recognition method and pedestrian recognition system for vehicle | |
JP4246766B2 (en) | Method and apparatus for locating and tracking an object from a vehicle | |
Fossati et al. | Real-time vehicle tracking for driving assistance | |
CN112507862B (en) | Vehicle orientation detection method and system based on multitasking convolutional neural network | |
US9152887B2 (en) | Object detection device, object detection method, and object detection program | |
KR20030024857A (en) | Peripheral image processor of vehicle and recording medium | |
CN102997900A (en) | Vehicle systems, devices, and methods for recognizing external worlds | |
US20140002655A1 (en) | Lane departure warning system and lane departure warning method | |
US20140002658A1 (en) | Overtaking vehicle warning system and overtaking vehicle warning method | |
CN114419098A (en) | Moving target trajectory prediction method and device based on visual transformation | |
Zhang et al. | Automatic detection of road traffic signs from natural scene images based on pixel vector and central projected shape feature | |
US9824449B2 (en) | Object recognition and pedestrian alert apparatus for a vehicle | |
JP2003162798A (en) | Device and program for monitoring obstacle | |
CN111967396A (en) | Processing method, device and equipment for obstacle detection and storage medium | |
JP2007334511A (en) | Object detection device, vehicle, object detection method and program for object detection | |
CN110619653A (en) | Early warning control system and method for preventing collision between ship and bridge based on artificial intelligence | |
CN111332306A (en) | Traffic road perception auxiliary driving early warning device based on machine vision | |
JPH07302325A (en) | On-vehicle image recognizing device | |
JP4469980B2 (en) | Image processing method for tracking moving objects | |
Ćosić et al. | Time to collision estimation for vehicles coming from behind using in-vehicle camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ICATCH TECHNOLOGY, INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHIH, CHIH-CHANG;REEL/FRAME:038552/0062 Effective date: 20160506 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |