KR20170007961A - Driver Assistance System And Method Thereof

Driver Assistance System And Method Thereof

Info

Publication number
KR20170007961A
Authority
KR
South Korea
Prior art keywords
lane
area
vehicle
image
unit
Prior art date
Application number
KR1020150099051A
Other languages
Korean (ko)
Other versions
KR101709402B1 (en)
Inventor
김구현
Original Assignee
김구현
Priority date
Filing date
Publication date
Application filed by 김구현 filed Critical 김구현
Priority to KR1020150099051A priority Critical patent/KR101709402B1/en
Publication of KR20170007961A publication Critical patent/KR20170007961A/en
Application granted granted Critical
Publication of KR101709402B1 publication Critical patent/KR101709402B1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/10: Path keeping
    • B60W 30/12: Lane keeping
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W 40/02: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to ambient conditions
    • G06K 9/00798
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/16: Anti-collision systems
    • G08G 1/167: Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2420/00: Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W 2420/40: Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60Y: INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
    • B60Y 2300/00: Purposes or special features of road vehicle drive control systems
    • B60Y 2300/10: Path keeping
    • B60Y 2300/12: Lane keeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a driving support system and method capable of detecting lanes more accurately, and of detecting all lanes regardless of the curvature of the road, so as to provide more accurate and safer driving support.

Description

Driver Assistance System and Method for Supporting Operation

An embodiment of the present invention relates to a driving support system and method capable of detecting lanes more accurately, and of detecting all lanes regardless of the curvature of the road, so as to provide more accurate and safer driving support.

BACKGROUND ART [0002] In recent years, vehicles have become increasingly sophisticated, with a growing emphasis on driving convenience and safety. One such trend is to provide systems that support the driving of a vehicle in response to the various situations that may arise while the vehicle is being driven.

Such systems that support the driving of a vehicle are collectively referred to as Advanced Driver Assistance Systems (ADAS), and they go by various names depending on their function as the underlying technologies are developed and refined.

Examples of advanced driver assistance systems include a Forward Collision Warning (FCW) system that warns the driver so as to avoid a collision with a vehicle ahead; an Advanced Emergency Braking (AEB) system that automatically brakes the vehicle to avoid such a collision; an Adaptive Cruise Control (ACC) system in which the vehicle autonomously drives at a speed set by the driver; a Lane Departure Warning (LDW) system that warns the driver when the vehicle leaves its lane; and a Lane Keeping Assist (LKA) system that returns the vehicle to its original lane when it departs from it. By supporting the driving of the vehicle through these respective functions, such systems help make driving comfortable and safe.

In such state-of-the-art driving support systems, sensing an image of the road through recognition means such as a camera or a sensor attached to the vehicle, and recognizing the lane while driving, is one of the important factors. Therefore, a means of detecting lanes more accurately is needed.

Conventional lane detection methods can be divided into three stages: extracting features from the color of the image or from boundary lines that can be formed within it, removing outliers and post-processing, and tracking the lane.

However, a vision-based lane detection method that extracts features from image color is very vulnerable to changes in illumination, which makes it difficult to apply to road environments where illumination changes frequently. That is, the road environment has various illumination values depending on time and place, and since lane colors take on various values as the illumination changes, the reliability of lane detection can be reduced.

In addition, a vision-based lane detection method that extracts features from boundary lines formed in the image may degrade the reliability of lane detection, because the sharpness of the image decreases as distance increases.

The problems of the conventional lane detection methods described above can also be traced to the camera that captures the road image for lane recognition in the first place. A camera used for conventional lane detection has a limited angle of view; therefore, when the curvature of the road on which the vehicle is traveling changes rapidly, the camera cannot capture all of the directions in which the road extends, and lanes can be missed.

Accordingly, in order to secure a wide field of view for the camera, the use of a wide-angle camera with a wide angle-of-view range has been proposed. In this case, however, the wide angle causes serious distortion of the image, and the wide-angle camera has the further problem that its high price increases the manufacturing cost.

Therefore, there is a need to develop a driving support system technique that can detect the lanes of a road more precisely, improve the safety of driving the vehicle, and detect lanes regardless of the curvature of the road.

An object of the present invention is to provide a driving support system and method capable of more accurately detecting a lane of a road on which a vehicle is running.

It is another object of the present invention to provide a driving support system and method capable of detecting a lane on a road with a large curvature.

A driving support system according to an embodiment of the present invention photographs the surroundings of a vehicle that is driving, or is about to drive, on a route, and supports the driving of the vehicle according to the photographed image. The system includes: at least one photographing unit for photographing an image around the vehicle; a binarized image generating unit for generating a binarized image by binarizing the image so that lane candidate areas appear white and the background area outside the lane candidate areas appears black; a binarized image filtering unit for determining, within the binarized image, a plurality of lane configuration areas corresponding to the lanes around the vehicle, by whitening background areas below a predetermined size inside the lane candidate areas and deleting lane candidate areas below a predetermined size; a lane area derivation unit for deriving a plurality of lane areas by connecting adjacent lane configuration areas so that curvature is minimized; and a lane departure analysis unit for determining the relative distance between the vehicle and the plurality of lane areas and analyzing whether the vehicle is in contact with any one of them.

In the present invention, the at least one photographing unit may have a side surface curved toward all directions around the vehicle, and may include, on that side surface, a plurality of camera modules for photographing an image around the vehicle.

In the present invention, at least one photographing unit may include at least one of a visible light camera module and an infrared camera module.

In the present invention, the binarized image generating unit may include a panorama image generating unit for generating a panorama image by combining the images photographed through the plurality of photographing units, a stretching unit for enhancing the contrast of the panorama image by stretching it, and a binarization processing unit for dividing the contrast-enhanced panorama image into a plurality of unit areas and binarizing each divided unit area to generate the binarized image.

In the present invention, the binarized image filtering unit may include a background area filtering unit for whitening background areas below a predetermined size inside the lane candidate areas, a lane candidate area filtering unit for deleting lane candidate areas below a predetermined size, and a lane configuration area determination unit for determining the lane configuration areas by deleting each lane candidate area enclosed in a virtual rectangle in which the ratio of the length of the short side to the length of the long side exceeds a predetermined ratio.

In the present invention, the lane area derivation unit may include a coordinate setting unit for setting a plurality of coordinates on the plurality of lane configuration areas, a virtual line forming unit for forming a plurality of virtual lines by connecting adjacent coordinates, a curvature calculating unit for calculating the curvature of each of the plurality of virtual lines, and a lane area determination unit for determining, as the lane areas, the virtual lines that are formed with the minimum curvature and run in parallel with the same curvature.

In the present invention, the driving support system may further include a driving support unit for supporting the driving of the vehicle according to an analysis result of the lane departure analysis unit.

In the present invention, when the lane departure analysis unit determines that the vehicle is in contact with a lane, the driving support unit may support a change in the driving direction of the vehicle so that the vehicle no longer contacts the lane.

In the present invention, when the lane departure analysis unit determines that the vehicle is in contact with a lane, the driving support unit may provide a lane departure warning to the driver of the vehicle.

In the present invention, when the lane departure analysis unit determines that the vehicle is not in contact with either of the pair of lanes adjacent to the vehicle, the driving support unit may support constant-speed driving or accelerated driving of the vehicle.

According to another aspect of the present invention, there is provided a driving support method that photographs the surroundings of a vehicle that is driving, or is about to drive, on a route, and supports the driving of the vehicle according to the photographed image. The method includes: a binarized image generation step of generating a binarized image by binarizing an image photographed through at least one photographing unit of the vehicle so that lane candidate areas appear white and the background area outside the lane candidate areas appears black; a binarized image filtering step of determining a plurality of lane configuration areas by whitening background areas below a predetermined size inside the lane candidate areas and deleting lane candidate areas below a predetermined size; a lane area derivation step of deriving a plurality of lane areas by connecting adjacent lane configuration areas so that curvature is minimized; and a lane departure analysis step of determining the relative distance between the vehicle and the plurality of lane areas and analyzing whether the vehicle is in contact with any one of them.

In the present invention, the at least one photographing unit may have a side surface curved toward all directions around the vehicle, and may include, on that side surface, a plurality of camera modules for photographing an image around the vehicle.

In the present invention, at least one photographing unit may include at least one of a visible light camera module and an infrared camera module.

In the present invention, the generating of the binarized image may include a panorama image generation step of generating a panorama image by combining the images photographed through the plurality of photographing units, a stretching step of stretching the panorama image to improve its contrast, and a binarization processing step of dividing the contrast-enhanced panorama image into a plurality of unit areas and binarizing each divided unit area to generate the binarized image.

In the present invention, the binarized image filtering step may include a background area filtering step of whitening background areas below a predetermined size inside the lane candidate areas, a lane candidate area filtering step of deleting lane candidate areas below a predetermined size, and a lane configuration area determination step of determining the lane configuration areas by deleting each lane candidate area enclosed in a virtual rectangle in which the ratio of the length of the short side to the length of the long side exceeds a predetermined ratio.

In the present invention, the lane area derivation step may include a coordinate setting step of setting a plurality of coordinates on the plurality of lane configuration areas, a virtual line forming step of forming a plurality of virtual lines by connecting adjacent coordinates, a curvature calculating step of calculating the curvature of each of the plurality of virtual lines, and a lane area determination step of determining, as the lane areas, the virtual lines that are formed with the minimum curvature and run in parallel with the same curvature.

The present invention may further include a driving support step of supporting the driving of the vehicle according to the analysis result of the lane departure analysis step.

In the present invention, when the lane departure analysis step determines that the vehicle is in contact with a lane, the driving support step may support a change in the driving direction of the vehicle so that the vehicle no longer contacts the lane.

In the present invention, when the lane departure analysis step determines that the vehicle is in contact with a lane, the driving support step may provide a lane departure warning to the driver of the vehicle.

In the present invention, when the lane departure analysis step determines that the vehicle is not in contact with either of the pair of lanes adjacent to the vehicle, the driving support step may support constant-speed driving or accelerated driving of the vehicle.

According to another aspect of the present invention, there is provided a driving support system that photographs the surroundings of a subject vehicle that is driving, or is about to drive, on a road, and supports the driving of the subject vehicle according to the photographed image. The system includes: at least one photographing unit for photographing an image around the subject vehicle; a lane area detection unit for detecting a plurality of lane areas in the image; and a lane departure analysis unit for analyzing whether the plurality of lane areas are located at a predetermined distance or more from the subject vehicle in the image. When the lane departure analysis unit determines that the plurality of lane areas are not located at a predetermined distance or more from the subject vehicle in the image, the photographing unit can rotate according to the degree of curvature of the lane areas.

In the present invention, at least one photographing unit may be provided in the front area of the subject vehicle so as to photograph an image of the surroundings of the subject vehicle from its front area.

In the present invention, the photographing unit may include at least one of a visible light camera module, an infrared camera module, and a thermal camera module.

In the present invention, the lane area detection unit may include: a binarized image generating unit for generating a binarized image by binarizing the image so that lane candidate areas appear white and the background area outside the lane candidate areas appears black; a binarized image filtering unit for determining a plurality of lane configuration areas, which are areas corresponding to the lanes around the subject vehicle within the binarized image, by whitening background areas below a predetermined size inside the lane candidate areas and deleting lane candidate areas below a predetermined size; and a lane area derivation unit for deriving a plurality of lane areas by connecting adjacent lane configuration areas so that curvature is minimized.

In the present invention, the binarized image generating unit may include a stretching unit for enhancing the contrast of the image by stretching it, and a binarization processing unit for dividing the contrast-enhanced image into a plurality of unit areas and binarizing each divided unit area to generate the binarized image.

In the present invention, the binarized image filtering unit may include a background area filtering unit for whitening background areas below a predetermined size inside the lane candidate areas, a lane candidate area filtering unit for deleting lane candidate areas below a predetermined size, and a lane configuration area determination unit for determining the lane configuration areas by deleting each lane candidate area enclosed in a virtual rectangle in which the ratio of the length of the short side to the length of the long side exceeds a predetermined ratio.

In the present invention, the lane area derivation unit may include a coordinate setting unit for setting a plurality of coordinates on the plurality of lane configuration areas, a virtual line forming unit for forming a plurality of virtual lines by connecting adjacent coordinates, a curvature calculating unit for calculating the curvature of each of the plurality of virtual lines, and a lane area determination unit for determining, as the lane areas, the virtual lines that are formed with the minimum curvature and run in parallel with the same curvature.

The lane departure analysis unit may compare the curvature with a predetermined reference curvature, and when the curvature is equal to or greater than the reference curvature, it may determine that the plurality of lane areas are not located at a predetermined distance or more from the subject vehicle in the image.

In the present invention, the rotation of the photographing unit may be controlled by a rotation control unit that controls the rotation according to the analysis result of the lane departure analysis unit.

In the present invention, the photographing unit may rotate according to the analysis result of the lane departure analysis unit so as to position the plurality of lane areas at a predetermined distance or more from the subject vehicle in the image.

In the present invention, the photographing unit may rotate according to the degree of curvature of the plurality of lane areas detected from the image of the road on which the subject vehicle is expected to travel, in accordance with the steering direction of the steering wheel operated by the driver of the subject vehicle.

In the present invention, the driving support system may further include: a target object recognition unit for recognizing a target object around the subject vehicle; a target object analysis unit for analyzing the distance between the subject vehicle and the target object; and a driving support unit for supporting the driving of the subject vehicle according to the analysis result.

In the present invention, when the target object analysis unit determines that the subject vehicle has approached the target object within a predetermined distance, the driving support unit may support braking of the subject vehicle.

In the present invention, when the target object analysis unit determines that the subject vehicle is maintained at a predetermined distance or more from the target object, the driving support unit may support constant-speed driving or accelerated driving of the subject vehicle.

According to still another aspect of the present invention, there is provided a driving support method that photographs the surroundings of a subject vehicle that is driving, or is about to drive, on a road, and supports the driving of the subject vehicle according to the photographed image. The method includes: a photographing step of photographing an image around the subject vehicle through at least one photographing unit; a lane area detection step of detecting a plurality of lane areas in the image; and a lane departure analysis step of analyzing whether the plurality of lane areas are located at a predetermined distance or more from the subject vehicle in the image. When the lane departure analysis step determines that the plurality of lane areas are not located at a predetermined distance or more from the subject vehicle in the image, the photographing step may rotate the at least one photographing unit.

In the present invention, at least one photographing unit may be provided in the front area of the subject vehicle so as to photograph an image of the surroundings of the subject vehicle from its front area.

In the present invention, the photographing unit may include at least one of a visible light camera module, an infrared camera module, and a thermal camera module.

In the present invention, the lane area detection step may include: a binarized image generation step of generating a binarized image by binarizing the image so that lane candidate areas appear white and the background area outside the lane candidate areas appears black; a binarized image filtering step of determining a plurality of lane configuration areas, which are areas corresponding to the lanes around the subject vehicle within the binarized image, by whitening background areas below a predetermined size inside the lane candidate areas and deleting lane candidate areas below a predetermined size; and a lane area derivation step of deriving a plurality of lane areas by connecting adjacent lane configuration areas so that curvature is minimized.

In the present invention, the binarized image generation step may include a stretching step of stretching the image to enhance its contrast, and a binarization processing step of dividing the contrast-enhanced image into a plurality of unit areas and binarizing each divided unit area to generate the binarized image.

In the present invention, the binarized image filtering step may include a background area filtering step of whitening background areas below a predetermined size inside the lane candidate areas, a lane candidate area filtering step of deleting lane candidate areas below a predetermined size, and a lane configuration area determination step of determining the lane configuration areas by deleting each lane candidate area enclosed in a virtual rectangle in which the ratio of the length of the short side to the length of the long side exceeds a predetermined ratio.

In the present invention, the lane area derivation step may include a coordinate setting step of setting a plurality of coordinates on the plurality of lane configuration areas, a virtual line forming step of forming a plurality of virtual lines by connecting adjacent coordinates, a curvature calculating step of calculating the curvature of each of the plurality of virtual lines, and a lane area determination step of determining, as the lane areas, the virtual lines that are formed with the minimum curvature and run in parallel with the same curvature.

The lane departure analysis step may compare the curvature with a predetermined reference curvature, and when the curvature is equal to or greater than the reference curvature, it may determine that the plurality of lane areas are not located at a predetermined distance or more from the subject vehicle in the image.

In the present invention, the photographing step may control the rotation of the photographing unit through a rotation control unit that controls the rotation according to the analysis result of the lane departure analysis step.

In the present invention, the photographing step may rotate the photographing unit according to the analysis result of the lane departure analysis step so as to position the plurality of lane areas at a predetermined distance or more from the subject vehicle in the image.

In the present invention, the photographing step may rotate the photographing unit according to the degree of curvature of the plurality of lane areas detected from the image of the road on which the subject vehicle is expected to travel, in accordance with the steering direction of the steering wheel operated by the driver of the subject vehicle.

In the present invention, the driving support method may further include: a target object recognition step of recognizing a target object around the subject vehicle; a target object analysis step of analyzing the distance between the subject vehicle and the target object; and a driving support step of supporting the driving of the subject vehicle according to the analysis result.

In the present invention, when the target object analysis step determines that the subject vehicle has approached the target object within a predetermined distance, the driving support step may support braking of the subject vehicle.

In the present invention, when the target object analysis step determines that the subject vehicle is maintained at a predetermined distance or more from the target object, the driving support step may support constant-speed driving or accelerated driving of the subject vehicle.

According to an embodiment of the present invention, it is possible to provide a driving support system capable of detecting the lane areas more accurately, by binarizing the omnidirectional image of the vehicle obtained from a plurality of cameras provided on the vehicle and detecting the lane areas through a plurality of noise-removal processes.

Further, the present invention can provide a more accurate driving support system, because it includes a photographing unit that rotates according to the curvature of the road on which the vehicle is running, and can therefore detect the lane areas on the road regardless of that curvature.

FIG. 1 is a configuration diagram of a driving support system according to an embodiment of the present invention.
FIG. 2 illustrates a vehicle equipped with a plurality of photographing units and the angles of view of the photographing units according to an embodiment of the present invention.
FIG. 3 is a perspective view of a photographing unit according to an embodiment of the present invention.
FIG. 4 illustrates that the contrast of a panorama image is improved according to an embodiment of the present invention.
FIG. 5 illustrates the binarization processing of a panorama image with improved contrast according to an embodiment of the present invention.
FIG. 6 shows the change of the binarized image in the background area filtering unit.
FIG. 7 shows the change of the binarized image in the lane candidate area filtering unit.
FIG. 8 shows the change of the binarized image in the lane configuration area determination unit.
FIG. 9 illustrates that a lane area is derived from a lane configuration area according to an embodiment of the present invention.
FIG. 10 shows the lane area applied to an image around the vehicle according to an embodiment of the present invention.
FIG. 11 illustrates a configuration for controlling the steering of a vehicle with the support of a driving support unit according to an embodiment of the present invention.
FIG. 12 shows a vehicle whose steering is controlled by the driving support unit according to an embodiment of the present invention.
FIG. 13 illustrates an accelerator pedal that is automatically controlled by a driving support unit according to an embodiment of the present invention.
FIG. 14 illustrates an accelerator pedal that is disconnected from the driving support unit and switched to manual mode according to an embodiment of the present invention.
FIG. 15 is a flowchart of a driving support method according to an embodiment of the present invention.
FIG. 16 is a configuration diagram of a driving support system according to another embodiment of the present invention.
FIG. 17 is a perspective view of a photographing unit according to another embodiment of the present invention.
FIG. 18 shows that a lane area is detected from an image photographed by the photographing unit according to an embodiment of the present invention.
FIG. 19 shows the rotation of the photographing unit and its image capturing range according to the curvature of the road on which the subject vehicle travels, according to an embodiment of the present invention.
FIG. 20 shows the rotation of the photographing unit according to an embodiment of the present invention.
FIG. 21 is a flowchart of a driving support method according to another embodiment of the present invention.

While the present invention may be variously modified and may take several embodiments, specific embodiments are illustrated in the drawings and described in detail below. It should be understood, however, that the invention is not limited to the specific embodiments disclosed, but covers all modifications, equivalents, and alternatives falling within its spirit and scope.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS In the following description of the present invention, detailed descriptions of related art are omitted where it is determined that they might unnecessarily obscure the gist of the present invention.

Also, in this specification, when an element is referred to as being "connected" or "coupled" to another element, the element may be directly connected or directly coupled to the other element, or, unless expressly stated otherwise, it may be connected or coupled via another intervening element.

FIG. 1 is a configuration diagram of a driving support system according to an embodiment of the present invention.

A driving support system according to an embodiment of the present invention photographs the surroundings of a vehicle that is driving, or is about to drive, and supports the driving of the vehicle according to the photographed image. Referring to FIG. 1, the system includes: at least one photographing unit 110 for photographing an image around the vehicle; a binarized image generating unit 120 for generating a binarized image by binarizing the image so that lane candidate areas appear white and the background area outside the lane candidate areas appears black; a binarized image filtering unit 130 for determining a plurality of lane configuration areas within the binarized image by whitening background areas below a predetermined size inside the lane candidate areas and deleting lane candidate areas below a predetermined size; a lane area derivation unit 140 for deriving a plurality of lane areas by connecting adjacent lane configuration areas so that curvature is minimized; and a lane departure analysis unit 150 for determining the relative distance between the vehicle and the plurality of lane areas and analyzing whether the vehicle is in contact with any one of them.

The photographing unit 110 may include at least one camera mounted on the vehicle for photographing an image around the vehicle.

Here, the photographing unit 110 could be configured as a single unit mounted at one point on the vehicle to photograph an image of the surroundings. In that case, however, the photographing unit 110 would have to be mounted at a high position in the central upper portion of the vehicle in order to capture the entire surroundings, which may increase the vertical bulk of the vehicle, degrade its appearance, and as a result impair its driving stability.

Therefore, it is preferable that the photographing unit 110 be configured to photograph the surroundings of the vehicle through a plurality of photographing units 110 provided along the upper outer region of the vehicle.

FIG. 2 illustrates a vehicle equipped with a plurality of photographing units and the angles of view of the photographing units according to an embodiment of the present invention.

Referring to FIG. 2, a plurality of photographing units 110 are mounted at relatively high positions on the vehicle 1000, that is, in the peripheral region of the upper part of the vehicle 1000. This makes it possible to minimize the possibility that the vehicle 1000 itself is included in the angle of view of each photographing unit 110 when the plurality of photographing units 110 photograph the surroundings of the vehicle 1000.

Here, the sum of the angles of view of the plurality of photographing units 110 is 360 degrees or more, so that images in all directions around the vehicle 1000 can be captured through the plurality of photographing units 110. That is, the plurality of photographing units 110 may be configured so that the angles of view of adjacent photographing units 110 overlap in a predetermined region, allowing the surroundings of the vehicle 1000 to be detected without anything being missed; a worked check of this condition follows below.
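As a worked check of the coverage condition just described, the following snippet verifies the 360-degree constraint for an assumed configuration; the unit count and per-unit angle of view are illustrative numbers, not values taken from this patent.

```python
# Worked check of the 360-degree coverage condition (illustrative numbers).
num_units = 6            # assumed number of photographing units 110
view_angle_deg = 70      # assumed angle of view of each unit, in degrees

total_deg = num_units * view_angle_deg    # 420 degrees in total
overlap_budget_deg = total_deg - 360      # 60 degrees available for overlaps
assert total_deg >= 360, "the units cannot cover all directions"
print(f"total = {total_deg} deg, overlap budget = {overlap_budget_deg} deg")
```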

FIG. 3 is a perspective view of the photographing unit 110 according to an embodiment of the present invention.

The primary function of the photographing unit 110 is to photograph an image of the surroundings of the vehicle 1000 on which it is mounted. In addition, however, determining the distance between the vehicle 1000 and a recognition target that can be detected in the image of the surroundings may be one of the important functions of the photographing unit 110.

That is, the photographing unit 110 photographs an image of the surroundings of the vehicle 1000 and provides the relative distance between the vehicle 1000 and a subject appearing in the image, thereby enabling the vehicle to drive accordingly.

Therefore, the photographing unit 110 includes a side surface curved toward all directions around the vehicle 1000, and a plurality of camera modules 111 and 112 arranged along that side surface, preferably along the same horizontal axis, for photographing the surroundings.

The photographing unit 110 can determine the relative distance from the vehicle 1000 to a subject by including a plurality of camera modules. Since the plurality of camera modules 111 and 112 are arranged on the same horizontal axis, the same subject appears at different positions, that is, with different left-right displacements (parallax), in the images photographed by the respective camera modules 111 and 112.

Accordingly, the photographing unit 110 can obtain the relative distance between the subject and the vehicle 1000 on which the photographing unit 110 is mounted through the parallax between the same subject as it appears in the respective images provided by the plurality of camera modules 111 and 112, as sketched below.
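A minimal sketch of this parallax-based distance estimation, assuming two camera modules aligned on the same horizontal axis and a standard pinhole stereo model; the focal length and baseline values are illustrative placeholders, not values from the patent.

```python
def distance_from_parallax(x_left_px: float, x_right_px: float,
                           focal_length_px: float = 700.0,
                           baseline_m: float = 0.12) -> float:
    """Estimate the distance to a subject from its horizontal parallax.

    x_left_px / x_right_px: horizontal pixel positions of the same subject
    in the images of the two camera modules (111, 112).
    """
    disparity = x_left_px - x_right_px    # parallax between the two views
    if disparity <= 0:
        raise ValueError("the subject must shift between the two views")
    # Standard stereo relation: Z = f * B / d.
    return focal_length_px * baseline_m / disparity
```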

The photographing unit 110 may include at least one of a visible light camera module 111 and an infrared camera module 112, and may include both the visible light camera module 111 and the infrared camera module 112 in order to photograph the surroundings of the vehicle 1000 and to determine the relative distance between a subject and the vehicle 1000.

The plurality of visible light camera modules 111 that may be included in the photographing unit 110 may be CMOS (Complementary Metal Oxide Semiconductor) camera modules or CCD (Charge-Coupled Device) camera modules. Since they photograph the visible light band of roughly 400 to 800 nm, they are readily usable during the day, when optimized images can be obtained from natural light.

Here, the visible light camera module 111 may be unable to photograph at night, when visible light is weak, unless a separate lighting device is used together with it, because the energy in the visible band is then very small. Moreover, even when the visible light camera module 111 is used together with a lighting device, the resulting image may vary depending on the intensity and type of the lighting device.

Accordingly, the photographing unit 110 may include a plurality of infrared camera modules 112 together with the plurality of visible light camera modules 111, so that the visible light camera modules 111 operate by day and the infrared camera modules 112 take over their role at night.

In general, on nights when moonlight or starlight is present, a considerable amount of near-infrared energy in the band around 850 nm is available, so that the photographing unit 110 can sufficiently photograph images outside the vehicle 1000 with the infrared camera module 112 alone.

However, on darker nights when there is starlight but no moonlight, there is little near-infrared energy; instead, there is significant energy in the infrared region of the 1000 to 1200 nm band. Accordingly, the infrared camera module 112 may be one capable of capturing infrared rays in the 1000 to 1200 nm range, so as to be prepared for such darker conditions.

Conversely, on brighter nights when both moonlight and starlight are present, there is almost no infrared energy in the 1000 to 1200 nm range described above. To compensate for this, the photographing unit 110 may include an infrared LED 113 that emits infrared light in the 1000 to 1200 nm band. Accordingly, on relatively dark nights, when near-infrared energy in the 1000 to 1200 nm band is plentiful, the surroundings of the vehicle 1000 can be photographed through the infrared camera module 112 alone, while on relatively bright nights the 1000 to 1200 nm band can be supplemented through the infrared LED 113. In this way, images can be captured through the infrared camera module 112 on any night, regardless of its brightness.

Conversely, when the infrared camera module 112 photographs near-infrared rays in the band around 850 nm, the infrared LED 113 may be configured to supplement that band on relatively dark nights, when near-infrared rays around 850 nm are scarcely detected.

However, since infrared light in the 1000 to 1200 nm band can transmit energy farther than near-infrared light in the 850 nm band, and can also sense shapes not perceived by the naked eye, it is more preferable that the infrared camera module 112 photograph the 1000 to 1200 nm band, and that an infrared LED 113 radiating infrared light in the 1000 to 1200 nm band be configured to supplement the infrared camera module 112 on relatively bright nights.

The photographing unit 110 may also include at least one air jetting unit 114. The air jetting unit 114 blows compressed air onto the visible light camera module 111 and the infrared camera module 112 to clean them, thereby preventing in advance any degradation of their lenses due to contamination.

The binarized image generation unit 120 may generate a binarized image by binarizing the image so that the lane candidate region appears white and the background region outside the lane candidate region appears black.

Specifically, the binarized image generating unit 120 may include a panorama image generating unit 121 for generating a panorama image by combining the images photographed through the plurality of photographing units 110, a stretching unit 122 for stretching the panorama image to improve its contrast, and a binarization processing unit 123 for dividing the contrast-enhanced panorama image into a plurality of unit areas and binarizing each divided unit area to generate the binarized image.

FIG. 4 illustrates that the contrast of a panorama image is improved according to an embodiment of the present invention; FIG. 4(a) is the panorama image generated through the panorama image generating unit.

A driving support system according to an embodiment of the present invention may photograph the surroundings of the vehicle 1000 through the plurality of photographing units 110, in order to detect lane areas whose form corresponds to the lanes of the road on which the vehicle 1000 travels, and may combine the plurality of images photographed through the plurality of photographing units 110 through the panorama image generating unit 121 to generate the panorama image 20.

Since the panorama image 20 is generated by combining a plurality of the images, its image quality and screen characteristics are the same as those of the individual images. Therefore, in this specification, the panorama image 20 and the individual images are described through the same drawings.

Providing the surroundings of the vehicle 1000 as a single panorama image 20, rather than as separate individual images, makes it possible to provide a more reliable result in detecting the lane areas.

As described above, each of the plurality of photographing units 110 includes a plurality of camera modules 111 and 112, and the distance to a subject can be determined through the images photographed by the respective camera modules. The panorama image 20 formed through the panorama image generating unit 121 may therefore be a combination of the images photographed by the plurality of camera modules 111 and 112 included in each of the plurality of photographing units 110, or a combination of the images of at least one camera module per photographing unit.
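A minimal sketch of the panorama image generation unit 121, assuming the per-camera images overlap as described above; OpenCV's generic stitcher is used here as a stand-in for whatever combining method the patent actually employs.

```python
import cv2

def generate_panorama(images):
    """Combine the images from the photographing units into one panorama."""
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```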

The contrast of the panoramic image generated through the panoramic image generation unit 121 may be improved through the stretching unit 122.

Generally, the contrast of an image means the difference between the bright and dark regions in the image, that is, the contrast ratio; the larger the difference, the higher the contrast. Since an image with high contrast can be regarded as an image with high clarity, owing to the clear separation of its dark regions, converting an image with low contrast into one with high contrast can improve the sharpness of the image.

The stretching unit 122 may extend the range between the minimum and maximum image levels of the pixels of the panorama image 20 so as to distribute the image levels of the panorama image 20 more evenly. This increases the contrast ratio of the panorama image 20 and makes it more vivid.

The stretching unit 122 may enhance the contrast of the panorama image 20 by histogram stretching, a common technique in which the histogram of the image is extracted and its minimum and maximum image levels are expanded. Since the method is well known to those skilled in the art, a detailed description of the process is omitted; a brief sketch is given below.
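A minimal sketch of that histogram stretching step, assuming an 8-bit grayscale input: the observed minimum and maximum levels are linearly expanded to the full 0 to 255 range, which raises the contrast ratio as described.

```python
import numpy as np

def stretch_contrast(gray: np.ndarray) -> np.ndarray:
    """Linearly expand the image levels to the full 8-bit range."""
    lo, hi = int(gray.min()), int(gray.max())
    if hi == lo:                         # flat image, nothing to stretch
        return gray.copy()
    stretched = (gray.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return stretched.clip(0, 255).astype(np.uint8)
```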

The binarization processing unit 123 may binarize the contrast-enhanced panorama image 25 to generate the binarized image, in which the lane candidate areas appear white and the background area outside the lane candidate areas appears black.

The white lane candidate areas and the black background area formed in the binarized image provide important information for detecting, from the image around the vehicle 1000, the lane areas corresponding to the lanes of the road on which the vehicle 1000 travels, as distinguished from the background of the road.

That is, the lane candidate areas are provided in order to define the lane areas: in the binarized image according to the present invention, the color of the candidate areas is limited to white, while the background area other than the lane candidate areas is limited to black, so that a candidate group from which the lane areas can be detected is provided.

In a general binarization process, one brightness value within the brightness range of the image to be processed is selected as a reference brightness value; brightness values larger than the reference are expressed as white, and smaller values as black.

The binarization processing unit 123 selects one reference brightness value from the full range of brightness values contained in the contrast-enhanced image 25, sets regions with larger brightness values to white, and sets regions with smaller brightness values to black. Here, it is preferable that the reference brightness value be lower than the brightness value of the road lanes that can appear in the image 20; it is preferable to measure the brightness values of road lanes in advance and select the reference within the measured range.

As methods of determining the reference brightness value, the binarization processing unit 123 may determine a single reference brightness value based on the previously measured brightness values of road lanes as described above, may determine it using the average brightness values of the boundaries formed in the contrast-enhanced image 25, or may determine it using the distribution of peaks and valleys in the histogram of the contrast-enhanced image 25, so that a brightness value separating the lane candidate areas from the background area can be determined.

FIG. 5 illustrates the binarization processing of a panoramic image with improved contrast according to an exemplary embodiment of the present invention.

The panorama image 20, generated by combining the plurality of images photographed through the plurality of photographing units 110, may contain partial illumination regions caused by various light sources, such as the lighting of vehicles running on the road, street lamps, sunlight, or moonlight. Such illumination regions can cause problems in generating a binarized image 30 with definite boundaries.

Accordingly, the binarization processing unit 123 can divide the contrast-enhanced panorama image into a plurality of unit areas 1 and binarize each divided unit area 1 to generate the binarized image 30. When the contrast-enhanced panorama image 25 is binarized in this way, removing the illumination region within each unit area 1 is comparatively easy. Therefore, compared with binarizing the whole image at once, the image quality is improved, and a binarized image 30 in which the white lane candidate areas w and the black background areas b are clearly distinguished can be obtained; a sketch follows below.
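A minimal sketch of this per-unit-area binarization, assuming an 8-bit grayscale input; Otsu's method stands in for the reference brightness value selection described above, and the block size is an illustrative assumption.

```python
import numpy as np
import cv2

def binarize_by_unit_area(gray: np.ndarray, block: int = 64) -> np.ndarray:
    """Threshold each unit area on its own, localizing illumination effects."""
    out = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = gray[y:y + block, x:x + block]
            # Above the per-region threshold -> white (lane candidate area),
            # below -> black (background area).
            _, bw = cv2.threshold(region, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            out[y:y + block, x:x + block] = bw
    return out
```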

The binarized image filtering unit 130 whitens the background areas b below a predetermined size inside the lane candidate areas w of the binarized image 30, and deletes the lane candidate areas w below a predetermined size, so that a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000 can be determined within the binarized image.

Specifically, the binarized image filtering unit 130 includes a background area filtering unit 131 for whitening the background areas b below a predetermined size inside the lane candidate areas w, a lane candidate area filtering unit 132 for deleting the lane candidate areas below a predetermined size, and a lane configuration area determination unit 133 for determining the lane configuration areas 31 by deleting each lane candidate area enclosed in a virtual rectangle in which the ratio of the length of the short side to the length of the long side exceeds a predetermined ratio.

FIG. 7 shows the change of the binarized image in the lane candidate area filtering unit, and FIG. 8 shows the change of the binarized image in the lane configuration area determination unit.

Referring to FIG. 6, the background area filtering unit 131 whitens, within the binarized image 30, the background areas b existing inside the lane candidate areas w that are below a predetermined size, specifically those smaller than the area of the lane configuration areas 31, so that the shape of the lane configuration areas 31 can be made clear.

The area of the lane configuration area 31 may be derived from previously measured values, that is, from the area values of lane configuration areas 31 previously detected and applied by the driving support system according to the present invention. For more reliable whitening of the background areas b existing inside the lane configuration areas 31, the background area filtering unit 131 whitens the background areas b whose area is smaller than the minimum area of the plurality of lane configuration areas 31, so that the shapes of the plurality of lane configuration areas 31 become entirely clear.

The lane candidate area filtering unit 132 may delete, from the plurality of lane candidate areas w existing in the binarized image 30, those below a predetermined size. Like the background area filtering unit 131, the lane candidate area filtering unit 132 deletes from the binarized image 30 the lane candidate areas w that are smaller than the area of the lane configuration areas 31 previously detected and applied by the driving support system according to the present invention, thereby facilitating the determination of the lane configuration areas 31 in the binarized image 30; a sketch of both filtering stages follows below.
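A minimal sketch of these two filtering stages (the background area filtering unit 131 and the lane candidate area filtering unit 132), assuming the binarized image is an 8-bit array with white = 255; the fixed area threshold is an illustrative stand-in for the previously measured lane configuration area.

```python
import cv2
import numpy as np

def filter_small_regions(binary: np.ndarray, min_area: int = 150) -> np.ndarray:
    out = binary.copy()
    # Stage 1 (131): whiten small black background regions by filling small
    # connected black components. (For simplicity this fills all small black
    # components, not only those enclosed inside lane candidate areas.)
    inv = cv2.bitwise_not(out)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(inv, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            out[labels == i] = 255
    # Stage 2 (132): delete small white lane candidate areas the same way.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(out, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            out[labels == i] = 0
    return out
```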

According to an embodiment of the present invention, the shape of the lane configuration area 31 is the same as the shape of a lane on the road on which the vehicle 1000 travels, and may be linear.

Therefore, the lane configuration area determination unit 133 can determine the lane configuration areas 31 by deleting each lane candidate area w enclosed in a virtual rectangle, among the plurality of virtual rectangles surrounding the plurality of lane candidate areas w, in which the ratio of the length of the short side to the length of the long side exceeds a predetermined ratio.

For the plurality of virtual rectangles surrounding each of the plurality of lane candidate areas w, the smaller the ratio of the short side to the long side, the more line-like the enclosed area, with a short minor axis and a long major axis, as a lane configuration area 31 should be. Conversely, as the ratio of the short side to the long side increases, the area inside the virtual rectangle is not a line shape but closer to a rectangle or a circle.

Therefore, the lane configuration area determination unit 133 deletes the lane candidate areas w located inside virtual rectangles whose short-to-long side ratio exceeds the predetermined value, thereby deleting from the binarized image 30 all remaining lane candidate areas w except those that can become the plurality of lane configuration areas 31.

Here, the predetermined value of the ratio of the short side to the long side of the virtual rectangle can be derived from the side-length ratios of the virtual rectangles surrounding the lane configuration areas 31 previously detected and applied by the driving support system according to the present invention, and is preferably derived from the maximum of the previously measured short-to-long side ratios of the plurality of virtual rectangles; a sketch follows below.
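A minimal sketch of the lane configuration area determination unit 133, using OpenCV's rotated bounding rectangles as the virtual rectangles; the ratio threshold of 0.3 is an illustrative assumption, not a value from the patent.

```python
import cv2
import numpy as np

def keep_line_like_candidates(binary: np.ndarray,
                              max_ratio: float = 0.3) -> np.ndarray:
    """Delete candidates whose short/long side ratio exceeds max_ratio."""
    out = binary.copy()
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        w, h = cv2.minAreaRect(c)[1]      # side lengths of the rectangle
        if min(w, h) == 0:                # degenerate rectangle: keep as line
            continue
        if min(w, h) / max(w, h) > max_ratio:   # blob-like, not a line shape
            cv2.drawContours(out, [c], -1, 0, thickness=cv2.FILLED)
    return out
```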

As described above, the binarized image filtering unit 130 performs three stages of image processing to determine the lane configuration areas 31 from the binarized image 30: whitening the background areas b below a predetermined size, i.e., smaller than the area of the lane configuration areas 31, formed inside the plurality of lane candidate areas w; deleting the lane candidate areas w smaller than the area of the lane configuration areas 31; and deleting the lane candidate areas w that are not line-shaped.

Therefore, compared with a method that detects the lane merely by deriving the equation of a straight line through coordinates extracted from the binarized image, the lane area detection method of the driving support system according to the embodiment of the present invention can detect the lane more accurately.

FIG. 9 illustrates that a lane area is derived from a lane configuration area according to an embodiment of the present invention.

The lane area derivation unit 140 may derive the plurality of lane areas 33 by connecting adjacent lane configuration areas 31 so that curvature is minimized. More specifically, the lane area derivation unit 140 includes a coordinate setting unit 141 for setting a plurality of coordinates 32 on the plurality of lane configuration areas 31, a virtual line forming unit 142 for forming a plurality of virtual lines i by connecting adjacent coordinates 32, a curvature calculating unit 143 for calculating the curvature of each of the plurality of virtual lines i, and a lane area determination unit 144 for determining, as the lane areas 33, the virtual lines i that are formed with the minimum curvature and run in parallel with the same curvature.

The coordinate setting unit 141 may set the plurality of coordinates 32 on the plurality of lane configuration areas 31 determined in the binarized image. The coordinates 32 may be set on areas including the top, middle, and bottom of each of the plurality of lane configuration areas 31; since a straight line or a curve can be formed by connecting the plurality of coordinates 32, the number of coordinates 32 that can be set in each lane configuration area 31 is not particularly limited.

The virtual line forming unit 142 may connect the coordinates 32 adjacent to each other to form a plurality of virtual lines i.

As shown in FIG. 9(c), it can be confirmed that a plurality of virtual lines i are formed by connecting the adjacent coordinates 32 set on the plurality of lane configuration areas 31. For a virtual line i to become a lane area 33 corresponding to a lane on which the vehicle 1000 travels, the virtual lines i should take the shape of the lanes on the road, that is, they should run side by side with the same curvature.

Accordingly, the curvature calculating unit 143 calculates the curvature of each of the plurality of virtual lines i, thereby providing the basis for determining which virtual lines i correspond to the plurality of lane areas 33.

The lane area determining unit 144 may determine, as the lane areas 33, the virtual lines i that have the minimum curvature among the plurality of virtual lines i and are formed in parallel with the same curvature.
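
A minimal sketch of this curvature test is shown below; the Menger (three-point circle) curvature is one standard discrete measure, chosen here as an assumption since the text does not name a specific formula, and the tolerance parameter is likewise illustrative:

    import numpy as np

    def menger_curvature(p1, p2, p3):
        # Curvature of the circle through three points; 0 for collinear points.
        a = np.linalg.norm(p2 - p1)
        b = np.linalg.norm(p3 - p2)
        c = np.linalg.norm(p3 - p1)
        if a * b * c == 0:
            return 0.0
        area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                    - (p2[1] - p1[1]) * (p3[0] - p1[0]))  # 2 * triangle area
        return 2.0 * area2 / (a * b * c)                  # 4 * area / (abc)

    def line_curvature(points):
        # Mean curvature of one virtual line i given as ordered coordinates 32.
        pts = np.asarray(points, dtype=float)
        ks = [menger_curvature(pts[k], pts[k + 1], pts[k + 2])
              for k in range(len(pts) - 2)]
        return float(np.mean(ks)) if ks else 0.0

    def pick_lane_lines(candidate_lines, tol=1e-3):
        # Keep the virtual lines whose curvature is minimal and, within a
        # tolerance, equal: lines running side by side like real lanes.
        curvatures = [line_curvature(l) for l in candidate_lines]
        k_min = min(curvatures)
        return [l for l, k in zip(candidate_lines, curvatures)
                if abs(k - k_min) <= tol]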

As described above, the lane area derivation unit 140 sets a plurality of coordinates 32 on the plurality of lane configuration areas 31 and connects them in the direction in which curvature is minimized, thereby forming a plurality of lane areas 33 that run side by side with the same curvature.

Because the lane area derivation unit 140 connects the plurality of lane configuration areas 31 not through a simple straight-line equation but through a plurality of coordinates 32 set on areas including the top, middle, and bottom of each region, connected in the direction in which curvature is minimized, it prevents an inaccurate lane area 33 from being formed by connections based merely on adjacent distance, and thus the lane areas 33 can be provided reliably.

The plurality of lane areas 33 derived through the lane area derivation unit 140 correspond to the lanes forming the traveling path of the vehicle 1000 on the road, and may therefore serve as the criterion for determining whether the vehicle 1000 has deviated from its traveling path.

FIG. 10 shows the lane area applied to an image around the vehicle in accordance with an embodiment of the present invention.

The lane departure analysis unit 150 may determine the relative distance between the vehicle 1000 and the plurality of lane areas 33 and analyze whether the vehicle 1000 is in contact with either of the pair of lane areas 33 adjacent to the vehicle 1000.

More specifically, the lane departure analysis unit 150 determines the relative distance between the vehicle 1000 and the plurality of lane areas 33 to analyze whether the vehicle 1000 is in contact with either of the pair of lane areas 33 that appear in the image as the pair of lanes adjacent to the vehicle 1000, that is, whether the vehicle 1000 is traveling normally within its traveling path.

The relative distance between the vehicle 1000 and the plurality of lane areas 33 may be provided by the stereo-type photographing unit 110, which can determine the relative distance to a subject through the plurality of camera modules 111 and 112; since the photographing unit 110 provides the relative distance value from the vehicle 1000 to the lane candidate areas w that can become the lane areas 33, a detailed description thereof is omitted here.

When the lane departure analysis unit 150 analyzes that the vehicle 1000 is in contact with either of the pair of adjacent lane areas 33, it can be determined that the vehicle 1000 has left its traveling path. In this case, the possibility that the vehicle 1000 collides with other vehicles, obstacles, or pedestrians increases.

Therefore, the driving support system according to the present invention needs to provide countermeasures against the case where the vehicle 1000 leaves the traveling path.

Accordingly, the driving support system may further include a driving support unit 160 for supporting the operation of the vehicle 1000 according to the analysis result of the lane departure analysis unit 150.

FIG. 11 shows a configuration for controlling the steering of the vehicle with the support of the driving support unit 160 according to an embodiment of the present invention, and FIG. 12 illustrates the steering being controlled by the driving support unit 160 in accordance with an embodiment of the present invention.

When the lane departure analysis unit 150 analyzes that the vehicle 1000 is in contact with a lane, the driving support unit 160 may support the operation of the vehicle 1000 so that the vehicle 1000 moves out of contact with that lane.

More specifically, the driving support unit 160 may include a traveling steering support unit 162 for controlling the traveling direction of the vehicle 1000. When the lane departure analysis unit 150 analyzes that the vehicle 1000 is in contact with a lane, the traveling steering support unit 162 may drive the steering control unit 1100 of the vehicle 1000 so as to control the steering and return the traveling direction of the vehicle 1000 to a normal path.

As shown in FIG. 11, the traveling steering support unit 162 is connected to a steering controller 1110 that determines the traveling direction of the vehicle 1000, and controls the traveling direction of the vehicle 1000 by controlling, through the steering controller 1110, the rotation of the steering wheel shaft 1160 that changes the traveling direction of the vehicle 1000.

More specifically, the steering controller 1110 connected to the driving support unit 160 controls the driving of a motor 1120 that transmits rotational driving force to the steering wheel shaft 1160. A first gear 1130 provided on the rotating shaft of the motor 1120 is connected through a belt gear 1140 to a second gear 1150, and rotating the second gear 1150 rotates the steering wheel shaft 1160, which carries the second gear 1150 on the same rotation axis, thereby changing the traveling direction of the vehicle 1000.

Therefore, when the lane departure analysis unit 150 analyzes that the vehicle 1000 is in contact with an adjacent lane (FIG. 12(b)), the driving support unit 160 may determine the rotation direction of the motor 1120 so that the steering wheel shaft 1160 rotates and the traveling direction of the vehicle 1000 changes away from the lane it is in contact with (FIG. 12(c)).
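
The direction decision can be summarized in a small sketch; the enum, function name, and fixed correction angle are assumptions for illustration, since the text specifies only that the motor 1120 turns the steering wheel shaft 1160 away from the contacted lane:

    from enum import Enum

    class LaneContact(Enum):
        NONE = 0
        LEFT = 1
        RIGHT = 2

    def steering_motor_command(contact, correction_deg=3.0):
        # Rotation command for the motor 1120: turn the steering wheel shaft
        # away from the lane the vehicle touches; no command when centered.
        if contact is LaneContact.LEFT:
            return +correction_deg   # steer right, away from the left lane
        if contact is LaneContact.RIGHT:
            return -correction_deg   # steer left, away from the right lane
        return 0.0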

The vehicle 1000, having re-entered the normal traveling route through the support of the driving support unit 160 after contacting the lane, can then maintain normal travel within the traveling route in accordance with the rotational drive of the motor 1120 under the control of the driving support unit 160 (FIG. 12(d)).

The steering control unit 1100 may be connected to the steering controller 1110 so as to cut off the connection between the steering controller 1110 and the traveling steering support unit 162, allowing the vehicle 1000 to be driven by the manual operation of the driver. The driver of the vehicle 1000 can thus selectively apply the support of the traveling steering support unit 162, which allows the vehicle 1000 to be operated more flexibly; and when the traveling steering support unit 162 operates abnormally and creates a dangerous factor in the driving of the vehicle 1000, the vehicle 1000 can be switched to manual operation, so that the driver can operate the vehicle 1000 more safely.

In addition, the driving support unit 160 may include a lane departure warning unit 161 that provides an alarm to the driver when the lane departure analysis unit 150 analyzes that the vehicle 1000 is in contact with an adjacent lane, thereby improving the driver's ability to respond immediately to the departure of the vehicle 1000 from its traveling path.

As described above, when the vehicle 1000 leaves its traveling path, the driving support unit 160 can support the operation of the vehicle 1000 so that it re-enters the traveling path.

However, the driving support unit 160 not only copes with problems that occur while the vehicle 1000 is running, but can also support the vehicle 1000 while it is running normally. More specifically, the driving support unit 160 may include a cruise support unit 163 that supports constant-speed or accelerated traveling of the vehicle 1000 when the lane departure analysis unit 150 analyzes that the vehicle 1000 is not in contact with the pair of adjacent lanes and is traveling normally.

FIG. 13 illustrates the accelerator pedal p automatically controlled by the driving support unit 160 according to an embodiment of the present invention, and FIG. 14 illustrates the accelerator pedal p disconnected from the driving support unit 160 and switched to the manual mode according to an embodiment of the present invention.

The accelerator pedal p regulates the amount of fuel and air supplied to the engine of the vehicle 1000 to raise the engine speed so that the vehicle 1000 can accelerate and advance, and it normally functions by being pressed by the driver of the vehicle 1000.

In the driving support system according to the present invention, however, when the vehicle 1000 is not in contact with the pair of adjacent lane areas 33 and is therefore traveling in a normal traveling route, the cruise support unit 163 controls the cruise control unit 1200 of the vehicle 1000 so that the accelerator pedal p is depressed automatically and the vehicle 1000 can travel at constant speed or accelerate.

As shown in FIG. 13, when the lane departure analysis unit 150 analyzes that the vehicle 1000 is not in contact with the pair of adjacent lane areas 33, the driving support unit 160 may apply power to a solenoid 1210 so that the solenoid 1210 delivers compressed air from an air pump 1220 to a cylinder 1230. In the cylinder 1230 supplied with the compressed air, a piston rod 1240 that moves up and down inside the cylinder 1230 is projected to the outside of the cylinder 1230 and presses the accelerator pedal p, thereby allowing the vehicle 1000 to travel at constant speed or accelerate.

The constant-speed or accelerated running of the vehicle 1000 under the support of the driving support unit 160 is determined by how far the accelerator pedal p is depressed, which in turn follows the degree of protrusion of the piston rod 1240.
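
As a rough sketch of this pneumatic logic (the function and field names are illustrative assumptions; the text only fixes the causal chain solenoid, air pump, cylinder, piston rod, pedal):

    def cruise_actuator_state(lane_contact, emergency_release,
                              target_protrusion=0.3):
        # Energize the solenoid 1210 only while the vehicle travels normally
        # (no lane contact) and the emergency release switch is off; pedal
        # depression then follows the piston-rod protrusion (0.0 to 1.0).
        if emergency_release or lane_contact:
            return {"solenoid_on": False, "piston_protrusion": 0.0}
        return {"solenoid_on": True, "piston_protrusion": target_protrusion}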

Like the steering control unit 1100, the cruise control unit 1200 may include an emergency release switch 1290 connected to the solenoid 1210 to block the control of the accelerator pedal p supported by the driving support unit 160.

As shown in FIG. 14, when the emergency release switch 1290 is operated and the solenoid 1210 is turned off, the solenoid 1210 no longer transfers the compressed air from the air pump 1220 to the cylinder 1230, and the piston rod 1240 retracts into the cylinder 1230.

That is, when the emergency release switch 1290 is operated, the piston rod 1240 pressing the accelerator pedal p retracts into the cylinder 1230, and the accelerator pedal p can be switched to a fully manual mode in which the support of the driving support unit 160 is cut off.

Hereinafter, a driving support method of photographing the periphery of the vehicle 1000 while the vehicle 1000 to which the driving support system is applied is traveling, and of supporting the operation of the vehicle 1000 in accordance with the photographed image, is described.

FIG. 15 is a flowchart of a driving support method according to an embodiment of the present invention.

A driving support method according to an embodiment of the present invention photographs the periphery of the vehicle 1000 in operation and supports the operation of the vehicle 1000 according to the photographed image. The method may include: a photographing step S110 of photographing an image of the surroundings of the vehicle 1000 through at least one photographing unit 110 mounted on the vehicle 1000; a binarized image generation step S120 of binarizing the image so that the lane candidate areas w appear white and the background areas b other than the lane candidate areas w appear black; a binarized image filtering step S130 of whitening the background areas b below a predetermined area within the lane candidate areas w and deleting the lane candidate areas w below a predetermined area, thereby determining, in the binarized image, a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000; a lane area derivation step S140 of deriving a plurality of lane areas 33 by connecting the lane configuration areas 31 adjacent to each other such that curvature is minimized; and a lane departure analysis step S150 of determining the relative distance between the vehicle 1000 and the plurality of lane areas 33 and analyzing whether the vehicle 1000 is in contact with either of the pair of lane areas 33 adjacent to the vehicle 1000.

The photographing unit 110 may be mounted on the vehicle 1000 to photograph the image 20 around the vehicle 1000. A single photographing unit 110 could be mounted on the vehicle 1000 to photograph the image 20 of its surroundings; in that case, however, the photographing unit would have to be mounted at a high position in the center of the vehicle 1000 so as to photograph the surroundings without interference from the body of the vehicle 1000, which increases the vertical bulk of the vehicle 1000, degrades its aesthetics, and as a result can degrade its running stability.

Therefore, a plurality of photographing units 110 may be arranged along the upper outer region of the vehicle 1000 and photograph the image 20 around the vehicle 1000 together.

The plurality of photographing units 110 may be mounted at relatively high positions, that is, along the upper peripheral area of the vehicle 1000, to capture the image 20 around the vehicle 1000, and this placement minimizes how much of the vehicle 1000 itself falls within the angle of view of each photographing unit 110.

The sum of the view angles of the plurality of photographing units 110 may be 360° or more so that the image 20 in all directions around the vehicle 1000 can be photographed through the plurality of photographing units 110. That is, the plurality of photographing units 110 may be configured so that the angles of view of adjacent photographing units 110 overlap by a predetermined amount, so that the surroundings of the vehicle 1000 are captured without a blind spot.
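
The coverage condition can be written down as a simple check; the per-pair minimum overlap value is an assumed parameter, since the text only requires that the angles sum to 360° or more with some overlap between neighbors:

    def covers_all_directions(view_angles_deg, min_overlap_deg=5.0):
        # N photographing units cover a full circle without blind spots only
        # if their view angles sum to 360 degrees plus one overlap per
        # adjacent pair (the units form a ring, so there are N pairs).
        n = len(view_angles_deg)
        return sum(view_angles_deg) >= 360.0 + n * min_overlap_deg

    # e.g. four 100-degree units: 400 >= 360 + 4 * 5 -> True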

The primary function of the photographing unit 110 is to photograph the image of the surroundings of the vehicle 1000 on which it is mounted; however, determining the distance between the vehicle 1000 and a subject detected in that image may also be one of its important functions. That is, the photographing unit 110 photographs the image 20 around the vehicle 1000 and provides the relative distance value between the vehicle 1000 and a subject appearing in the image 20, enabling the vehicle 1000 to travel in a manner that responds to the subject.

Therefore, the photographing unit 110 may include a curved side face directed toward the surroundings of the vehicle 1000, and a plurality of camera modules 111 and 112 that photograph images along that side face, preferably along the same horizontal axis on the side face.

By including the plurality of camera modules 111 and 112, the photographing unit 110 can determine the relative distance from the vehicle 1000 to a subject. Because the camera modules 111 and 112 are spaced apart, the same subject appears at different positions within the images they capture, that is, with a left-right positional difference (parallax). Accordingly, the photographing unit 110 can provide the relative distance value between the subject and the vehicle 1000 on which the photographing unit 110 is mounted, from the parallax between the same subject shown in the respective images 20 provided by the plurality of camera modules 111 and 112.
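
A minimal sketch of the parallax-to-distance relation used by such a stereo pair, assuming the standard pinhole model with a known focal length in pixels and a known spacing (baseline) between the camera modules; none of these numbers come from the text:

    def stereo_distance(focal_px, baseline_m, x_left, x_right):
        # The same subject appears shifted horizontally between the two views;
        # the shift (disparity, in pixels) shrinks as the subject gets farther.
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("subject must appear shifted between the views")
        return focal_px * baseline_m / disparity  # distance in meters

    # e.g. a lane mark at x = 1200 px (left view) and 1176 px (right view),
    # focal length 700 px, module spacing 0.12 m:
    # stereo_distance(700, 0.12, 1200, 1176) -> 3.5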

The photographing unit 110 may include at least one of a visible light camera module 111, an infrared camera module 112, and a thermal imaging camera module. As described above, in order to photograph the peripheral image 20 and discriminate the relative distance between a subject in the image 20 and the vehicle 1000, the photographing unit 110 may include a plurality of at least one of the visible light camera module 111 and the infrared camera module 112.

The plurality of visible light camera modules 111 that may be included in the photographing unit 110 may be CMOS (Complementary Metal Oxide Semiconductor) camera modules or CCD (Charge-Coupled Device) camera modules. Since they photograph the visible band of about 400 to 800 nm, they are well suited to daytime use, when ample light yields an optimal image.

At night, when visible light is weak, the energy in the visible band is too small for the visible light camera module 111 to photograph unless a separate lighting device is used with it. However, when the visible light camera module 111 is combined with a lighting device, the resulting image 20 can vary with the intensity and type of the lighting, which can degrade the reliability of the system. Accordingly, the photographing unit 110 may include a plurality of infrared camera modules 112 together with the plurality of visible light camera modules 111, so that the visible light camera modules 111 photograph the surroundings of the vehicle 1000 in the daytime and the infrared camera modules 112 take over that role at night.

In general, on nights when moonlight or starlight is present, a considerable amount of near-infrared energy in the band around 580 nm exists, so the photographing unit 110 can sufficiently photograph the image outside the vehicle 1000 with the infrared camera module 112 alone.

However, on darker nights, such as when there is starlight but no moonlight, there is little near-infrared energy; instead, significant energy exists in the infrared region of the 1000 to 1200 nm band. Accordingly, the infrared camera module 112 may be one capable of capturing infrared rays in the 1000 to 1200 nm range, so as to be prepared for such darker conditions.

On brighter nights, however, when both moonlight and starlight are present, there is almost no infrared energy in the 1000 to 1200 nm range described above. To compensate, the photographing unit 110 may include an infrared LED 113 that emits infrared light in that band. Thus, on relatively dark nights, when near-infrared energy in the 1000 to 1200 nm band is plentiful, the vicinity of the vehicle 1000 can be photographed with the infrared camera module 112 alone; on relatively bright nights, when there is almost no energy in that band, the infrared LED 113 supplements it, so that regardless of how bright the night is, an image can be captured through the infrared camera module 112.
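
The selection logic described in the last few paragraphs can be condensed into a sketch (the names and the boolean ambient-energy input are illustrative assumptions):

    def select_imaging_mode(is_daytime, ambient_ir_1000_1200nm_present):
        # Day: visible light camera. Darker nights: the 1000-1200 nm infrared
        # camera alone, since that band carries ambient energy. Brighter
        # nights: the same camera plus the matching infrared LED 113.
        if is_daytime:
            return {"camera": "visible", "ir_led_on": False}
        if ambient_ir_1000_1200nm_present:
            return {"camera": "infrared_1000_1200nm", "ir_led_on": False}
        return {"camera": "infrared_1000_1200nm", "ir_led_on": True}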

Conversely, the infrared camera module 112 may photograph near-infrared rays in the band around 850 nm, and may be configured so that the infrared LED 113 supplements near-infrared light around 850 nm on relatively dark nights when it is scarcely present.

However, since infrared light of 1000 to 1200 nm can carry energy farther than near-infrared light around 850 nm, in addition to its ability to sense shapes not perceived by the naked eye, it is more preferable that the infrared camera module 112 capture infrared rays of 1000 to 1200 nm and that the infrared LED 113 radiate the same 1000 to 1200 nm wavelength to complement the infrared camera module 112 on relatively bright nights.

The photographing unit 110 may also include at least one air jetting unit 114. The air jetting unit 114 blows compressed air onto the visible light camera module 111 and the infrared camera module 112 to clean their lenses, preventing in advance any functional degradation due to contamination.

Next, the binarized image generation step S120 binarizes the image so that the lane candidate areas w appear white and the background areas b outside the lane candidate areas w appear black, thereby generating the binarized image 30. More specifically, the binarized image generation step S120 may include a panorama image generation step S121 of generating a panorama image by combining the images photographed through the plurality of photographing units 110 in the photographing step S110, a stretching step S122 of stretching the panorama image to improve its contrast, and a binarization processing step S123 of dividing the contrast-improved panorama image 25 into a plurality of unit areas 1 and binarizing each unit area 1 to generate the binarized image 30.

Since the driving support method according to an embodiment of the present invention detects the lane areas 33 corresponding to the lanes of the road on which the vehicle 1000 travels, the plurality of images photographed through the plurality of photographing units 110 may first be combined into the panorama image through the panorama image generation step S121.

Unlike the conventional approach of photographing only the area ahead in the traveling direction, the image 20 of the surroundings of the vehicle 1000 provided as a panorama image covers the entire surroundings and provides a more reliable result in detecting the lane areas 33.

As described above, each of the plurality of photographing units 110 includes a plurality of camera modules 111 and 112 and determines the distance to a subject through the images they photograph. The panorama image formed in the panorama image generation step S121 may therefore be a combination of the images provided by all of the camera modules 111 and 112 included in each photographing unit 110, or a combination of the image 20 provided by at least one camera module 111 or 112 of each unit.

The contrast of the panoramic image generated through the panoramic image generation step S121 may be improved through the stretching step S122.

Generally, the contrast of an image refers to the difference between its bright and dark regions, that is, the contrast ratio; the larger the contrast ratio, the higher the contrast. An image with high contrast appears sharp because of the clear difference between its bright and dark regions, so converting a low-contrast image into a high-contrast one through image processing yields a clearer image.

Therefore, the stretching step S122 may expand the range between the minimum and maximum image levels of the pixels of the panorama image so as to spread its image-level distribution more evenly, thereby increasing the contrast ratio of the panorama image and making it more vivid.

The stretching step S122 may improve the contrast of the panorama image by histogram stretching, a method well known to those skilled in the art in which the histogram of an image is extracted and its minimum and maximum image levels are extended; a detailed description of the process is therefore omitted.
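
A minimal sketch of linear histogram stretching on an 8-bit grayscale image (one common variant; the text does not fix the exact mapping):

    import numpy as np

    def histogram_stretch(gray):
        # Remap the observed [min, max] image levels onto the full 0-255
        # range, raising the contrast ratio of the image.
        lo, hi = int(gray.min()), int(gray.max())
        if hi == lo:
            return gray.copy()  # flat image: nothing to stretch
        out = (gray.astype(np.float32) - lo) * (255.0 / (hi - lo))
        return out.clip(0, 255).astype(np.uint8)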

The binarization processing step S123 binarizes the contrast-improved panorama image 25 so that the lane candidate areas w appear white and the background areas b outside them appear black, generating the binarized image 30.

The white lane candidate areas w and the black background areas b formed in the binarized image 30 are clearly distinguished, providing important information for detecting, from the image around the vehicle 1000, the lane areas 33 corresponding to the lanes of the road on which the vehicle 1000 is running.

That is, the lane candidate areas w are the areas in which the lane areas 33 can be formed. In the binarized image 30 according to the present invention, the lane candidate areas w are limited to white and the background areas b to black, providing a candidate group from which the lane areas 33 can be detected.

In a general binarization process, one brightness value within the brightness range of the image to be processed is selected as the reference brightness value; pixels brighter than the reference value are expressed as white, and darker pixels as black.

Therefore, the binarization processing step S123 may select one reference brightness value from the full brightness range of the contrast-improved panorama image 25, set the regions brighter than the reference value to white, and set the darker regions to black. Here, the reference brightness value is preferably lower than the brightness of the road lanes that can appear in the image 20; it is preferable to measure in advance the brightness values of road lanes appearing in such images and to select the reference value within the measured range.

As the method of determining the reference brightness value in the binarization processing step S123, the reference value may be determined from the previously measured brightness of road lanes as described above, or from the distribution of peaks and valleys in the histogram of the contrast-improved image 25; any method may be used as long as it can determine a reference brightness value that distinguishes the lane candidate areas w from the background areas b.
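
For illustration, both options can be sketched with OpenCV; Otsu's method is used here as one standard way of locating the valley between the histogram peaks, which is an assumption since the text does not name a specific valley-finding algorithm:

    import cv2

    def binarize(gray, reference=None):
        # With a pre-measured lane brightness, threshold at that reference
        # value; otherwise fall back to Otsu's histogram-valley method.
        if reference is not None:
            _, binary = cv2.threshold(gray, reference, 255, cv2.THRESH_BINARY)
        else:
            _, binary = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary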

In the panorama image generated by combining the plurality of images photographed through the plurality of photographing units 110, partial illumination regions may be created by lighting around the vehicle. Such illumination regions can interfere with generating the binarized image 30, that is, a binarized image 30 with definite boundaries.

Therefore, the binarization processing step S123 may divide the contrast-improved panorama image 25 into a plurality of unit areas 1 and binarize each unit area 1 to generate the binarized image 30. Binarizing per unit area 1 makes it comparatively easy to remove the influence of an illumination region, so that, compared with binarizing the whole image at once, a binarized image 30 can be obtained in which the white lane candidate areas w and the black background areas b are clearly distinguished.
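
A sketch of the per-unit-area binarization, with an assumed 4 x 8 grid (the text does not specify the number or shape of the unit areas 1):

    import cv2
    import numpy as np

    def blockwise_binarize(gray, rows=4, cols=8):
        # Threshold each grid cell independently so a local illumination
        # patch cannot skew the reference brightness of the whole image.
        h, w = gray.shape
        out = np.zeros_like(gray)
        for r in range(rows):
            for c in range(cols):
                ys, ye = r * h // rows, (r + 1) * h // rows
                xs, xe = c * w // cols, (c + 1) * w // cols
                _, out[ys:ye, xs:xe] = cv2.threshold(
                    gray[ys:ye, xs:xe], 0, 255,
                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return out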

Next, the binarized image filtering step S130 whitens the background areas b below a predetermined area within the lane candidate areas w of the binarized image 30 and deletes the lane candidate areas w below a predetermined area, thereby determining, in the binarized image, a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000.

Specifically, the binarized image filtering step S130 may include a background area filtering step S131 of whitening the background areas b below a predetermined area within the lane candidate areas w, a lane candidate area filtering step S132 of deleting the lane candidate areas w below a predetermined area, and a lane configuration area determination step S133 of determining the lane configuration areas 31 by deleting the lane candidate areas w whose surrounding virtual rectangles have a short-to-long side ratio exceeding a predetermined value.

The background area filtering step S131 whitens, among the plurality of background areas b existing inside the lane candidate areas w of the binarized image 30, those whose area is below the predetermined value, that is, smaller than the area of the lane configuration area 31, thereby making the shape of the lane configuration areas 31 clear.

The area of the lane configuration area 31 can be derived from previously measured values, that is, from the areas of lane configuration areas 31 previously detected by applying the driving support method according to the present invention. For more reliable whitening, the background area filtering step S131 preferably whitens the background areas b whose area is smaller than the minimum of the previously measured areas of the plurality of lane configuration areas 31, so that the shapes of the plurality of lane configuration areas 31 become entirely clear.

The lane candidate region filtering step (S132) may delete the lane candidate region (w) of a predetermined area or less among a plurality of the lane candidate regions (w) existing in the binarized image.

Like the background area filtering step S131, the lane candidate area filtering step S132 can use the previously measured areas of lane configuration areas 31 detected by applying the driving support method according to the present invention: lane candidate areas w smaller than the area of the lane configuration area 31 are deleted from the binarized image 30, which facilitates the determination of the lane configuration areas 31 in the binarized image 30.
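
Steps S131 and S132 can both be sketched as connected-component area filters; min_lane_area stands in for the pre-measured minimum lane-region area mentioned above and is an assumption:

    import cv2
    import numpy as np

    def area_filter(binary, min_lane_area=400):
        out = binary.copy()
        # S131: whiten small black regions (holes) below the lane-area bound.
        inv = cv2.bitwise_not(out)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(inv)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] < min_lane_area:
                out[labels == i] = 255
        # S132: delete small white candidates below the same bound.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(out)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] < min_lane_area:
                out[labels == i] = 0
        return out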

According to the present invention, the shape of the lane configuration area 31 is the same as that of the lanes on the road on which the vehicle 1000 travels, and preferably has a line shape.

Therefore, the lane configuration area determination step S133 may determine the lane configuration areas 31 by deleting, from the binarized image 30, the lane candidate areas w enclosed by virtual rectangles whose ratio of the length of the short side to the length of the long side exceeds a predetermined value, among the plurality of virtual rectangles surrounding each of the plurality of lane candidate areas w.

Among the plurality of virtual rectangles surrounding each of the plurality of lane candidate areas w, the smaller the ratio of the length of the short side to the length of the long side, the more likely it is that the region inside the virtual rectangle is a line-shaped lane configuration area 31 with a short minor axis and a long major axis. Conversely, as the ratio increases, the region inside the virtual rectangle is not a line shape but a blob-like shape such as a rectangle or a circle, and therefore cannot be the lane configuration area 31.

Therefore, in the lane configuration area determination step S133, every lane candidate area w enclosed by a virtual rectangle whose short-to-long side ratio exceeds the predetermined value may be deleted from the binarized image 30, leaving only the lane candidate areas w that can become the plurality of lane configuration areas 31.

The predetermined value for the short-to-long side ratio of the virtual rectangle can be derived from the side-length ratios of the virtual rectangles surrounding previously detected lane configuration areas 31, and is preferably the maximum short-to-long side ratio among the plurality of previously measured virtual rectangles.

As described above, the binarized image filtering step S130 determines the lane configuration areas 31 from the binarized image 30 through three stages of image processing: whitening the background areas b below a predetermined area formed inside the lane candidate areas w (that is, smaller than the area of the lane configuration area 31), deleting the lane candidate areas w smaller than the area of the lane configuration area 31, and deleting the lane candidate areas w that are not line-shaped.

Therefore, compared with a conventional method that detects the lane by deriving the equation of a straight line from coordinates 32 extracted from a binarized image, the lane detection of the driving support method according to an embodiment of the present invention can detect the lane more accurately.

The lane area derivation step S140 may derive a plurality of lane areas 33 by connecting the lane configuration areas 31 adjacent to each other such that curvature is minimized.

More specifically, the lane area derivation step S140 may include a coordinate setting step S141 of setting a plurality of coordinates 32 on the plurality of lane configuration areas 31, a virtual line forming step S142 of forming a plurality of virtual lines i by connecting the coordinates 32 adjacent to each other, a curvature calculation step S143 of calculating the curvature of each of the plurality of virtual lines i, and a lane area determination step S144 of determining, as the lane areas 33, the virtual lines i that are formed with the minimum curvature and run in parallel with the same curvature.

The coordinate setting step S141 may set a plurality of coordinates 32 on the plurality of lane configuration areas 31 determined in the binarized image. The coordinates 32 may be set on areas including the top, middle, and bottom of each lane configuration area 31, and since a straight line or a curve can be formed through the connection of the plurality of coordinates 32, the number of coordinates 32 that can be set in each lane configuration area 31 is not particularly limited.

Next, the virtual line forming step S142 may form a plurality of virtual lines i by connecting the coordinates 32 adjacent to each other.

For a virtual line i to become a lane area 33 corresponding to a lane on which the vehicle 1000 travels, the virtual lines i should take the shape of the lanes on the road, that is, run side by side with the same curvature. Accordingly, the curvature calculation step S143 calculates the curvature of each of the plurality of virtual lines i, providing the basis for determining which virtual lines i correspond to the plurality of lane areas 33.

Next, the lane area determination step S144 may determine, as the lane areas 33, the virtual lines i that have the minimum curvature among the plurality of virtual lines i and are formed in parallel with the same curvature.

As described above, the lane area derivation step S140 sets a plurality of coordinates 32 on the plurality of lane configuration areas 31 and connects them in the direction in which curvature is minimized, thereby forming a plurality of lane areas 33 that run side by side with the same curvature.

Because the lane area derivation step S140 connects the plurality of lane configuration areas 31 not through a simple straight-line equation but through a plurality of coordinates 32 set on areas including the top, middle, and bottom of each region, connected in the direction in which curvature is minimized, it prevents an inaccurate lane area 33 from being formed by connections based merely on adjacent distance, and thus the lane areas 33 can be provided reliably.

The plurality of lane areas 33 derived through the lane area derivation step S140 correspond to the lanes forming the traveling path of the vehicle 1000 on the road, and may therefore serve as the criterion for determining whether the vehicle 1000 has deviated from its traveling path.

The lane departure analysis step S150 may determine the relative distance between the vehicle 1000 and the plurality of lane areas 33 and analyze whether the vehicle 1000 is in contact with either of the pair of lane areas 33 adjacent to the vehicle 1000.

More specifically, the lane departure analysis step S150 determines the relative distance between the vehicle 1000 and the plurality of lane areas 33 to analyze whether the vehicle 1000 is in contact with either of the pair of lane areas 33 adjacent to it, that is, whether the vehicle 1000 is traveling normally within its traveling path.

The relative distance between the vehicle 1000 and the plurality of lane areas 33 may be provided by the stereo-type photographing unit 110, which determines the relative distance to a subject through its plurality of camera modules; since the photographing unit 110 provides the relative distance value from the vehicle 1000 to the lane candidate areas w that can become the lane areas 33, a detailed description is omitted here.

When the lane departure analysis step S150 analyzes that the vehicle 1000 is in contact with either of the pair of adjacent lane areas 33, it can be determined that the vehicle 1000 has left its traveling path, and the possibility of collision with other vehicles, obstacles, or pedestrians increases. The driving support method according to the present invention therefore needs a countermeasure for the case where the vehicle 1000 leaves its traveling route, and may further include a driving support step S160 of supporting the operation of the vehicle 1000 according to the analysis result of the lane departure analysis step S150.

When the lane departure analysis step S150 determines that the vehicle 1000 is in contact with a lane, the driving support step S160 can support the operation of the vehicle 1000 so that it moves out of contact with that lane. Specifically, the driving support step S160 may include a traveling steering support step S162 of controlling the traveling direction of the vehicle 1000: when the lane departure analysis step S150 analyzes that the vehicle 1000 is in contact with a lane, the traveling steering support step S162 controls the steering of the vehicle 1000 through the traveling steering support unit 162, which drives the steering control unit 1100 of the vehicle 1000, so that the traveling direction of the vehicle 1000 returns to a normal path.

The traveling steering support unit 162 controls the steering controller 1110, which determines the traveling direction of the vehicle 1000, and thereby controls the rotation of the steering wheel shaft 1160 that changes the traveling direction of the vehicle 1000 in accordance with the determination of the steering controller 1110.

That is, the traveling steering support unit 162 controls, through the steering controller 1110, the driving of the motor 1120 that transmits rotational driving force to the steering wheel shaft 1160; the first gear 1130 provided on the rotating shaft of the motor 1120 is connected through the belt gear 1140 to the second gear 1150 on the steering wheel shaft 1160, so that the steering wheel shaft 1160 rotates and the running direction of the vehicle 1000 changes.

Therefore, in the driving support step S160, when the lane departure analysis step S150 analyzes that the vehicle 1000 is in contact with an adjacent lane, the rotation direction of the motor 1120 can be determined so that the steering wheel shaft 1160 rotates and the vehicle 1000 is driven in a direction away from the lane it is in contact with.

The vehicle 1000, having re-entered the normal driving route through the support of the driving support step S160 after contacting the lane, can then maintain normal travel within the traveling route in accordance with the rotational driving of the motor 1120 under the control of the driving support step S160.

The steering control unit 1100 may be connected to the steering controller 1110 so as to block the steering controller 1110 from being controlled through the traveling steering support step S162, allowing the vehicle 1000 to be operated manually by the driver. The driver can thus selectively apply the assistance of the traveling steering support step S162, operating the vehicle 1000 more flexibly, and when an error occurs under the control of the traveling steering support step S162, the vehicle 1000 can quickly be switched to manual operation, so that the driver can operate the vehicle 1000 more safely.

In addition, the driving support step S160 may include a lane departure warning step S161 of providing an alarm to the driver of the vehicle 1000 when the lane departure analysis step S150 analyzes that the vehicle 1000 is in contact with a lane, thereby improving the driver's ability to respond immediately to the departure of the vehicle 1000 from its traveling path.

As described above, when the vehicle 1000 leaves its traveling path, the driving support step S160 can support the operation of the vehicle 1000 so that it re-enters the traveling route.

However, the driving support step S160 not only copes with problems occurring while the vehicle 1000 is running, but can also support the vehicle 1000 while it is running normally. Specifically, the driving support step S160 may include a cruise support step S163 of supporting constant-speed or accelerated traveling of the vehicle 1000 through the cruise support unit 163 when the lane departure analysis step S150 analyzes that the vehicle 1000 is not in contact with the pair of adjacent lanes and is traveling normally.

The accelerator pedal p of the vehicle 1000 regulates the amount of fuel and air supplied to the engine to raise the engine speed, enabling the vehicle 1000 to accelerate and advance, and it is normally operated by being pressed by the driver.

In the driving support method according to the present invention, however, when the vehicle 1000 is not in contact with the pair of adjacent lane areas 33 and is traveling in a normal traveling route, the cruise support step S163 may cause the accelerator pedal p to be depressed automatically through the cruise support unit 163, allowing the vehicle 1000 to travel at constant speed or accelerate.

If the lane departure analysis step S150 analyzes that the vehicle 1000 is not in contact with the pair of adjacent lane areas 33, power can be applied to the solenoid 1210 through the cruise support unit 163 so that the solenoid 1210 supplies compressed air from the air pump 1220 to the cylinder 1230. In the cylinder 1230 supplied with the compressed air, the piston rod 1240 moving up and down inside is projected to the outside of the cylinder 1230 and presses the accelerator pedal p, thereby allowing the vehicle 1000 to travel at constant speed or accelerate.

Here, the constant-speed or accelerated running of the vehicle 1000 under the support of the driving support step S160 is determined by how far the accelerator pedal p is depressed, which follows the degree of protrusion of the piston rod 1240.

Like the steering control unit 1100, the cruise control unit 1200 may include the emergency release switch 1290 connected to the solenoid 1210 to block the control of the accelerator pedal p under the driving support step S160 and switch it to the manual mode.

So far, a driving support system and method have been described in which the image 20 on all four sides of the vehicle 1000 is photographed through the plurality of photographing units 110 mounted on the vehicle 1000, the lane areas 33 are detected on the basis of the image 20, and the operation of the vehicle 1000 is supported in correspondence with the lane areas 33.

Because this driving support system and method include the photographing units 110 capable of photographing the images 20 on all four sides of the vehicle 1000, the lane areas 33 corresponding to the lanes on the road can be detected accurately, and the user's trust in the driving support system and method, as well as the accuracy and stability of the running of the vehicle 1000, can be further improved.

However, a driving support system and method can also detect the lane areas 33 accurately without photographing all four sides of the vehicle 1000; for this purpose, the photographing unit must rotate in accordance with the curvature of the road on which the vehicle 1000 travels, that is, the degree to which the road is curved.

Hereinafter, a driving support system and method according to another embodiment of the present invention are discussed, in which the photographing unit rotates in accordance with the curvature of the road on which the vehicle 1000 travels so that the lane areas 33 are sufficiently positioned within the angle-of-view range of the photographing unit.

The driving support system and method according to this other embodiment will be described with reference to FIGS. 5 to 10 and FIGS. 12 to 14, which were also referred to in the description of the foregoing embodiment. Where those drawings are referred to below, constructions having the same reference numerals as in the foregoing embodiment have the same functions and features throughout the specification, and their repeated description is omitted.

FIG. 16 is a configuration diagram of a driving support system according to another embodiment of the present invention.

This driving support system captures the periphery of the subject vehicle 1000 while it is driving or parked and supports the driving of the subject vehicle 1000 according to the photographed image. It may include a photographing unit 210 for photographing the image around the subject vehicle 1000, a lane area detecting unit 220 for detecting a plurality of lane areas 33 in the image, and a lane departure analysis unit 230 for analyzing whether the lane areas 33 are located within a predetermined distance from the subject vehicle 1000 in the image; when the lane areas 33 are analyzed as not being located at the predetermined distance from the subject vehicle 1000 in the image, the photographing unit 210 may rotate according to the degree of curvature of the road.

The photographing unit 210 photographs the image 20 around the subject vehicle 1000 while the vehicle is traveling or parked. The position at which the photographing unit 210 is mounted on the subject vehicle 1000 is not particularly limited as long as the image 20 can be photographed; however, since high-speed forward travel is generally the main function of the subject vehicle 1000, the photographing unit 210 is preferably mounted at one or more positions including the front area of the subject vehicle 1000 and configured to photograph the image 20 of at least the area ahead of the subject vehicle 1000.

FIG. 17 is a perspective view of the photographing unit 210 according to another embodiment of the present invention.

The primary function of the photographing unit 210 is to photograph the image of the surroundings of the subject vehicle 1000 on which it is mounted; however, determining the distance between a subject and the subject vehicle 1000 may also be one of its important functions. Here, the subject may be any object that can appear in the image 20, such as a lane, an obstacle, or a pedestrian on the road on which the subject vehicle 1000 travels, or a vehicle other than the subject vehicle 1000.

That is, the photographing unit 210 photographs the image of the surroundings of the subject vehicle 1000 and provides the relative distance value between the subject vehicle 1000 and a subject appearing in the image 20, making it possible for the subject vehicle 1000 to travel in a manner that responds to the subject. The travel of the subject vehicle 1000 in response to a subject is described later.

The photographing unit 210 may include a plurality of camera modules 221, 222 and 223 for photographing an image 20 outside the subject vehicle 1000 along the same horizontal axis on the side of the photographing unit 210.

By including the plurality of camera modules 221, 222, and 223, the photographing unit 210 can determine the relative distance from the subject vehicle 1000 to a subject. Because the camera modules 221, 222, and 223 are spaced apart, the same subject appears at different positions within the images they capture, that is, with a left-right positional difference (parallax).

Accordingly, the photographing unit 210 can provide the relative distance value between the subject and the subject vehicle 1000 on which it is mounted, from the parallax between the same subject shown in the respective images provided by the plurality of camera modules 221, 222, and 223.

The photographing unit 210 may include at least one of a visible light camera module 221, an infrared camera module 222, and a thermal imaging camera module 223. As described above, in order to photograph the surrounding image 20 and determine the relative distance between a subject in the image 20 and the subject vehicle 1000, the photographing unit 210 may include a plurality of the visible light camera modules 221, the infrared camera modules 222, or the thermal imaging camera modules 223.

The plurality of visible light camera modules 221 that may be included in the photographing unit 210 may be CMOS (Complementary Metal Oxide Semiconductor) camera modules or CCD (Charge-Coupled Device) camera modules. Since they photograph the visible band of about 400 to 800 nm, they are well suited to daytime use, when ample light yields an optimal image.

At night, when visible light is weak, the energy in the visible band is too small for the visible light camera module 221 to photograph unless a separate lighting device is used with it. However, when the visible light camera module 221 is combined with a lighting device, the resulting image can vary with the intensity and type of the lighting, which can degrade the reliability of the system.

Therefore, the photographing unit 210 may include a plurality of infrared camera modules 222 together with the plurality of visible light camera modules 221, so that the visible light camera modules 221 photograph the surroundings of the vehicle 1000 in the daytime and the infrared camera modules 222 take over that role at night.

In general, on nights when moonlight or starlight is present, a considerable amount of near-infrared energy in the band around 580 nm exists, so the photographing unit 210 can sufficiently photograph the image outside the vehicle 1000 with the infrared camera module 222 alone.

However, on darker nights, such as when there is starlight but no moonlight, there is little near-infrared energy; instead, significant energy exists in the infrared region of the 1000 to 1200 nm band. Accordingly, the infrared camera module 222 may be one capable of capturing infrared rays in the 1000 to 1200 nm range, so as to be prepared for such darker conditions.

On brighter nights, however, when both moonlight and starlight are present, there is almost no infrared energy in the 1000 to 1200 nm band. To compensate, the photographing unit 210 may include an infrared LED 224 that emits infrared light in that band. Thus, on relatively dark nights, when near-infrared energy in the 1000 to 1200 nm band is plentiful, the vicinity of the vehicle 1000 can be photographed with the infrared camera module 222 alone; on relatively bright nights, the infrared LED 224 supplements the energy of that band, so that regardless of how bright the night is, an image can be captured through the infrared camera module 222.

Conversely, the infrared camera module 222 may photograph near-infrared rays in the band around 850 nm, and may be configured so that the infrared LED 224 supplements near-infrared light around 850 nm on relatively dark nights when it is scarcely present.

However, since infrared light of 1000 to 1200 nm can carry energy farther than near-infrared light around 850 nm, in addition to its ability to sense shapes not perceived by the naked eye, it is more preferable that the infrared camera module 222 capture infrared rays of 1000 to 1200 nm and that the infrared LED 224 radiate the same 1000 to 1200 nm wavelength to complement the infrared camera module 222 on relatively bright nights.

In addition, the photographing unit 210 may include at least one thermal imaging camera module 223. The thermal imaging camera module 223 senses a subject by electronically measuring its thermal radiation, so the image of the outside of the subject vehicle 1000 recognized through the thermal imaging camera module 223 is useful for recognizing subjects with body heat, such as pedestrians or animals on the road.

That is, the photographing unit 210 can readily recognize subjects in the traveling path of the subject vehicle 1000 by day and by night through the visible light camera module 221 and the infrared camera module 222, and can more accurately recognize subjects with body heat, such as pedestrians or animals, through the thermal imaging camera module 223.

The photographing unit 210 may also include at least one air jetting unit 225. As shown in FIG. 17, the air jetting unit 225 blows compressed air onto the lenses of the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223, which are located outside the vehicle 1000 and can be contaminated by various external conditions such as rain and snow; by cleaning these lenses, functional degradation of the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223 due to contamination can be prevented in advance.

The photographing unit 210 further includes a washer liquid injector 226 for injecting washer liquid into the lenses of the visible light camera module 221, the infrared camera module 222 and the thermal imaging camera module 223, The visible light camera, the infrared camera, and the wiper 227 for wiping the lens of the thermal imaging camera, so that the external recognition capability of the photographing unit 210 can be kept from deteriorating.

The lane area detecting unit 220 can detect a plurality of lane areas 33 from the image photographed through the photographing unit 210.

FIG. 18 shows the lane areas 33 detected from the image photographed by the photographing unit 210 according to an embodiment of the present invention.

Hereinafter, the lane area detecting unit 220 will be described with reference to FIG. 18 and FIGS. 5 to 10. FIGS. 5 to 10 were referred to in the description of the foregoing embodiment; however, since the changes of the image are the same apart from the difference of the photographing unit 210, the following description can proceed with reference to FIGS. 5 to 10 and FIG. 16.

More specifically, the lane area detecting unit 220 includes a binarized image generating unit 221 for generating a binarized image 30 by binarizing the image so that lane candidate areas w appear white and the background areas b outside the lane candidate areas w appear black; a binarized image filtering unit 222 for whitening background areas b of a predetermined area or less within the lane candidate areas w of the binarized image and for determining, among the lane candidate areas w, a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000; and a lane area derivation unit 223 for deriving a plurality of lane areas 33 by connecting adjacent lane configuration areas 31 so that curvature is minimized.

The binarized image generating unit 221 includes a stretching unit 221-1 for enhancing the contrast of the image 20 by stretching it, and a binarization processing unit 221-2 for dividing the contrast-enhanced image into a plurality of unit areas l and binarizing each unit area l to generate the binarized image 30.

Generally, the contrast of an image means the difference between the bright and dark regions of the image, that is, the contrast ratio; the larger the contrast ratio, the higher the contrast. Since an image having high contrast can be regarded as sharp, owing to the clear difference between its bright and dark regions, a low-contrast image can be converted into a clearer one by processing it to have higher contrast.

Therefore, the stretching unit 221-1 can extend the range between the minimum and maximum image levels of the pixels of the image, distributing the image levels more evenly, thereby increasing the contrast ratio of the image 20 and making it sharper.

The stretching unit 221-1 may enhance the contrast of the image through histogram stretching, a widely used technique in which the histogram of the image is extracted and its minimum and maximum image levels are extended; since this is a well-known method, a detailed description of the process is omitted.
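
For illustration only, the histogram stretching described above can be sketched as follows in Python with OpenCV and NumPy; the min-max remapping and the file name road.png are assumptions for the example, not details fixed by the present disclosure.

```python
import cv2
import numpy as np

def stretch_contrast(gray: np.ndarray) -> np.ndarray:
    """Linearly remap pixel levels so the minimum image level maps to 0
    and the maximum to 255, widening the contrast ratio of the image."""
    lo, hi = int(gray.min()), int(gray.max())
    if hi == lo:
        return gray.copy()  # flat image: nothing to stretch
    stretched = (gray.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return stretched.astype(np.uint8)

# Example usage (file name is hypothetical):
frame = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)
if frame is not None:
    enhanced = stretch_contrast(frame)
```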

The binarization processing unit 221-2 binarizes the contrast-enhanced image 25 so that the lane candidate areas w appear white and the background areas b outside the lane candidate areas w appear black, thereby generating the binarized image 30.

The white lane candidate areas w and the black background areas b formed in the binarized image 30 are clearly distinguished in the image around the vehicle 1000, and thus provide important information for detecting the lane areas 33, the areas corresponding to the lanes of the road on which the vehicle 1000 is traveling.

That is, a lane candidate area w is an area in which a lane area 33 can be formed. In the binarized image 30 according to the present invention, its color range is limited to white, and the background areas b outside the lane candidate areas w are limited to black, thereby providing a candidate group from which the lane areas 33 can be detected.

In a general binarization process, one of the brightness values of the image to be processed is selected as a reference brightness value; brightness values larger than the reference are expressed as white, and smaller brightness values as black.

Accordingly, the binarization processing unit 221-2 may select one brightness value from the full brightness range of the contrast-enhanced image 25 as the reference brightness value, setting regions with larger brightness values to white and regions with smaller brightness values to black. Here, the reference brightness value preferably lies below the brightness of the road lanes as they appear in the image; to this end, it is preferably selected within the range of brightness values of road lanes appearing in images previously captured through the photographing unit 210.

As for the method by which the binarization processing unit 221-2 determines the reference brightness value, a single reference brightness value may be determined based on the previously measured brightness values of road lanes as described above, or it may be determined using the average brightness of the boundaries formed in the contrast-enhanced image 25, or the distribution of the peaks and valleys of the histogram of the contrast-enhanced image 25; any method may be used as long as it determines a reference brightness value that clearly distinguishes the lane candidate areas w from the background areas b.

The image 20 photographed through the photographing unit 210 may be partially illuminated by various light sources, such as the lights of other vehicles 1000 running on the road, streetlights, sunlight, or moonlight; such illumination regions cause a problem in generating a binarized image with clear boundary division.

Therefore, the binarization processing unit 221-2 can divide the contrast-enhanced image into a plurality of unit areas l and binarize each unit area l separately to generate the binarized image 30. When the contrast-enhanced image 25 is binarized in this way, removing the illumination region within each unit area l is relatively easy; compared with binarizing the entire image at once, a binarized image 30 in which the white lane candidate areas w and the black background areas b are clearly distinguished can be obtained.
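
A minimal sketch of this per-unit-area binarization is given below; since the present disclosure leaves the choice of reference brightness value open, Otsu's method and a 4x4 grid of unit areas are assumed purely for illustration.

```python
import cv2
import numpy as np

def binarize_by_unit_areas(gray: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
    """Divide the contrast-enhanced image into unit areas and binarize each
    separately, so a local illumination patch only affects its own area."""
    h, w = gray.shape
    out = np.zeros_like(gray)
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            tile = gray[y0:y1, x0:x1]
            # Otsu stands in for the per-unit-area reference brightness value
            _, bw = cv2.threshold(tile, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            out[y0:y1, x0:x1] = bw
    return out
```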

Next, the binarized image filtering unit 222 whitens background areas b of a predetermined area or less within the lane candidate areas w of the binarized image, and determines, among the lane candidate areas w, a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000. Specifically, the binarized image filtering unit 222 includes a background area filtering unit 222-1 for whitening background areas b of a predetermined area or less within the lane candidate areas w; a lane candidate area filtering unit 222-2 for deleting lane candidate areas w of a predetermined area or less; and a lane configuration area determination unit 222-3 for determining the lane configuration areas 31 by deleting the lane candidate areas w enclosed by virtual rectangles whose ratio of short-side length to long-side length exceeds a predetermined ratio.

The background area filtering unit 222-1 whitens background areas b of a predetermined area or less existing within the lane candidate areas w of the binarized image 30; by whitening background areas b whose area is smaller than that of a lane configuration area 31, it clarifies the shape of the lane configuration areas 31.

Here, the area of a lane configuration area 31 may be derived from previously measured values, that is, the area values of lane configuration areas 31 previously detected by the driving support system according to the present invention. For more reliable whitening of the background areas b existing within the lane configuration areas 31, the background area filtering unit 222-1 preferably whitens background areas b whose area is less than the minimum of the areas of the plurality of lane configuration areas 31, so that the shapes of the plurality of lane configuration areas 31 become clear overall.

The lane candidate area filtering unit 222-2 may delete lane candidate areas w of a predetermined area or less from the plurality of lane candidate areas w existing in the binarized image.

Like the background area filtering unit 222-1, the lane candidate area filtering unit 222-2 may use the area of the lane configuration areas 31 previously detected by the driving support system according to the present invention as its criterion: by deleting from the binarized image 30 the lane candidate areas w smaller than the area of a lane configuration area 31, it facilitates the determination of the lane configuration areas 31.

According to the present invention, the shape of a lane configuration area 31 preferably matches the shape of a lane on the road on which the vehicle 1000 travels, that is, a line shape.

Therefore, the lane configuration area determination unit 222-3 can determine the lane configuration areas 31 by deleting the lane candidate areas w contained in those virtual rectangles, among the plurality of virtual rectangles surrounding each of the lane candidate areas w in the binarized image 30, whose ratio of short-side length to long-side length exceeds a predetermined ratio.

For the virtual rectangles surrounding each of the plurality of lane candidate areas w, the smaller the ratio of the short-side length to the long-side length, the more likely it is that a line-shaped lane configuration area 31, with a short minor axis and a long major axis, lies inside. Conversely, as the ratio of the short-side length to the long-side length increases, the area inside the virtual rectangle is more likely to have a shape such as a rectangle or a circle rather than a line shape.

Accordingly, the lane configuration area determination unit 222-3 deletes the lane candidate areas w located inside virtual rectangles whose short-side to long-side length ratio exceeds the predetermined value, leaving only the lane candidate areas w that can become the plurality of lane configuration areas 31 of the binarized image.

The predetermined value of the ratio of the short-side length to the long-side length may be derived from the side-length ratios of the virtual rectangles surrounding lane configuration areas 31 previously detected by the driving support system according to the present invention, and is preferably derived from the maximum of the short-side to long-side length ratios of the plurality of previously measured virtual rectangles.

As described above, the binarized image filtering unit 222 determines the lane configuration areas 31 from the binarized image 30 through three stages of image processing: whitening the background areas b below a predetermined area, that is, below the area of a lane configuration area 31, formed within the lane candidate areas w; deleting the lane candidate areas w below the area of a lane configuration area 31; and deleting the lane candidate areas w that are not line-shaped.

Therefore, the lane detection method of the driving support system according to an embodiment of the present invention can detect the lane areas more accurately than a method that merely derives a straight-line equation through coordinates 32 extracted from the binarized image.
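
The three filtering stages may be sketched with OpenCV contour operations as follows; the minimum lane area and the side-length ratio threshold are illustrative stand-ins for the previously measured values referred to above.

```python
import cv2
import numpy as np

def filter_lane_areas(binary: np.ndarray,
                      min_lane_area: float = 40.0,
                      max_side_ratio: float = 0.25) -> np.ndarray:
    """Stage 1: whiten small black holes inside candidates; stage 2: delete
    small candidates; stage 3: keep only line-shaped candidates."""
    work = binary.copy()
    holes, _ = cv2.findContours(255 - work, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    for hole in holes:
        if cv2.contourArea(hole) < min_lane_area:
            cv2.drawContours(work, [hole], -1, 255, cv2.FILLED)  # stage 1
    out = np.zeros_like(work)
    candidates, _ = cv2.findContours(work, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cand in candidates:
        if cv2.contourArea(cand) < min_lane_area:
            continue  # stage 2: smaller than any measured lane configuration area
        (_cx, _cy), (rw, rh), _angle = cv2.minAreaRect(cand)  # virtual rectangle
        short, long_ = min(rw, rh), max(rw, rh)
        if long_ > 0 and short / long_ <= max_side_ratio:
            cv2.drawContours(out, [cand], -1, 255, cv2.FILLED)  # stage 3: line-shaped
    return out
```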

Next, the lane area derivation unit 223 may derive the plurality of lane areas 33 by connecting adjacent lane configuration areas 31 so that curvature is minimized. Specifically, the lane area derivation unit 223 includes a coordinate setting unit 223-1 for setting a plurality of coordinates 32 on the plurality of lane configuration areas 31; a virtual line forming unit 223-2 for forming a plurality of virtual lines i by connecting adjacent coordinates 32; a curvature calculating unit 223-3 for calculating the curvature of each of the plurality of virtual lines i; and a lane area determining unit 223-4 for determining, as the lane areas 33, the plurality of virtual lines i that are formed with the minimum curvature and run in parallel with the same curvature.

The coordinate setting unit 223-1 may set a plurality of coordinates 32 on the plurality of lane configuration areas 31 determined in the binarized image 30. The coordinates 32 may be set on areas including the top, middle, and bottom of each lane configuration area 31; since a straight line or a curve can be formed by connecting the coordinates 32, the number of coordinates 32 set on each lane configuration area 31 is not particularly limited.

Then, the virtual line forming unit 223-2 may form a plurality of virtual lines i by connecting adjacent coordinates 32. For virtual lines i to become the lane areas 33 corresponding to the lanes on the road on which the vehicle 1000 travels, they must be formed in parallel with the minimum curvature, in a shape corresponding to the lanes on the road.

Therefore, the curvature calculating unit 223-3 can calculate the curvature of the plurality of virtual lines i, thereby providing a basis for determining which virtual lines i correspond to the plurality of lane areas 33.

Since the photographing unit 210 can provide relative distance values between the vehicle 1000 and subjects, including the lanes, as described above, the curvature calculating unit 223-3 can determine the relative distance between the vehicle 1000 and a lane area 33 formed to correspond to a lane, and then determine the curvature of the lane area 33 from the change of its displacement within the image.

The lane area determining unit 223-4 can determine, as the lane areas 33, the plurality of virtual lines i that are formed with the minimum curvature among the plurality of virtual lines i and run in parallel with the same curvature.
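
A sketch of the curvature calculation and the minimum-curvature, equal-curvature selection might look as follows; the circumcircle estimate of discrete curvature and the equality tolerance are assumptions made for the example.

```python
import numpy as np

def polyline_curvature(pts: np.ndarray) -> float:
    """Mean discrete curvature of a virtual line given as ordered (x, y)
    coordinates, estimated from the circumcircle of consecutive triples."""
    ks = []
    for a, b, c in zip(pts, pts[1:], pts[2:]):
        # twice the signed triangle area, via the 2D cross product
        cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        area = 0.5 * abs(cross)
        d = np.linalg.norm(a - b) * np.linalg.norm(b - c) * np.linalg.norm(c - a)
        ks.append(0.0 if d == 0 else 4.0 * area / d)  # k = 4A / (|ab||bc||ca|)
    return float(np.mean(ks)) if ks else 0.0

def pick_lane_areas(lines: list, tol: float = 1e-3):
    """Return the indices of the pair of virtual lines with minimum and
    (near-)equal curvature, mirroring the lane area determination above."""
    curvatures = [polyline_curvature(line) for line in lines]
    best, best_k = None, float("inf")
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            if abs(curvatures[i] - curvatures[j]) <= tol:
                k = max(curvatures[i], curvatures[j])
                if k < best_k:
                    best, best_k = (i, j), k
    return best
```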

As described above, the lane area derivation unit 223 sets a plurality of coordinates 32 on the plurality of lane configuration areas 31 and connects them in the direction that minimizes curvature, thereby forming a plurality of lane areas 33 running side by side with the same curvature.

Because the lane area derivation unit 223 connects the plurality of lane configuration areas 31 not through a simple straight-line equation but by connecting the coordinates 32, set on areas including the top, middle, and bottom of each area, in the direction that minimizes curvature, it prevents inaccurate lane areas 33 from being formed by connections based merely on adjacent distances, and the lane areas 33 can be provided reliably.

The lane areas 33 derived by the lane area derivation unit 223 correspond to the road lanes appearing in the image 20 of the road on which the vehicle 1000 travels, and serve as a guide for guiding the traveling path of the vehicle 1000 in that image.

However, in a section where the curvature of the road changes suddenly, for example where the road bends sharply in one direction, the photographing unit 210 can capture the image 20 of the road only within a short range from the vehicle 1000. In that case, since the photographing unit 210 cannot photograph the road at a distance from the vehicle 1000, the vehicle 1000 cannot sufficiently recognize the lane areas 33, the areas corresponding to the lanes on the road, and a problem arises in driving the vehicle 1000 in correspondence with the lane areas 33.

In addition, when the photographing unit 210 cannot photograph the road at a distance from the vehicle 1000, the vehicle 1000 may fail to correctly recognize an obstacle such as a pedestrian or a fallen rock, or another vehicle 1000, and a problem may arise in supporting the corresponding driving.

Therefore, the photographing unit 210 can rotate according to the shape of the road on which the vehicle 1000 travels, so as to accurately photograph the image 20 of the road regardless of its shape.

Here, for the photographing unit 210 to rotate in accordance with the shape of the road, a basis for determining whether the photographing unit 210 should rotate in correspondence with the road shape is required; the lane departure analysis unit 230 may analyze the lane areas 33 and provide the basis for the rotation of the photographing unit 210 corresponding to the lane areas 33.

Specifically, the lane departure analysis unit 230 may analyze, according to the curvature, whether the lane areas 33 are located at a predetermined distance or more from the vehicle 1000 in the image.

The predetermined distance may be a distance within which the vehicle 1000 can respond to the information on the various subjects provided through the image photographed by the photographing unit 210 while the vehicle 1000 travels on the road, such as the lanes, pedestrians, obstacles, and other vehicles 1000 that may exist on the road. For the vehicle 1000 to recognize this information from the image and travel accordingly, lane areas 33 extending a sufficient distance must be detected from the image photographed through the photographing unit 210.

If the lane areas 33 detected from the image 20 are insufficient, the vehicle 1000 cannot accurately recognize its traveling path on the road, and a problem may arise in accurately following the traveling path of the vehicle 1000.

Therefore, the lane departure analysis unit 230 can analyze whether the lane areas 33 are located at a predetermined distance or more in the image, and the photographing unit 210 can rotate accordingly. Specifically, when the lane departure analysis unit 230 determines that the plurality of lane areas 33 are not located at a predetermined distance or more from the vehicle 1000 in the image, the photographing unit 210 may be rotated according to the degree of curvature of the lane areas 33 so as to position the plurality of lane areas 33 at a predetermined distance or more from the vehicle 1000 in the image.

To this end, the driving support system may determine the rotation of the photographing unit 210 based on the analysis result of the lane departure analysis unit 230 through a photographing unit rotation control unit 240 (not shown).

FIG. 19 shows the rotation of the photographing unit 210 and its image capturing range according to the curvature of the road on which the vehicle 1000 travels according to an embodiment of the present invention, and FIG. 20 shows the rotation of the photographing unit 210 according to an embodiment of the present invention.

Referring to FIGS. 19 and 20, when the lane areas 33 in the image photographed by the photographing unit 210 are located beyond the predetermined distance, that is, when the angle of view of the photographing unit 210 allows the road to be photographed at the predetermined distance or more (FIGS. 19(a) and 20(a)), the photographing unit 210 can sufficiently detect the lane areas 33 from the image 20 of the road.

However, when the angle of view of the photographing unit 210 does not allow the road to be photographed at the predetermined distance or more (FIGS. 19(b) and 20(b)), the photographing unit 210 can photograph the image 20 of the road only within a near range from the vehicle 1000.

As described above, when the lane departure analysis unit 230 determines that the lane areas 33 are not located at a predetermined distance or more from the vehicle 1000 in the image, the photographing unit rotation control unit 240 controls the photographing unit 210 to rotate according to the degree of curvature of the curved section so that the lane areas 33 are captured (FIG. 19(b), FIG. 20(c)).

The curvature is the value calculated by the curvature calculating unit 223-3 of the lane area derivation unit 223; as the photographing unit 210 rotates according to this curvature, the lane areas 33 can be detected from the image 20 photographed within the range of its angle of view.

In the above, the rotation of the photographing unit 210 has been described based on whether the lane areas 33 are located at a predetermined distance or more from the vehicle 1000 in the image to which the lane areas 33 are applied.

However, the lane departure analysis unit 230 may also provide the basis for the rotation of the photographing unit 210 through the extent of the area that the lane areas 33 occupy in the image; that is, the curvature may be compared with a predetermined reference curvature, and the basis for the rotation of the photographing unit 210 may be presented according to the comparison result.

Specifically, the lane departure analysis unit 230 compares the curvature with the predetermined reference curvature; when the curvature is equal to or greater than the reference curvature, it can be analyzed that the lane areas 33 are not located at a predetermined distance or more from the vehicle 1000.

The reference curvature may be a value obtained by previously measuring the curvature according to the extent to which the lane areas 33 appear in images obtained through the photographing unit 210 of a vehicle 1000 to which the driving support system is applied; by analyzing the curvature of the lane areas 33 against this reference and presenting the basis for rotation accordingly, the rotation of the photographing unit 210 can be made to respond to the road.
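
A minimal sketch of this curvature-comparison rotation decision, assuming a simple pan-step controller, is shown below; the step size and the bend-direction sign are illustrative and not specified by the present disclosure.

```python
def update_camera_pan(pan_deg: float, lane_curvature: float,
                      reference_curvature: float, bend_sign: int,
                      step_deg: float = 2.0) -> float:
    """If the lane curvature meets or exceeds the reference curvature, the
    lane area is taken as not reaching the predetermined distance, so the
    photographing unit pans toward the bend; otherwise it eases back."""
    if lane_curvature >= reference_curvature:
        return pan_deg + bend_sign * step_deg   # rotate into the curve
    if pan_deg > 0:
        return max(0.0, pan_deg - step_deg)     # ease back toward straight ahead
    return min(0.0, pan_deg + step_deg)
```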

The photographing unit 210 may also rotate along the curvature of the plurality of lane areas 33 detected from the image of the road on which the vehicle 1000 is expected to travel, according to the steering direction of the steering wheel operated by the driver of the vehicle 1000.

Specifically, the road on which the vehicle 1000 travels may divide into a plurality of roads at a certain point. When such a dividing point is located ahead of the vehicle 1000, the photographing unit 210 must rotate along the curvature of one of the plurality of divided roads.

In this case, the photographing unit 210 may photograph the road in the direction in which the steering wheel is turned, that is, in the expected traveling direction of the vehicle 1000, so that the photographing unit 210 rotates in accordance with the driving intention of the driver of the vehicle 1000.

Since the driving support system described above includes the photographing unit 210 that rotates according to the curvature of the road, it can photograph the road regardless of its shape, provide the information on the lane areas 33 that underlies keeping the vehicle 1000 on its normal traveling path, and provide information on obstacles, such as pedestrians or fallen rocks, that can be derived from the image, thereby contributing to the driving stability and flexibility of the vehicle 1000.

For example, the driving support system can recognize a target object on the traveling path of the vehicle 1000 and control the operation of the vehicle 1000 accordingly.

Specifically, the driving support system includes a target object recognizing unit 250 for recognizing a target object around the vehicle 1000, a target object analyzing unit 260 for analyzing the distance between the vehicle 1000 and the target object, and a driving support unit 270 for supporting the operation of the vehicle 1000 according to the analysis result of the target object analyzing unit 260.

The target object recognizing unit 250 can recognize the target object through the image provided from the photographing unit 210; as described above, it can recognize the target object through the information of the images 20 photographed by the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223.

The target object may be recognized through the binarized image 30 generated by the binarized image generating unit 221 from the image 20 provided by the photographing unit 210, or directly from the target object shown in the image 20 itself. The target object may also be recognized through an ultrasonic sensor (not shown) that may additionally be provided in the photographing unit 210 or the vehicle 1000.

As described above, the target object can be recognized through a variety of methods; accordingly, a detailed description of those methods is omitted.

The target object analyzing unit 260 may analyze the distance between the vehicle 1000 and the target object. As described above, the photographing unit 210 includes a plurality of camera modules and provides a relative distance value between a subject in the photographed image and the vehicle 1000; therefore, the target object analyzing unit 260 may analyze the relative distance between the vehicle 1000 and the target object through the relative distance value provided from the photographing unit 210.

The vehicle 1000 needs to travel while maintaining a certain distance from the target object. If the vehicle 1000 comes very close to the target object, the possibility of a collision increases; therefore, the vehicle must travel at a distance greater than the safety distance, the distance at which a collision can be avoided.

The safety distance may be the minimum distance at which the vehicle 1000 can avoid a collision with the target object while running, determined on the basis of the speed of the vehicle 1000 and the relative distance between the vehicle 1000 and the target object.
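
Since no formula for the safety distance is fixed here, the sketch below assumes a standard stopping-distance model, reaction distance plus braking distance, purely for illustration; the reaction time and deceleration constants are hypothetical.

```python
def safety_distance(speed_mps: float,
                    reaction_s: float = 1.0,
                    decel_mps2: float = 7.0) -> float:
    """Assumed model: distance covered during the reaction time plus the
    braking distance v^2 / (2a) at a constant deceleration a."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def needs_emergency_braking(relative_distance_m: float, speed_mps: float) -> bool:
    """Trigger emergency braking support when the target object is closer
    than the safety distance."""
    return relative_distance_m < safety_distance(speed_mps)
```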

Accordingly, when the target object analyzing unit 260 determines that the vehicle 1000 has approached the target object to less than the safety distance, the emergency braking unit 271 included in the driving support unit 270 supports emergency braking of the vehicle 1000, thereby preventing the vehicle 1000 from colliding with the target object.

Emergency braking of the vehicle 1000 under the support of the driving support unit 270 can be explained with reference to FIG. 13.

FIG. 13 is a diagram for explaining the cruise support unit 163 described in the foregoing embodiment. In the following description of the emergency braking of the vehicle 1000, the reference numeral of the cruise support unit 163 shown in FIG. 13 can be understood as the reference numeral of the emergency braking unit 271, and the reference numeral of the brake pedal p can be understood to be used in common.

Referring to FIG. 13, the brake pedal p is an essential component of the vehicle 1000 for reducing its speed or braking; in general, the brake pedal p functions by being depressed by the driver of the vehicle 1000.

However, in the driving support system according to the present invention, when the target object analyzing unit 260 determines that the vehicle 1000 is closer to the target object than the safety distance, the brake pedal p can be depressed automatically under the control of the emergency braking unit 271.

Specifically, the emergency braking unit 271 may apply power to the solenoid 1210 so that the solenoid 1210 delivers compressed air from the air pump 1220 to the cylinder 1230; the piston rod 1240, which is located inside the cylinder 1230 and driven up and down, is then projected out of the cylinder 1230 to press the brake pedal p, enabling emergency braking of the vehicle 1000.
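
The solenoid-and-cylinder chain can be sketched as a simple software model; the SolenoidValve interface below is hypothetical, standing in for whatever driver actually energizes the solenoid 1210.

```python
class SolenoidValve:
    """Hypothetical driver: energizing routes compressed air from the
    air pump to the cylinder, projecting the piston rod."""
    def __init__(self) -> None:
        self.energized = False

    def set(self, on: bool) -> None:
        self.energized = on  # real hardware would switch the valve here

class EmergencyBrakingUnit:
    """Presses the brake pedal, via the piston rod, while the target
    object is closer than the safety distance."""
    def __init__(self, valve: SolenoidValve) -> None:
        self.valve = valve

    def update(self, relative_distance_m: float, safety_distance_m: float) -> None:
        # piston rod projects (pedal pressed) only while below the safety distance
        self.valve.set(relative_distance_m < safety_distance_m)
```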

As described above, the driving support unit 270 supports emergency braking of the vehicle 1000 when it approaches a target object to less than the safety distance while running, thereby preventing a collision with the target object.

The driving support unit 270 can thus intervene when a problem arises in the traveling of the vehicle 1000, such as an increased possibility of collision with the target object, and can also support the running of the vehicle 1000 while it is traveling normally.

Referring to FIGS. 13 and 14, when the target object analyzing unit 260 determines that the vehicle 1000 maintains the safety distance or more from the target object, the cruise support unit 272 included in the driving support unit 270 can support constant-speed or accelerated running of the vehicle 1000 by controlling the accelerator pedal of the vehicle 1000 to be depressed automatically.

The accelerator pedal controls the amounts of gasoline and air in the vehicle 1000, accelerating the rotation of the engine to accelerate and advance the vehicle 1000; in the normal state, it functions by being depressed by the driver of the vehicle 1000.

However, in the driving support system according to an embodiment of the present invention, when the target object analyzing unit 260 determines that the vehicle 1000 maintains a distance equal to or greater than the safety distance from the target object, the cruise support unit 272 can control the accelerator pedal to be depressed automatically so that the vehicle 1000 runs at constant speed or accelerates.

Referring to FIG. 13, when the target object analyzing unit 260 determines that the vehicle 1000 maintains a distance equal to or greater than the safety distance from the target object, the cruise support unit 272 applies power to the solenoid 1210 so that the solenoid 1210 supplies compressed air from the air pump 1220 to the cylinder 1230. The cylinder 1230 supplied with the compressed air projects the piston rod 1240, located inside it and driven up and down, to the outside to press the accelerator pedal, enabling constant-speed or accelerated running of the vehicle 1000. The degree of constant-speed or accelerated running under the support of the cruise support unit 272 can be determined by how far the accelerator pedal is depressed according to the degree of protrusion of the piston rod 1240.

The solenoid 1210 may be connected to an emergency release switch 1290 that can block the control of the accelerator pedal under the support of the cruise support unit 272.

Referring to FIG. 14, when the emergency release switch 1290 is operated to turn off the solenoid 1210, the solenoid 1210 can no longer deliver compressed air from the air pump 1220 to the cylinder 1230, and the piston rod 1240 located inside the cylinder 1230 is retracted into the cylinder 1230. Therefore, when the emergency release switch 1290 is actuated, the accelerator pedal is switched to a completely manual mode in which the support of the driving support unit 270 is interrupted.

Accordingly, the driver of the vehicle 1000 can selectively apply the driving support of the cruise support unit 272, making the operation of the vehicle 1000 more flexible; moreover, if the cruise support unit 272 malfunctions and creates a risk factor in the driving of the vehicle 1000, the driver can quickly switch the vehicle 1000 to the manual driving mode and operate it more safely.

Hereinafter, a driving support method will be described in which, while the vehicle 1000 equipped with the driving support system is driving, the photographing unit 210 mounted on the vehicle 1000 rotates in accordance with the curvature of the road on which the vehicle 1000 is located, captures an image of the surroundings of the vehicle 1000, and supports the operation of the vehicle 1000 according to the captured image.

FIG. 21 is a flowchart of a driving support method according to another embodiment of the present invention.

The driving support method photographs the periphery of the vehicle 1000 during driving and supports the driving of the vehicle 1000 according to the photographed image. It includes a photographing step S210 of photographing an image around the vehicle 1000 through at least one photographing unit 210, a lane area detecting step S220 of detecting a plurality of lane areas 33 from the image, and a lane departure analysis step S230 of analyzing, according to the curvature, whether the plurality of lane areas 33 are located at a predetermined distance or more from the vehicle 1000 in the image. If it is determined in the lane departure analysis step S230 that the plurality of lane areas 33 are not located at a predetermined distance or more from the vehicle 1000 in the image, the at least one photographing unit 210 may rotate according to the degree of curvature.

The photographing unit 210 photographs the image 20 around the vehicle 1000 from the vehicle 1000 while it is traveling or stopped; the position where the photographing unit 210 is mounted on the vehicle 1000 is not particularly limited as long as the image 20 can be photographed. However, since the main function of the vehicle 1000 is generally high-speed forward travel, a camera may be mounted at at least one position including the front area of the vehicle 1000 so as to photograph the surroundings of the vehicle 1000 from at least one area including its front.

The photographing unit 210 mounted on the vehicle 1000 photographs the image 20 around the vehicle 1000; in addition, determining the distance between a subject and the vehicle 1000 may be one of the important functions of the photographing unit 210. Here, the subject may be any object that can appear in the image 20, such as a lane of the road on which the vehicle 1000 travels, an obstacle, a pedestrian, or another vehicle 1000.

That is, the photographing unit 210 photographs an image of the surroundings of the vehicle 1000 and provides a relative distance value between a subject and the vehicle 1000, enabling the vehicle 1000 to travel in response to the subject. The traveling of the vehicle 1000 in correspondence with the subject will be described later.

The photographing unit 210 may include a plurality of camera modules 221, 222, and 223 arranged along the same horizontal axis on one side of the photographing unit 210 to photograph the image 20 outside the vehicle 1000.

By including a plurality of camera modules, the photographing unit 210 can determine the relative distance between the vehicle 1000 and a subject. Since the plurality of camera modules are arranged on the same horizontal axis of the photographing unit 210, the images 20 photographed by them show the same subject at different positions under the same image standard, that is, with different left-right positional differences (parallax).

Accordingly, the photographing unit 210 can provide a relative distance value between the subject and the vehicle 1000 on which the photographing unit 210 is mounted, through the parallax between the same subject shown in the respective images 20 provided from the plurality of camera modules.
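
A sketch of the parallax-to-distance relation under a rectified pinhole stereo model follows; the focal length and module spacing values are illustrative assumptions, not values given in the present disclosure.

```python
FOCAL_LENGTH_PX = 1200.0   # assumed focal length, in pixels
BASELINE_M = 0.30          # assumed spacing between camera modules, in meters

def relative_distance_m(x_left_px: float, x_right_px: float) -> float:
    """Distance to a subject from its horizontal positions in the left and
    right images: Z = f * B / d, where d is the parallax (disparity)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("the same subject must sit further left in the right image")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity
```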

The photographing unit 210 may include at least one of a visible light camera module 221, an infrared camera module 222, and a thermal imaging camera module 223; as described above, it may include a plurality of these modules in order to determine the relative distance between a subject in the image 20 and the vehicle 1000.

The plurality of visible light camera modules 221 that may be included in the photographing unit 210 may be CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) camera modules; since they photograph the visible band of about 400 to 800 nm, they are readily used during the day, when an optimized image can be obtained through light.

At night, when visible light is weak, the energy in the visible band available for photographing is very small, so the visible light camera module 221 may not be able to photograph unless used together with a separate illumination device. Even when used with an illumination device, however, the resulting image may differ depending on the intensity and type of the device, and its quality can be degraded.

Therefore, the photographing unit 210 includes a plurality of infrared camera modules 222 together with the plurality of visible light camera modules 221; in the daytime the surroundings of the vehicle 1000 are photographed through the visible light camera module 221, and at night the infrared camera module 222 may take over the role of the visible light camera module 221.

In general, on nights when moonlight or starlight exists, near-infrared energy in the band around 850 nm is considerably present, so the photographing unit 210 can photograph an image of the outside of the vehicle 1000 with the infrared camera module 222 alone.

However, on darker nights in which starlight exists but moonlight does not, there is little near-infrared energy; instead, significant energy is present in the infrared region of the 1000 to 1200 nm band. Accordingly, the infrared camera module 222 may be one capable of photographing infrared rays in the 1000 to 1200 nm range so as to be prepared for such darker situations.

Conversely, on bright nights in which both moonlight and starlight exist, there is almost no infrared energy in the 1000 to 1200 nm band. To compensate for this, the photographing unit 210 may include an infrared LED 224 for emitting infrared light of the 1000 to 1200 nm band. Therefore, on relatively dark nights, in which near-infrared energy in the 1000 to 1200 nm band is considerably present, the vicinity of the vehicle 1000 can be photographed through the infrared camera module 222 alone, and on relatively bright nights, when there is almost no infrared energy in the 1000 to 1200 nm band, the infrared LED 224 can supplement that energy band, so that an image can be captured through the infrared camera module 222 on any night, regardless of brightness.

In contrast, the infrared camera module 222 may instead photograph near-infrared rays in the band around 850 nm; in that case, when near-infrared rays in the band around 850 nm are hardly detected on relatively dark nights, the photographing unit 210 may be configured to supplement them through the infrared LED 224.

However, infrared rays with a wavelength of 1000 to 1200 nm can transmit energy farther than near-infrared rays with a wavelength of 850 nm, and can also sense shapes that are not perceptible to the naked eye. Therefore, it is more preferable that the infrared camera module 222 photograph the 1000 to 1200 nm band, with the infrared LED 224 radiating infrared rays of 1000 to 1200 nm to complement the infrared camera module 222 on relatively bright nights.

In addition, the photographing unit 210 may include at least one thermal imaging camera module 223. The thermal imaging camera module 223 senses a subject by electronically measuring the subject's thermal radiation; in the image 20 of the outside of the vehicle 1000 recognized through the thermal imaging camera module 223, the color varies with the measured temperature of the subject, which makes it useful for recognizing subjects having a body temperature, such as a pedestrian or an animal on the road.

Accordingly, the photographing unit 210 can easily recognize subjects on the driving route of the vehicle 1000 both in the daytime and at night through the visible light camera module 221 and the infrared camera module 222, and can more accurately recognize subjects having a body temperature, such as a pedestrian or an animal, through the thermal imaging camera module 223.

The photographing unit 210 may include at least one air jetting unit 225. As shown in FIG. 17, the air jetting unit 225 jets compressed air onto the lenses of the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223, which are located outside the vehicle 1000 and can be contaminated by various external conditions such as rain and snow. By cleaning these lenses, it can prevent the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223 from being degraded in function due to contamination.

The photographing unit 210 may further include a washer liquid injector 226 for injecting washer liquid onto the lenses of the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223, and a wiper 227 for wiping those lenses, so that the external recognition capability of the photographing unit 210 can be kept from deteriorating.

The lane area detecting step S220 may detect a plurality of lane areas 33 from the image photographed through the photographing unit 210 in the photographing step S210. More specifically, the lane area detecting step S220 includes a binarized image generating step S221 of generating the binarized image 30 by binarizing the image so that the lane candidate areas w appear white and the background areas b outside the lane candidate areas w appear black; a binarized image filtering step S222 of whitening background areas b of a predetermined area or less within the lane candidate areas w of the binarized image and determining, among the lane candidate areas w, a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000; and a lane area deriving step S223 of deriving a plurality of lane areas 33 by connecting adjacent lane configuration areas 31 so that curvature is minimized.

The binarized image generating step S221 may include a stretching step S221-1 of stretching the image to enhance the contrast of the image 20, and a binarization processing step S221-2 of dividing the contrast-enhanced image 25 into a plurality of unit areas l and binarizing each unit area l to generate the binarized image 30.

Generally, the contrast of an image means the difference between the bright and dark regions of the image, that is, the contrast ratio; the larger the contrast ratio, the higher the contrast. Since an image having high contrast can be regarded as sharp, owing to the clear difference between its bright and dark regions, a low-contrast image can be converted into a clearer one by processing it to have higher contrast.

Therefore, the stretching step S221-1 may extend the range between the minimum and maximum image levels of the pixels of the image 20, distributing the image levels more evenly, thereby increasing the contrast ratio of the image and making the image 20 sharper.

The stretching step S221-1 may enhance the contrast of the image through histogram stretching, a widely used technique in which the histogram of the image is extracted and its minimum and maximum image levels are extended; since this is a well-known method, a detailed description of the process is omitted.

The binarization processing step S221-2 binarizes the contrast-enhanced image 25 so that the lane candidate areas w appear white and the background areas b outside the lane candidate areas w appear black, thereby generating the binarized image 30.

The white lane candidate areas w and the black background areas b formed in the binarized image 30 are clearly distinguished in the image around the vehicle 1000, and thus provide important information for detecting the lane areas 33, the areas corresponding to the lanes of the road on which the vehicle 1000 is traveling.

That is, a lane candidate area w is an area in which a lane area 33 can be formed. In the binarized image 30 according to the present invention, its color range is limited to white, and the background areas b outside the lane candidate areas w are limited to black, thereby providing a candidate group from which the lane areas 33 can be detected.

In a general binarization process, one of the brightness values of the image to be processed is selected as a reference brightness value; brightness values larger than the reference are expressed as white, and smaller brightness values as black.

Accordingly, the binarization processing step S221-2 may select one brightness value from the full brightness range of the contrast-enhanced image 25 as the reference brightness value, setting regions with larger brightness values to white and regions with smaller brightness values to black. Here, the reference brightness value preferably lies below the brightness of the road lanes as they appear in the image; to this end, it is preferably selected within the range of brightness values of road lanes appearing in images previously captured in the photographing step S210.

As for the method by which the binarization processing step S221-2 determines the reference brightness value, a single reference brightness value may be determined based on the previously measured brightness values of road lanes as described above, or it may be determined using the average brightness of the boundaries formed in the contrast-enhanced image 25, or the distribution of the peaks and valleys of the histogram of the contrast-enhanced image 25; any method may be used as long as it determines a reference brightness value that clearly distinguishes the lane candidate areas w from the background areas b.

The image 20 photographed in the photographing step S210 may be partially illuminated by various light sources, such as the lights of other vehicles 1000 running on the road, streetlights, sunlight, or moonlight; such illumination regions cause a problem in generating a binarized image 30 with clear boundary division.

Therefore, the binarization processing step S221-2 may divide the contrast-enhanced image into a plurality of unit areas l and binarize each unit area l separately to generate the binarized image. When the contrast-enhanced image is binarized in this way, removing the illumination region within each unit area l is relatively easy; compared with binarizing the entire image at once, a binarized image 30 in which the white lane candidate areas w and the black background areas b are clearly distinguished can be obtained.

Next, the binarized image filtering step S222 whitens background areas b of a predetermined area or less within the lane candidate areas w of the binarized image, and determines, among the lane candidate areas w, a plurality of lane configuration areas 31 corresponding to the lanes around the vehicle 1000 in the binarized image 30.

Specifically, the binarized image filtering step S222 may include a background area filtering step S222-1 of whitening background areas b of a predetermined area or less within the lane candidate areas w; a lane candidate area filtering step S222-2 of deleting lane candidate areas w of a predetermined area or less; and a lane configuration area determination step S222-3 of determining the lane configuration areas 31 by deleting the lane candidate areas w enclosed by virtual rectangles whose ratio of short-side length to long-side length exceeds a predetermined ratio.

The background area filtering step S222-1 whitens background areas b of a predetermined area or less among the plurality of background areas b existing within the lane candidate areas w of the binarized image 30; by whitening background areas b whose area is smaller than that of a lane configuration area 31, it clarifies the shape of the lane configuration areas 31.

The area of a lane configuration area 31 can be derived from previously measured values, that is, the area values of lane configuration areas 31 previously detected by the driving support method according to the present invention. For more reliable whitening of the background areas b existing within the lane configuration areas 31, the background area filtering step S222-1 preferably whitens background areas b whose area is less than the minimum of the areas of the plurality of lane configuration areas 31, so that the shapes of the plurality of lane configuration areas 31 become clear overall.

The lane candidate area filtering step S222-2 may delete lane candidate areas w of a predetermined area or less from the plurality of lane candidate areas w existing in the binarized image 30.

Like the background area filtering step S222-1, the lane candidate area filtering step S222-2 may use the area of the lane configuration areas 31 previously detected by the driving support method according to the present invention as its criterion: by deleting from the binarized image 30 the lane candidate areas w smaller than the area of a lane configuration area 31, it facilitates the determination of the lane configuration areas 31.

According to the present invention, the shape of a lane configuration area 31 preferably matches the shape of a lane on the road on which the vehicle 1000 travels, that is, a line shape.

Accordingly, the lane configuration area determination step S222-3 may determine the lane configuration areas 31 by deleting the lane candidate areas w contained in those virtual rectangles, among the plurality of virtual rectangles surrounding each of the lane candidate areas w in the binarized image, whose ratio of short-side length to long-side length exceeds the predetermined ratio.

For the virtual rectangles surrounding each of the plurality of lane candidate areas w, the smaller the ratio of the short-side length to the long-side length, the more likely it is that a line-shaped lane configuration area 31, with a short minor axis and a long major axis, lies inside. Conversely, as the ratio of the short-side length to the long-side length increases, the area inside the virtual rectangle is more likely to have a shape such as a rectangle or a circle rather than a line shape.

Therefore, the lane configuration area determination step S222-3 deletes the lane candidate areas w located inside virtual rectangles whose short-side to long-side length ratio exceeds the predetermined value, leaving only the lane candidate areas w that can become the plurality of lane configuration areas 31 of the binarized image.

Here, the predetermined value of the ratio of the short-side length to the long-side length may be derived from the side-length ratios of the virtual rectangles surrounding lane configuration areas 31 previously detected according to the present invention, and is preferably derived from the maximum of the short-side to long-side length ratios of the plurality of previously measured virtual rectangles.

As described above, the binarized image filtering step S222 determines the lane configuration areas 31 from the binarized image through three stages of image processing: whitening the background areas b below a predetermined area, that is, below the area of a lane configuration area 31, formed within the lane candidate areas w; deleting the lane candidate areas w below the area of a lane configuration area 31; and deleting the lane candidate areas w that are not line-shaped.

Therefore, the lane detection of the driving support method according to an embodiment of the present invention can detect the lanes more accurately than a method that merely derives a straight-line equation through coordinates 32 extracted from the binarized image 30.

Next, the lane area deriving step S223 may derive a plurality of the lane area 33 by connecting the lane configuration areas 31 adjacent to each other so that curvature is minimized.

More specifically, the lane area deriving step S223 includes a coordinate setting step S223-1 for setting a plurality of coordinates 32 on the plurality of lane-forming constituent areas 31, a step of setting the coordinates 32 adjacent to each other A curvature computing step (S223-3) of computing a curvature of each of the plurality of imaginary lines (i), and a step of calculating a curvature of each of the plurality of virtual lines (i) And a lane area determination step (step S223-4) of determining the plurality of virtual lines i formed in parallel with the same curvature among the lines i as the lane area 33. [

The coordinate setting step S223-1 may set a plurality of the coordinates 32 on the plurality of the lane configuration areas 31 determined in the binarized image. A plurality of the coordinates 32 may be set on an area including the top, middle, and bottom ends of each of the plurality of lane constituent regions 31, and a straight line or a curve may be formed through connection of the plurality of coordinates 32 The number of the coordinates 32 that can be set in each of the lane configuration areas 31 is not particularly limited.

Subsequently, the virtual line forming step S223-2 may form a plurality of virtual lines i by connecting the coordinates 32 adjacent to each other.

Here, for virtual lines (i) to become the lane areas 33 corresponding to the lanes of the road on which the vehicle 1000 runs, a plurality of them must take the shape of lanes on a road, that is, they must be formed side by side with minimum curvature.

Accordingly, the curvature computing step S223-3 can compute the curvature of each of the plurality of virtual lines (i), and can thereby provide the basis for determining which virtual lines (i) correspond to the plurality of lane areas 33.

As described above, since the photographing unit 210 can provide relative distance values between the subjects in the image and the subject vehicle 1000, the curvature computing step S223-3 can determine the relative distance between the vehicle 1000 and the lane area 33 formed to correspond to the lane, and can then compute the curvature of the lane area 33 from its position within the image 20.

Subsequently, the lane area determination step S223-4 may determine, as the lane areas 33, the plurality of virtual lines (i) that are formed with minimum curvature among the plurality of virtual lines (i) and run in parallel with the same curvature.
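
A minimal sketch of the curvature computing step S223-3 and the lane area determination step S223-4 follows. The algebraic circle fit used to obtain a curvature and the pairing tolerance are assumptions; the patent only requires that the selected virtual lines have minimal, matching curvature.

```python
import numpy as np

def polyline_curvature(pts: np.ndarray) -> float:
    """Curvature (1/radius) of a least-squares circle fitted to the points."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(max(c + cx ** 2 + cy ** 2, 0.0))
    return 1.0 / radius if radius > 0 else float("inf")

def determine_lane_areas(lines: list[np.ndarray], tol: float = 1e-3) -> list[int]:
    """Return indices of the two lines with minimal, matching curvature."""
    ks = [polyline_curvature(l) for l in lines]
    order = sorted(range(len(lines)), key=lambda n: ks[n])  # flattest first
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            if abs(ks[order[a]] - ks[order[b]]) < tol:
                return [order[a], order[b]]
    return []
```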

As described above, the lane area deriving step S223 sets a plurality of coordinates 32 on the plurality of lane configuration areas 31 and connects the coordinates 32 in the direction that minimizes curvature, thereby forming a plurality of lane areas 33 that run side by side with the same curvature.

In the lane area deriving step S223, the plurality of lane configuration areas 31 are not connected through a simple straight-line equation; rather, the plurality of coordinates 32 set on the top, middle, and bottom of each area are connected in the direction that minimizes curvature. This prevents an inaccurate lane area 33 from being formed by connecting areas merely because they are adjacent, and has the effect that the lane area 33 can be provided reliably.

The lane area 33 derived in the lane area deriving step S223 corresponds to the lanes appearing in the image of the road on which the vehicle 1000 runs, photographed by the photographing unit 210, and can serve as a guide for the traveling path of the subject vehicle 1000.

However, in a section where the curvature of the road changes suddenly, for example where the road bends sharply in one direction, the photographing unit 210 may capture only the image of the road within a short range of the subject vehicle 1000. In this case, since the photographing unit 210 cannot capture the image of the road far from the subject vehicle 1000, the lane area 33, which is the area corresponding to the lanes on the road, cannot be recognized sufficiently, and a problem arises in running the vehicle 1000 in correspondence with the lane area 33.

In addition, when the photographing unit 210 cannot capture the image of the road far from the subject vehicle 1000, obstacles such as pedestrians, or vehicles other than the subject vehicle 1000, may not be recognized correctly, causing a problem in supporting the corresponding response.

Accordingly, by rotating in correspondence with the shape of the road on which the subject vehicle 1000 runs, the photographing unit 210 can accurately capture the image of the road regardless of its shape.

For the photographing unit 210 to rotate in accordance with the shape of the road, a basis is needed for deciding whether the photographing unit 210 should rotate in correspondence with the shape of the road, and the lane departure analysis step S230 may analyze the lane area 33 to provide the basis for rotating the photographing unit 210 in correspondence with the lane area 33.

More specifically, the lane departure analysis step S230 may analyze, according to the curvature, whether the lane area 33 is located a predetermined distance or more from the subject vehicle 1000 in the image.

Here, the predetermined distance means the distance within which the subject vehicle 1000 can respond, while traveling on the road, to the information on the various objects provided through the image 20 photographed by the photographing unit 210, such as the lanes marked on the road and the pedestrians, obstacles, and other vehicles that may be present on the road. For the vehicle 1000 to recognize this information from the photographed image 20 and travel in response to it, the lane area 33 must be detected over a sufficient distance in the image 20 that can be photographed through the photographing unit 210.

If the lane area 33 detected from the image 20 is insufficient, the subject vehicle 1000 cannot accurately recognize its traveling path on the road, and a problem may arise in accurately following the traveling path provided through the lane area 33.

Therefore, the lane departure analysis step S230 can analyze whether the lane area 33 is located a predetermined distance or more in the image 20, and can thereby provide the basis for rotating the photographing unit 210. Specifically, when the lane departure analysis step S230 finds that the plurality of lane areas 33 are not located a predetermined distance or more from the subject vehicle 1000 in the image 20, the photographing unit 210 can be rotated according to the degree of curvature of the lane areas 33 so that the plurality of lane areas 33 come to be located a predetermined distance or more from the subject vehicle 1000 in the image 20.
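
This decision can be summarized in a few lines. In the sketch below, the required distance and the angle gain are assumed values chosen for illustration; the patent specifies only that the rotation follows the degree of curvature.

```python
def rotation_command(lane_distances_m: list[float],
                     curvature: float,
                     required_distance_m: float = 50.0,
                     gain_deg: float = 2000.0) -> float:
    """Pan angle (degrees, signed like the curvature) for the photographing unit."""
    farthest = max(lane_distances_m, default=0.0)
    if farthest >= required_distance_m:
        return 0.0               # lane area visible far enough; no rotation
    return gain_deg * curvature  # rotate toward the bend by its curvature
```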

The driving support method includes a rotation control step S240 of controlling the rotation of the photographing unit 210 according to the analysis result of the lane departure analysis step S230, and the photographing unit 210 may be rotated through the control of the rotation control step S240.

When the lane area 33 in the image 20 photographed by the photographing unit 210 is located beyond the predetermined distance, that is, when the angle of view of the photographing unit 210 covers the road beyond the predetermined distance, the photographing unit 210 can detect a sufficient lane area 33 from the image 20 of the road without rotating.

However, when the angle of view of the photographing unit 210 cannot cover the road beyond the predetermined distance (Figs. 19(b) and 20(b)), the photographing unit 210 can capture the image 20 of the road only in the area near the subject vehicle 1000.

As described above, when the lane departure analysis step S230 finds that the lane area 33 is not located a predetermined distance or more from the subject vehicle 1000 in the image 20, that is, when the photographing unit 210 can capture the image of the road only in the area near the subject vehicle 1000, the rotation control step S240 may rotate the photographing unit 210 according to the degree of curvature of the lane area 33 (Figs. 19(b) and 20(c)).

The curvature is the value computed in the curvature computing step S223-3 of the lane area deriving step S223, and as the photographing unit 210 rotates according to the curvature, the lane area 33 can be detected over a predetermined extent or more from the image photographed within the angle of view.

The foregoing has described analyzing whether the lane area 33 is located a predetermined distance or more from the subject vehicle 1000 in the image 20 and rotating the photographing unit 210 accordingly.

However, in addition to providing the basis for the rotation of the photographing unit 210 from the extent to which the lane area 33 is located in the image, the lane departure analysis step S230 may compare the curvature with a predetermined reference curvature and present the basis for the rotation of the photographing unit 210 according to the comparison result.

More specifically, the lane departure analysis step S230 compares the curvature with a predetermined reference curvature, and when the curvature is equal to or greater than the reference curvature, it can be concluded that the plurality of lane areas 33 are not located a predetermined distance or more from the subject vehicle 1000 in the image 20.

Here, the reference curvature is a value measured in advance: the curvature at which the lane area 33 ceases to be located a predetermined distance or more within the image 20 obtained through the photographing unit 210 of the vehicle 1000 to which the driving support method is applied. By analyzing the curvature of the lane area 33 against the reference curvature, the basis for the rotation of the photographing unit 210 can be presented.

In the photographing step S210, the photographing unit 210 can be rotated according to the degree of curvature of the plurality of lane areas 33 detected from the image 20 of the road on which the vehicle 1000 is expected to travel in accordance with the steering direction of the steering wheel operated by the driver of the subject vehicle 1000.

Specifically, the road on which the vehicle 1000 travels may divide into a plurality of roads at some point. When such a division point is located in front of the vehicle 1000, the photographing unit 210 must rotate along the curvature of one of the plurality of divided roads.

In this case, the photographing step S210 can rotate the photographing unit 210 toward the road, among the plurality of divided roads, that extends in the direction indicated by the rotation of the steering wheel, that is, in accordance with the anticipated traveling direction of the vehicle 1000 and the driving intention of the driver of the subject vehicle 1000.
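
The branch choice at a division point can be sketched as follows, assuming a sign convention in which a positive curvature and a positive steering angle denote the same turning direction; the function name and the fallback rule are illustrative assumptions.

```python
def select_branch(branch_curvatures: list[float], steering_angle_deg: float) -> int:
    """Index of the branch whose turn direction matches the steering input."""
    matching = [idx for idx, k in enumerate(branch_curvatures)
                if (k >= 0) == (steering_angle_deg >= 0)]
    candidates = matching or list(range(len(branch_curvatures)))
    return min(candidates, key=lambda idx: abs(branch_curvatures[idx]))
```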

Since the driving support method according to the present invention includes the photographing unit 210 that rotates according to the curvature of the road, the image 20 of the road can be photographed regardless of the shape of the road. The method can thus provide the information on the lane area 33 that underpins keeping the subject vehicle 1000 on its normal traveling path, and can also provide information on various target objects, such as pedestrians, obstacles, and other vehicles on the road, thereby contributing to the driving stability and flexibility of the vehicle 1000.

For example, the driving support method can recognize a target object in the vicinity of the subject vehicle 1000, that is, on the traveling path of the subject vehicle 1000, and control the operation of the subject vehicle 1000 in response to the target object.

Specifically, the driving support method includes a target object recognition step S250 of recognizing a target object around the subject vehicle 1000, a target object analysis step S260 of analyzing the distance between the subject vehicle 1000 and the target object, and a driving support step S270 of supporting the operation of the subject vehicle 1000 according to the analysis result of the target object analysis step S260.

In the target object recognition step S250, the target object may be recognized through the image 20 provided from the photographing unit 210, which, as described above, may include the visible light camera module 221, the infrared camera module 222, and the thermal imaging camera module 223.

In this case, the target object may be recognized from the image 20 provided by the photographing unit 210 through the binarized image 30 generated in the binarized image generation step S221, or recognized immediately from a subject in the image 20 itself photographed in the photographing step S210. The target object may also be recognized through an ultrasonic sensor (not shown) that may additionally be provided on the photographing unit 210 or the vehicle 1000.

As described above, since the target object can be recognized through various methods, a detailed description of those various recognition methods is omitted.

The target object analysis step S260 may analyze the distance between the subject vehicle 1000 and the target object. As described above, since the photographing unit 210 includes a plurality of camera modules, it can provide the relative distance value between a subject in the photographed image 20 and the subject vehicle 1000, and the target object analysis step S260 can analyze the relative distance between the subject vehicle 1000 and the target object through the relative distance value provided from the photographing unit 210.

The subject vehicle 1000 needs to travel while maintaining a certain distance from the target object. If the subject vehicle 1000 comes very close to the target object, the possibility of colliding with the target object increases, so the subject vehicle 1000 must travel at a distance greater than the safety distance, the distance required to avoid a collision.

Here, the safety distance may be determined based on the speed of the subject vehicle 1000 and the relative distance between the subject vehicle 1000 and the target object, and may be the minimum distance at which the subject vehicle 1000 can avoid a collision with the target object.
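
The patent does not give a formula for the safety distance; as one plausible reading, the sketch below derives it from the vehicle speed as a stopping distance (reaction travel plus braking travel), with the reaction time and deceleration as assumed parameters.

```python
def safety_distance_m(speed_mps: float,
                      reaction_time_s: float = 1.0,
                      decel_mps2: float = 6.0) -> float:
    """Minimum stopping distance: reaction travel plus braking travel."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)

def needs_emergency_braking(relative_distance_m: float, speed_mps: float) -> bool:
    """True when the target object is closer than the safety distance."""
    return relative_distance_m < safety_distance_m(speed_mps)
```

For example, at 20 m/s (72 km/h) these assumed parameters give 20 + 400/12, approximately 53 m.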

Therefore, when the target object analysis step S260 finds that the subject vehicle 1000 has come closer to the target object than the safety distance, the emergency braking step S271 included in the driving support step S270 can support the emergency braking of the subject vehicle 1000 so that it does not collide with the target object.

The emergency braking of the subject vehicle 1000 supported in the driving support step S270 can be explained with reference to Fig. 13.

Fig. 13 is a diagram originally provided to explain the cruise support unit 163 of the foregoing invention. In the following description of the emergency braking of the vehicle 1000, the reference numeral of the cruise support unit 163 shown in Fig. 13 can be read as the reference numeral of the emergency braking unit 271, and the reference numeral of the accelerator pedal (p) can be read as that of the brake pedal (p).

As shown in Fig. 13, the brake pedal (p) is an essential component of the vehicle 1000 for reducing its speed or braking, and in general it functions by being depressed by the driver of the vehicle 1000.

However, in the driving support method according to the present invention, when the target object analysis step S260 finds that the subject vehicle 1000 has come closer to the target object than the safety distance, the brake pedal (p) may be depressed automatically through the emergency braking unit 271 in the emergency braking step S271.

Specifically, the emergency braking unit 271 may apply power to the solenoid 1210 so that the solenoid 1210 delivers compressed air from the air pump 1220 to the cylinder 1230. The cylinder 1230 supplied with the compressed air projects the piston rod 1240, which is located inside the cylinder 1230 and driven up and down, to the outside of the cylinder 1230 to press the brake pedal (p), thereby enabling the emergency braking of the vehicle 1000.
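
The actuation chain (solenoid 1210, air pump 1220, cylinder 1230, piston rod 1240) can be mirrored in software as below. The class and the methods of the injected solenoid, pump, and cylinder objects are hypothetical, since the patent describes hardware rather than a software interface.

```python
class PedalActuator:
    """Presses a pedal by routing compressed air to a pneumatic cylinder."""

    def __init__(self, solenoid, air_pump, cylinder):
        self.solenoid = solenoid  # valve routing air to the cylinder (1210)
        self.air_pump = air_pump  # compressed-air source (1220)
        self.cylinder = cylinder  # houses the piston rod (1240)

    def press_pedal(self) -> None:
        self.solenoid.energize()                    # power the solenoid
        self.cylinder.feed(self.air_pump.output())  # deliver compressed air
        self.cylinder.extend_rod()                  # rod presses the pedal (p)

    def release_pedal(self) -> None:
        self.solenoid.deenergize()                  # cut the air supply
        self.cylinder.retract_rod()                 # rod returns into the cylinder
```

Used for emergency braking, press_pedal() corresponds to depressing the brake pedal (p); the same chain, aimed at the accelerator pedal, serves the cruise support described below, and release_pedal() matches the retraction triggered by the emergency release switch 1290.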

As described above, the driving support step S270 supports the emergency braking of the subject vehicle 1000 when it approaches a target object closer than the safety distance while traveling, so that a collision between the subject vehicle 1000 and the target object can be prevented.

However, the driving support step S270 can support not only the response function for when a problem arises in the running of the subject vehicle 1000, such as an increased possibility of colliding with the target object while traveling, but also the running of the vehicle 1000 while it is traveling normally.

As shown in Figs. 13 and 14, when the target object analysis step S260 finds that the subject vehicle 1000 maintains the safety distance or more from the target object, the cruise support step S272 included in the driving support step S270 controls the cruise support unit 272, which governs the constant-speed running of the subject vehicle 1000, so that the accelerator pedal of the subject vehicle 1000 is depressed automatically, thereby supporting the constant-speed running or accelerated running of the vehicle 1000.

The accelerator pedal (p) is a component that accelerates the engine of the vehicle 1000 by adjusting the amounts of fuel and air so that the vehicle 1000 can accelerate and advance, and in general it operates by being depressed by the driver of the vehicle 1000.

However, in the driving support method according to the present invention, when the target object analysis step S260 finds that the subject vehicle 1000 maintains a distance equal to or greater than the safety distance from the target object, the accelerator pedal can be controlled to be depressed automatically so that the vehicle 1000 can run at constant speed or accelerate.

As shown in Fig. 13, when the target object analysis step S260 finds that the subject vehicle 1000 maintains a distance equal to or greater than the safety distance from the target object, the cruise support step S272 may apply power to the solenoid 1210 through the cruise support unit 272 so that the solenoid 1210 supplies compressed air from the air pump 1220 to the cylinder 1230. The cylinder 1230 supplied with the compressed air projects the piston rod 1240, which is located inside the cylinder 1230 and driven up and down, to the outside of the cylinder 1230 to press the accelerator pedal, thereby enabling the constant-speed running or accelerated running of the subject vehicle 1000.

The constant-speed running or accelerated running of the subject vehicle 1000 under the support of the cruise support step S272 may be determined by the degree to which the accelerator pedal (p) is depressed, which is set by the degree of protrusion of the piston rod 1240.
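
A proportional mapping of the kind implied above might look like the following sketch, where the extension limit and the gain are assumptions; a small speed error yields a shallow depression (constant-speed running) and a large error a deeper one (accelerated running).

```python
def rod_extension_mm(target_speed_mps: float,
                     current_speed_mps: float,
                     max_extension_mm: float = 40.0,
                     gain: float = 4.0) -> float:
    """Extend the piston rod (1240) further the more the vehicle lags its target speed."""
    error = max(0.0, target_speed_mps - current_speed_mps)
    return min(max_extension_mm, gain * error)
```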

The solenoid 1210 may be connected to an emergency release switch 1290 that can block the control of the accelerator pedal p in accordance with the support of the cruise support step S272.

As shown in Fig. 14, when the emergency release switch 1290 is operated, the solenoid 1210 is turned off so that it no longer transfers compressed air from the air pump 1220 to the cylinder 1230, and the piston rod 1240 located inside the cylinder 1230 is retracted back into the cylinder 1230. Therefore, when the emergency release switch 1290 is operated, the accelerator pedal (p) is released and can be switched to a fully manual mode in which the support of the driving support step S270 is interrupted.

Accordingly, the driver of the subject vehicle 1000 can selectively apply the driving support of the cruise support step S272, which enables more flexible operation of the vehicle 1000. In addition, when the cruise support step S272 operates abnormally and creates a risk to the running of the subject vehicle 1000, the vehicle can be switched quickly to the manual driving mode, so that the driver can operate the vehicle 1000 more safely.

The foregoing description is merely illustrative of the technical idea of the present invention, and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention.

Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments.

The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.

110: photographing unit 120: binarized image generating unit
130: binarized image filtering unit 140: lane area derivation unit
150: lane departure analysis unit 160: driving support unit

Claims (48)

1. A driving support system for photographing the periphery of a vehicle that is traveling or is to travel along a driving route and supporting the driving of the vehicle in accordance with the photographed image, the driving support system comprising:
At least one photographing unit mounted on the vehicle and photographing an image of the surroundings of the vehicle;
A binarized image generation unit for generating a binarized image by binarizing the image so that a lane candidate region of the image is represented by white and a background region outside the lane candidate region is represented by black;
A binarized image filtering unit for determining, from the binarized image, a lane configuration area, which is an area corresponding to a lane around the vehicle, by whitening the background area smaller than a predetermined area within the lane candidate area and deleting the lane candidate area smaller than a predetermined area among the lane candidate regions;
A lane area derivation unit for deriving a plurality of lane areas by connecting the lane configuration areas adjacent to each other such that curvature is minimized; And
And a lane departure analysis unit for determining a relative distance between the vehicle and the plurality of lane areas and analyzing whether the vehicle is in contact with the lane adjacent to the vehicle.
2. The driving support system according to claim 1,
Wherein the at least one photographing unit includes a side surface curved toward all directions around the vehicle, and a plurality of camera modules disposed on the side surface to photograph an image around the vehicle.
3. The driving support system according to claim 2,
Wherein the at least one photographing unit comprises at least one visible light camera module or an infrared camera module.
4. The driving support system according to claim 1,
Wherein the binarized image generating unit comprises:
A panoramic image generation unit for generating a panoramic image by combining images photographed through the plurality of photographing units;
A stretching unit stretching the panorama image to improve a contrast of the panorama image; And
And a binarization processor for separating the panoramic image with improved contrast into a plurality of unit areas and binarizing the divided unit areas to generate the binarized image.
5. The driving support system according to claim 1,
Wherein the binarization image filtering unit comprises:
A background area filtering unit for whitening the background area smaller than a predetermined area within the lane candidate area;
A lane candidate region filtering unit for deleting the lane candidate region of a predetermined area or less among the lane candidate regions; And
And a lane configuration area determination unit for determining the lane configuration area by deleting the lane candidate area enclosed by a virtual rectangle whose ratio of short-side length to long-side length exceeds a predetermined ratio.
6. The driving support system according to claim 1,
Wherein the lane area derivation unit comprises:
A coordinate setting unit configured to set a plurality of coordinates on the plurality of lane configuration areas;
A virtual line forming unit connecting the coordinates adjacent to each other to form a plurality of virtual lines;
A curvature calculator for calculating a curvature of each of the plurality of virtual lines; And
And a lane area determination unit configured to determine, as the lane areas, a plurality of virtual lines that are formed with minimum curvature among the plurality of virtual lines and run in parallel with the same curvature.
7. The driving support system according to claim 1,
Wherein the driving support system further comprises a driving support unit for supporting the driving of the vehicle according to an analysis result of the lane departure analysis unit.
8. The driving support system according to claim 7,
Wherein when the lane departure analysis section analyzes that the vehicle is in contact with the lane, the driving support section supports the driving direction change of the vehicle so that the vehicle does not contact the lane.
9. The driving support system according to claim 7,
Wherein when the lane departure analysis section analyzes that the vehicle is in contact with the lane, the driving support section provides the lane departure warning to the driver of the vehicle.
10. The driving support system according to claim 7,
Wherein when the lane departure analysis section analyzes that the vehicle is not in contact with a pair of lanes adjacent to the vehicle, the driving support section supports the constant-speed running or accelerated running of the vehicle.
11. A driving assistance method for photographing the periphery of a vehicle that is traveling or is to travel along a driving route and supporting the driving of the vehicle in accordance with the photographed image, the method comprising:
A photographing step of photographing an image of the surroundings of the vehicle through at least one photographing unit mounted on the vehicle;
Generating a binarized image by binarizing the image so that a lane candidate region of the image is represented by white and a background region outside the lane candidate region is represented by black;
A binarized image filtering step of determining, from the binarized image, a lane configuration area, which is an area corresponding to a lane around the vehicle, by whitening the background area smaller than a predetermined area within the lane candidate area and deleting the lane candidate area smaller than a predetermined area among the lane candidate regions;
A lane area derivation step of deriving a plurality of lane areas by connecting the lane configuration areas adjacent to each other such that curvature is minimized; And
And a lane departure analysis step of analyzing whether the vehicle is in contact with the lane adjacent to the vehicle by determining a relative distance between the vehicle and the plurality of lane areas.
12. The method of claim 11,
Wherein the at least one photographing unit includes a side having a curved shape toward all directions around the vehicle, and a plurality of camera modules for photographing an image around the vehicle on the side.
13. The method of claim 12,
Wherein the at least one photographing unit comprises at least one visible light camera module or an infrared camera module.
14. The method of claim 11,
Wherein the generating the binarized image comprises:
A panorama image generation step of generating a panorama image by combining images photographed through the plurality of photographing units;
A stretching step of stretching the panoramic image to improve contrast of the panoramic image; And
And a binarization processing step of generating the binarized image by dividing the panoramic image having improved contrast into a plurality of unit areas and binarizing the divided unit areas.
15. The method of claim 11,
Wherein the binarizing image filtering step comprises:
A background area filtering step of whitening the background area smaller than a predetermined area within the lane candidate area;
A lane candidate region filtering step of deleting the lane candidate region of a predetermined area or less among the lane candidate regions; And
And a lane configuration area determination step of determining the lane configuration area by deleting the lane candidate area enclosed by a virtual rectangle whose ratio of short-side length to long-side length exceeds a predetermined ratio.
16. The method of claim 11,
The step of deriving the lane area includes:
A coordinate setting step of setting a plurality of coordinates on the plurality of lane configuration areas;
A virtual line forming step of connecting the coordinates adjacent to each other to form a plurality of virtual lines;
A curvature computing step of computing a curvature of each of the plurality of virtual lines; And
And a lane area determination step of determining, as the lane areas, a plurality of virtual lines that are formed with minimum curvature among the plurality of virtual lines and run in parallel with the same curvature.
17. The method of claim 11,
And a driving support step of supporting the driving of the vehicle according to an analysis result of the lane departure analysis step.
18. The method of claim 17,
Wherein when the vehicle is analyzed as being in contact with the lane in the lane departure analysis step, the driving support step supports the driving direction change of the vehicle so that the vehicle does not come into contact with the lane.
19. The method of claim 17,
Wherein the driving support step provides a lane departure warning to the driver of the vehicle when the vehicle is analyzed as being in contact with the lane in the lane departure analysis step.
20. The method of claim 17,
Wherein when the vehicle is analyzed as not being in contact with a pair of the lanes adjacent to the vehicle in the lane departure analysis step, the driving support step supports the constant-speed running or accelerated running of the vehicle.
21. A driving support system for photographing the periphery of a subject vehicle that is traveling or is to travel on a road and supporting the driving of the subject vehicle in accordance with the photographed image, the driving support system comprising:
At least one photographing unit mounted on the subject vehicle and photographing an image around the subject vehicle;
A lane area detecting unit detecting a lane area of the image; And
And a lane departure analysis unit for analyzing whether the lane area is located a predetermined distance or more from the subject vehicle in the image,
Wherein the photographing unit rotates according to the degree of curvature when the lane departure analysis unit analyzes that the plurality of lane areas are not located at a predetermined distance or more from the subject vehicle in the image.
22. The driving support system according to claim 21,
Wherein at least one photographing unit is provided in a front region of the subject vehicle and photographs an image of the surroundings of the subject vehicle from the front region.
23. The driving support system according to claim 22,
Wherein the photographing unit includes at least one of a visible light camera module, an infrared camera module, and a thermal camera module.
24. The driving support system according to claim 21,
Wherein the lane area detecting unit comprises:
A binarized image generating unit for generating a binarized image by binarizing the image so that a lane candidate region of the image is represented by white and a background region outside the lane candidate region is represented by black;
A binarized image filtering unit for determining, from the binarized image, a lane configuration area, which is an area corresponding to a lane around the subject vehicle, by whitening the background area smaller than a predetermined area within the lane candidate area and deleting the lane candidate area smaller than a predetermined area among the lane candidate regions; And
And a lane area derivation unit for deriving a plurality of lane areas by connecting the lane configuration areas adjacent to each other with curvature being minimized.
25. The driving support system according to claim 24,
Wherein the binarized image generating unit comprises:
A stretching unit stretching the image to improve a contrast of the image; And
And a binarization processor for dividing the image having improved contrast into a plurality of unit areas and binarizing the divided unit areas to generate the binarized image.
26. The driving support system according to claim 24,
Wherein the binarization image filtering unit comprises:
A background area filtering unit for whitening the background area smaller than a predetermined area within the lane candidate area;
A lane candidate region filtering unit for deleting the lane candidate region of a predetermined area or less among the lane candidate regions; And
And a lane configuration area determination unit for determining the lane configuration area by deleting the lane candidate area enclosed by a virtual rectangle whose ratio of short-side length to long-side length exceeds a predetermined ratio.
27. The driving support system according to claim 24,
Wherein the lane area derivation unit comprises:
A coordinate setting unit configured to set a plurality of coordinates on the plurality of lane configuration areas;
A virtual line forming unit connecting the coordinates adjacent to each other to form a plurality of virtual lines;
A curvature calculator for calculating a curvature of each of the plurality of virtual lines; And
And a lane area determination unit configured to determine, as the lane areas, a plurality of virtual lines that are formed with minimum curvature among the plurality of virtual lines and run in parallel with the same curvature.
28. The driving support system according to claim 21,
Wherein the lane departure analysis unit compares the curvature with a predetermined reference curvature, and analyzes that the plurality of lane areas are not located a predetermined distance or more from the subject vehicle in the image when the curvature is equal to or greater than the reference curvature.
29. The driving support system according to claim 21,
Wherein the driving support system further includes a rotation control unit,
Wherein the rotation control unit controls the rotation of the photographing unit according to an analysis result of the lane departure analysis unit.
30. The driving support system according to claim 29,
Wherein the rotation control unit controls the rotation of the photographing unit so that the lane area of the image photographed by the photographing unit appears at a predetermined distance or more from the subject vehicle.
31. The driving support system according to claim 30,
Wherein the photographing unit rotates according to the degree of curvature of the plurality of lane areas detected from an image of the road on which the vehicle is expected to run in accordance with the steering direction of the steering wheel of the driver of the subject vehicle.
32. The driving support system according to claim 21,
Wherein the driving support system comprises:
A target object recognition unit for recognizing a target object around the subject vehicle;
A target object analyzing unit for analyzing a distance between the subject vehicle and the target object; And
And a driving support unit for supporting the operation of the subject vehicle according to an analysis result of the target object analyzing unit.
33. The driving support system according to claim 32,
Wherein the driving support unit supports the braking of the subject vehicle when the target object analyzing unit analyzes the subject vehicle as being close to the target object by a predetermined distance or less.
34. The driving support system according to claim 32,
Wherein the driving support unit supports the constant-speed running or accelerated running of the subject vehicle when the target object analyzing unit analyzes the subject vehicle as maintaining a predetermined distance or more from the target object.
35. A driving support method for photographing the periphery of a vehicle that is traveling or is to travel along a driving route and supporting the driving of the vehicle in accordance with the photographed image, the method comprising:
A photographing step of photographing an image of the surroundings of the vehicle through at least one photographing unit mounted on the vehicle;
A lane area detecting step of detecting a lane area in the image; And
And a lane departure analysis step of analyzing whether the lane area is located a predetermined distance or more from the subject vehicle in the image,
Wherein the photographing unit rotates according to the degree of curvature when the plurality of lane areas are analyzed, in the lane departure analysis step, as not being located at a predetermined distance or more from the subject vehicle in the image.
36. The method of claim 35,
Wherein at least one photographing unit is provided in a front region of the vehicle and photographs an image of the vicinity of the vehicle from the front region.
37. The method of claim 36,
Wherein the photographing unit includes at least one of a visible light camera module, an infrared camera module, and a thermal imaging camera module.
38. The method of claim 35,
Wherein the lane area detecting step comprises:
Generating a binarized image by binarizing the image so that a lane candidate region of the image is represented by white and a background region outside the lane candidate region is represented by black;
A binarized image filtering step of determining, from the binarized image, a lane configuration area, which is an area corresponding to a lane around the subject vehicle, by whitening the background area smaller than a predetermined area within the lane candidate area and deleting the lane candidate area smaller than a predetermined area among the lane candidate regions; And
And a lane area deriving step of deriving a plurality of lane areas by connecting the lane configuration areas adjacent to each other such that curvature is minimized.
39. The method of claim 38,
Wherein the generating the binarized image comprises:
A stretching step of stretching the image to improve a contrast of the image; And
And a binarization processing step of generating the binarized image by dividing the image having improved contrast into a plurality of unit areas and binarizing the divided unit areas.
40. The method of claim 38,
Wherein the binarizing image filtering step comprises:
A background area filtering step of whitening the background area smaller than a predetermined area within the lane candidate area;
A lane candidate region filtering step of deleting the lane candidate region of a predetermined area or less among the lane candidate regions; And
And a lane configuration area determination step of determining the lane configuration area by deleting the lane candidate area enclosed by a virtual rectangle whose ratio of short-side length to long-side length exceeds a predetermined ratio.
41. The method of claim 38,
The step of deriving the lane area includes:
A coordinate setting step of setting a plurality of coordinates on the plurality of lane configuration areas;
A virtual line forming step of connecting the coordinates adjacent to each other to form a plurality of virtual lines;
A curvature computing step of computing a curvature of each of the plurality of virtual lines; And
And a lane area determination step of determining, as the lane areas, a plurality of virtual lines that are formed with minimum curvature among the plurality of virtual lines and run in parallel with the same curvature.
42. The method of claim 35,
Wherein the lane departure analysis step compares the curvature with a predetermined reference curvature, and analyzes that the plurality of lane areas are not located a predetermined distance or more from the subject vehicle in the image when the curvature is equal to or greater than the reference curvature.
43. The method of claim 35,
Wherein the driving support method includes a rotation control step of controlling the rotation of the photographing unit according to an analysis result of the lane departure analysis step, and the photographing unit rotates through the control of the rotation control step.
44. The method of claim 43,
Wherein the photographing step rotates the photographing unit according to an analysis result of the lane departure analysis step, and places the plurality of lane areas at a predetermined distance or more from the subject vehicle in the image.
45. The method of claim 44,
Wherein the photographing step rotates the photographing unit according to the degree of curvature of the plurality of lane areas detected from the image of the road on which the vehicle is expected to travel in accordance with the steering direction of the steering wheel of the driver of the subject vehicle.
46. The method of claim 35,
The driving support method includes:
A target object recognition step of recognizing a target object around the subject vehicle;
A target object analysis step of analyzing a distance between the subject vehicle and the target object; And
And a driving support step of supporting the driving of the subject vehicle according to an analysis result of the target object analysis step.
47. The method of claim 46,
Wherein the driving support step supports braking of the subject vehicle when the subject vehicle is analyzed as being close to the target object by a predetermined distance or less in the target object analysis step.
48. The method of claim 46,
Wherein the driving support step supports the constant-speed running or accelerated running of the subject vehicle when the subject vehicle is analyzed, in the target object analysis step, as maintaining a predetermined distance or more from the target object.
KR1020150099051A 2015-07-13 2015-07-13 Driver Assistance System And Method Thereof KR101709402B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150099051A KR101709402B1 (en) 2015-07-13 2015-07-13 Driver Assistance System And Method Thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150099051A KR101709402B1 (en) 2015-07-13 2015-07-13 Driver Assistance System And Method Thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020160099457A Division KR20170008190A (en) 2016-08-04 2016-08-04 Driver Assistance System And Method Thereof

Publications (2)

Publication Number Publication Date
KR20170007961A true KR20170007961A (en) 2017-01-23
KR101709402B1 KR101709402B1 (en) 2017-03-08

Family

ID=57989925

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150099051A KR101709402B1 (en) 2015-07-13 2015-07-13 Driver Assistance System And Method Thereof

Country Status (1)

Country Link
KR (1) KR101709402B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102239734B1 (en) * 2019-04-01 2021-04-12 인천대학교 산학협력단 Moving route creation device and method for autonomous vehicles using around view monitor system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3915621B2 (en) * 2002-07-29 2007-05-16 日産自動車株式会社 Lane mark detector
JP2008030619A (en) * 2006-07-28 2008-02-14 Toyota Motor Corp Kinds-of-road-division-line sorting system and road-division-line recognition sytem
JP2014067136A (en) * 2012-09-25 2014-04-17 Nissan Motor Co Ltd Lane line detector and lane line detection method
JP2014091380A (en) * 2012-11-01 2014-05-19 Toyota Motor Corp Driving support device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200070755A (en) * 2018-12-10 2020-06-18 재단법인대구경북과학기술원 Moving body, particularly agricultural working vehicle and system of controlling the same
WO2020171605A1 (en) * 2019-02-19 2020-08-27 에스케이텔레콤 주식회사 Driving information providing method, and vehicle map providing server and method
KR20200125189A (en) * 2019-04-26 2020-11-04 주식회사 만도 Vehicle control system, apparatus for classifing marker and method thereof
KR102224815B1 (en) * 2019-09-11 2021-03-09 한국광기술원 Glass Beads and Manufacturing Method Thereof and Lane Recognition Apparatus and System of Autonomous Vehicle Using the Same

Also Published As

Publication number Publication date
KR101709402B1 (en) 2017-03-08


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
A107 Divisional application of patent
E701 Decision to grant or registration of patent right
GRNT Written decision to grant