KR101772178B1 - Land mark detecting apparatus and land mark detection method for vehicle - Google Patents


Info

Publication number
KR101772178B1
Authority
KR
South Korea
Prior art keywords
image
vehicle
objects
interest
processor
Prior art date
Application number
KR1020150172239A
Other languages
Korean (ko)
Other versions
KR20170065894A (en)
Inventor
정순홍
서승우
현대진
백일주
박준홍
조병림
Original Assignee
엘지전자 주식회사
서울대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사, 서울대학교산학협력단
Priority to KR1020150172239A
Publication of KR20170065894A
Application granted granted Critical
Publication of KR101772178B1

Classifications

    • G06K9/6201
    • G06K9/00791
    • G06K9/627
    • G06K9/64

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present invention relates to an apparatus and method for detecting a landmark used in a vehicle. A landmark detecting apparatus according to an embodiment of the present invention includes an interface unit for receiving an input image photographed by at least one camera provided in the vehicle, and a processor for performing image processing on the input image provided from the interface unit. The processor generates a binary image corresponding to the input image, labels a plurality of pixels of interest included in the binary image as a plurality of objects spaced apart from each other, classifies the plurality of objects into at least one cluster based on the similarity between the plurality of objects, and extracts candidate regions of interest corresponding to the respective clusters from the input image.

Description

FIELD OF THE INVENTION [0001] The present invention relates to a landmark detecting apparatus and a landmark detection method for a vehicle.

Field of the Invention [0002] The present invention relates to an apparatus and method for detecting a landmark and, more particularly, to an apparatus and method for detecting a landmark from an image provided by a camera mounted on a vehicle.

A vehicle is a device that drives wheels to transport a person or cargo from one place to another. For example, two-wheeled vehicles such as motorcycles, four-wheeled vehicles such as sedans, and even trains belong to the category of vehicles.

In order to increase the safety and convenience of users of vehicles, the development of technologies for connecting various sensors and electronic devices to the vehicle has accelerated. In particular, systems that provide various functions developed for the user's driving convenience (e.g., smart cruise control, lane keeping assistance) are installed in vehicles.

In recent years, the TSR (Traffic Sign Recognition) function, which recognizes traffic signs located in front of the vehicle, has provided useful information to drivers, but technology for detecting landmarks formed on the ground and providing the corresponding information remains insufficient.

In particular, because a landmark is formed on the ground, it often falls outside the driver's field of view, and the driver frequently travels without knowing that the landmark exists.

An object of the present invention is to provide a landmark detecting apparatus and method capable of detecting a landmark formed on the ground from an input image provided from a camera provided in a vehicle.

The problems of the present invention are not limited to the above-mentioned problems, and other problems not mentioned can be clearly understood by those skilled in the art from the following description.

According to an aspect of the present invention, there is provided a landmark detecting apparatus including an interface unit for receiving an input image captured by at least one camera provided in a vehicle, and a processor for performing image processing on the input image provided from the interface unit. The processor generates a binary image corresponding to the input image, labels a plurality of pixels of interest included in the binary image as a plurality of objects spaced apart from each other, classifies the plurality of objects into at least one cluster based on the similarity between the plurality of objects, and extracts candidate regions of interest corresponding to the respective clusters from the input image.

Also, the input image may be an AVM image.

In addition, the processor may convert the input image into a gray image and apply an edge detection filter to the gray image to generate the binary image including the plurality of pixels of interest.

The edge detection filter may be a Difference of Gaussian (DoG) filter configured to detect pixels having a brightness value larger than a preset threshold value among all the pixels of the gray image.
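
As an illustration of this step, the following is a minimal Python/OpenCV sketch of DoG-based binarization, assuming the input is a BGR frame: the image is converted to gray, filtered with the difference of two Gaussian blurs, and thresholded into pixels of interest. The sigma values and the threshold are illustrative, not parameters taken from this document.

```python
import cv2
import numpy as np

def dog_binary_image(bgr_image, sigma_small=1.0, sigma_large=3.0, threshold=20):
    """Gray conversion -> Difference of Gaussians -> binary mask of pixels of interest.

    The two sigmas and the threshold are example values, not taken from the patent.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blur_small = cv2.GaussianBlur(gray, (0, 0), sigma_small)
    blur_large = cv2.GaussianBlur(gray, (0, 0), sigma_large)
    dog = cv2.subtract(blur_small, blur_large)           # band-pass (edge) response
    _, binary = cv2.threshold(dog, threshold, 255, cv2.THRESH_BINARY)
    return binary                                        # pixels of interest are 255
```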

In addition, the processor may remove an object corresponding to noise among the plurality of objects.

The processor may calculate the number of pixels of interest per object and recognize, as the noise, an object whose number of pixels of interest is smaller than a first value or larger than a second value among the plurality of objects, where the second value is greater than the first value.

Also, the processor may generate, for each of the plurality of objects, a bounding box that distinguishes that object from the remaining objects, and recognize, as the noise, an object whose bounding box is smaller than a first size or larger than a second size, where the second size is larger than the first size.
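
A sketch of the two noise criteria above, using OpenCV's connected-component labeling so that each labeled object carries its pixel count and bounding box; the first/second values and sizes below are placeholder thresholds, not values from this document.

```python
import cv2
import numpy as np

def remove_noise_objects(binary, min_pixels=30, max_pixels=5000,
                         min_box=(5, 5), max_box=(200, 200)):
    """Label objects in the binary image and drop those whose pixel count or
    bounding box lies outside the [first, second] ranges (treated as noise)."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = np.zeros_like(binary)
    kept = []
    for i in range(1, num):                               # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        if not (min_pixels <= area <= max_pixels):
            continue                                      # too few / too many pixels
        if not (min_box[0] <= w <= max_box[0] and min_box[1] <= h <= max_box[1]):
            continue                                      # bounding box too small / large
        cleaned[labels == i] = 255
        kept.append(i)
    return cleaned, labels, kept
```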

Also, the processor may sample at least one sampling point for each object, and determine whether an object is adjacent to another object based on connection lines between the sampled points.

The processor may determine whether two objects included in the plurality of objects are adjacent to each other based on the length of a connection line between a sampling point of one of the two objects and a sampling point of the other object.

In addition, the processor may generate the connection lines through a Delaunay triangulation.
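
The adjacency test might look roughly like the sketch below, which builds a Delaunay triangulation over the sampled points (here with SciPy) and keeps only edges that connect different objects and are shorter than a length limit; the length limit is an assumed parameter.

```python
import numpy as np
from scipy.spatial import Delaunay

def adjacent_object_pairs(sample_points, object_ids, max_edge_length=40.0):
    """Return pairs of object ids joined by a sufficiently short Delaunay edge.

    sample_points: (N, 2) array of sampled points; object_ids: object id per point.
    """
    points = np.asarray(sample_points, dtype=float)
    tri = Delaunay(points)
    pairs = set()
    for simplex in tri.simplices:                         # each triangle yields 3 edges
        for a, b in ((0, 1), (1, 2), (2, 0)):
            i, j = simplex[a], simplex[b]
            if object_ids[i] == object_ids[j]:
                continue                                  # same object: not an adjacency edge
            if np.linalg.norm(points[i] - points[j]) <= max_edge_length:
                pairs.add(tuple(sorted((object_ids[i], object_ids[j]))))
    return pairs
```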

When determining that two of the objects are adjacent to each other, the processor may calculate the similarity between the two objects based on at least one of the color, size, and gradient of each of the two objects, and classify the two objects into the same cluster when the similarity between the two objects is equal to or greater than a preset reference value.
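
One possible reading of this clustering rule is sketched below: a similarity score is averaged from color, size, and gradient features, and adjacent objects whose score meets the reference value are merged with a simple union-find. The cosine form, the equal weighting, and the reference value are assumptions, not the formula of the patent.

```python
import numpy as np

def similarity(feat_a, feat_b):
    """Toy similarity from color, size, and gradient feature vectors of two objects."""
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
    return (cosine(feat_a["color"], feat_b["color"])
            + cosine(feat_a["size"], feat_b["size"])
            + cosine(feat_a["gradient"], feat_b["gradient"])) / 3.0

def cluster_objects(features, adjacent_pairs, reference=0.8):
    """Union-find grouping: adjacent objects with similarity >= reference share a cluster."""
    parent = {i: i for i in features}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]                 # path halving
            x = parent[x]
        return x
    for a, b in adjacent_pairs:
        if similarity(features[a], features[b]) >= reference:
            parent[find(a)] = find(b)
    clusters = {}
    for i in features:
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```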

In addition, the processor may classify the category of the candidate region of interest using a Bag of Visual Words (image word dictionary).
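
A Bag of Visual Words classifier for the candidate region categories could be assembled as in the sketch below, with ORB descriptors, a k-means vocabulary, and a linear SVM standing in for the unspecified local feature, dictionary size, and classifier.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def _descriptors(region):
    """ORB descriptors of one candidate region; ORB is only a stand-in local feature."""
    gray = region if region.ndim == 2 else cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    _, desc = cv2.ORB_create().detectAndCompute(gray, None)
    return desc

def bovw_histogram(region, kmeans, n_words):
    """Encode one region as a normalized visual-word histogram."""
    hist = np.zeros(n_words, dtype=np.float32)
    desc = _descriptors(region)
    if desc is not None:
        for w in kmeans.predict(desc.astype(np.float32)):
            hist[w] += 1.0
        hist /= max(hist.sum(), 1.0)
    return hist

def train_bovw_classifier(train_regions, labels, n_words=64):
    """Build the visual vocabulary with k-means, then train a linear SVM on histograms."""
    all_desc = [d for r in train_regions if (d := _descriptors(r)) is not None]
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(
        np.vstack(all_desc).astype(np.float32))
    X = np.array([bovw_histogram(r, kmeans, n_words) for r in train_regions])
    return kmeans, LinearSVC().fit(X, labels)
```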

According to another aspect of the present invention, there is provided a landmark detection method including receiving an input image from a camera provided in a vehicle, generating a binary image based on the intensity of each pixel of the input image, labeling a plurality of pixels of interest included in the binary image as a plurality of objects spaced apart from each other, classifying the plurality of objects into at least one cluster based on the similarity between the plurality of objects, and extracting from the input image a candidate region of interest corresponding to each cluster as a candidate for the landmark.

The generating of the binary image may include converting the input image into a gray image and applying an edge detection filter to the gray image to generate the binary image.

Effects of the landmark detection apparatus and method according to the present invention will be described as follows.

According to at least one of the embodiments of the present invention, a landmark formed on the ground can be detected from an input image provided by a camera mounted on the vehicle. In particular, a landmark formed on the ground can be detected from an input image photographed in various environments, such as an indoor parking lot where the difference in illuminance due to lighting is large or where a flare effect appears.

According to at least one of the embodiments of the present invention, noise is removed based on the size of each object composed of the pixels forming a landmark in the input image (i.e., the pixels of interest), so that the detection accuracy of the landmark can be improved.

Also, according to at least one of the embodiments of the present invention, by classifying the category of the region of interest corresponding to the detected objects, recognition of the landmark indicated by a region of interest of a specific category can be accelerated.

The effects of the present invention are not limited to the effects mentioned above, and other effects not mentioned can be clearly understood by those skilled in the art from the description of the claims.

FIG. 1 shows a block diagram of a vehicle related to the present invention.
FIG. 2 is a view showing the appearance of a vehicle related to the present invention. For convenience of explanation, it is assumed that the vehicle is a four-wheeled vehicle.
FIGS. 3A to 3C are views referred to for describing the external camera described above with reference to FIG. 1.
FIG. 4 shows an example of the vehicle described above with reference to FIG. 1. For convenience of explanation, it is assumed that the vehicle is a four-wheeled vehicle.
FIG. 5 shows an example of an internal block diagram of the control unit shown in FIG. 1.
FIGS. 6A and 6B are views referred to in the description of the operation of the control unit shown in FIG. 5.
FIG. 7 shows an exemplary block diagram of a landmark detection apparatus according to an embodiment of the present invention.
FIG. 8 shows a flowchart of an exemplary process performed by the landmark detection apparatus according to an embodiment of the present invention.
FIG. 9 is a diagram for explaining the concept of a method of generating a binary image using a DoG filter according to an embodiment of the present invention.
FIG. 10 shows an example of another binary image generated by the landmark detection apparatus using the DoG filter according to an embodiment of the present invention.
FIG. 11 is a diagram for explaining a method of classifying pixels of interest into at least one object in a binary image according to an embodiment of the present invention.
FIGS. 12 to 14 show an example of a method of classifying pixels of interest into at least one object in a binary image according to an embodiment of the present invention.
FIG. 15 is a diagram for explaining an example of a method of grouping different objects into at least one cluster according to an embodiment of the present invention.
FIGS. 16A to 16C are diagrams for explaining an example of a method of grouping objects of a binary image according to an embodiment of the present invention.
FIGS. 17A and 17B are diagrams for explaining a method of extracting a region of interest from an input image according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals are used to designate identical or similar elements, and redundant description thereof will be omitted. The suffixes "module" and "part" for the components used in the following description are given or used interchangeably only for ease of writing the specification, and do not by themselves have distinct meanings or roles. In the following description of the embodiments of the present invention, a detailed description of related arts will be omitted when it is determined that such a description may obscure the gist of the embodiments disclosed herein. It should be understood that the accompanying drawings are intended only to facilitate understanding of the embodiments disclosed herein, that the technical idea disclosed herein is not limited by the accompanying drawings, and that all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention are included.

Terms including ordinals, such as first and second, may be used to describe various elements, but the elements are not limited by these terms. Such terms are used only for the purpose of distinguishing one component from another.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. On the other hand, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements. It should also be understood that one component "controlling" another component encompasses not only the one component directly controlling the other component, but also controlling it through the mediation of a third component. Likewise, one element "providing" information or signals to another element encompasses not only providing them directly to the other element, but also providing them through the mediation of a third element.

The singular expressions include plural expressions unless the context clearly dictates otherwise.

In the present application, the terms "comprises", "having", and the like are intended to specify the presence of stated features, numbers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

The vehicle described in the present specification may be a concept including both an internal combustion engine vehicle having an engine as a power source, a hybrid vehicle having an engine and an electric motor as a power source, and an electric vehicle having an electric motor as a power source.

FIG. 1 shows a block diagram of a vehicle 100 related to the present invention.

The vehicle 100 may include a communication unit 110, an input unit 120, a memory 130, an output unit 140, a vehicle driving unit 150, a sensing unit 160, a control unit 170, an interface unit 180, and a power supply unit 190.

The communication unit 110 may include one or more modules that enable wireless communication between the vehicle 100 and an external device (e.g., portable terminal, external server, other vehicle). In addition, the communication unit 110 may include one or more modules that connect the vehicle 100 to one or more networks.

The communication unit 110 may include a broadcast receiving module 111, a wireless Internet module 112, a local area communication module 113, a location information module 114, and an optical communication module 115.

The broadcast receiving module 111 receives broadcast signals or broadcast-related information from an external broadcast management server through a broadcast channel. Here, the broadcast includes a radio broadcast or a TV broadcast.

The wireless Internet module 112 refers to a module for wireless Internet access, and may be built in or externally mounted on the vehicle 100. The wireless Internet module 112 is configured to transmit and receive wireless signals in a communication network according to wireless Internet technologies.

Wireless Internet technologies include, for example, WLAN (Wireless LAN), Wi-Fi (Wireless Fidelity), Wi-Fi Direct, DLNA (Digital Living Network Alliance), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), LTE (Long Term Evolution), and LTE-A (Long Term Evolution-Advanced). The wireless Internet module 112 transmits and receives data according to at least one wireless Internet technology, including Internet technologies not listed above. For example, the wireless Internet module 112 may exchange data wirelessly with an external server. The wireless Internet module 112 can receive weather information and road traffic situation information (for example, TPEG (Transport Protocol Expert Group) information) from an external server.

The short-range communication module 113 is for short-range communication and may support short-range communication using at least one of Bluetooth™, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, NFC (Near Field Communication), Wi-Fi (Wireless-Fidelity), Wi-Fi Direct, and Wireless USB (Wireless Universal Serial Bus) technologies.

The short-range communication module 113 may form short-range wireless communication networks to perform short-range communication between the vehicle 100 and at least one external device. For example, the short-range communication module 113 can wirelessly exchange data with an occupant's portable terminal. The short-range communication module 113 can receive weather information and road traffic situation information (for example, TPEG (Transport Protocol Expert Group) information) from a portable terminal or an external server. For example, when a user boards the vehicle 100, the user's portable terminal and the vehicle 100 can be paired with each other automatically or upon execution of an application by the user.

The position information module 114 is a module for acquiring the position of the vehicle 100, and a representative example thereof is a Global Positioning System (GPS) module. For example, when the vehicle utilizes a GPS module, it can acquire the position of the vehicle using a signal sent from the GPS satellite.

The optical communication module 115 may include a light emitting portion and a light receiving portion.

The light receiving section can convert the light signal into an electric signal and receive the information. The light receiving unit may include a photodiode (PD) for receiving light. Photodiodes can convert light into electrical signals. For example, the light receiving section can receive information of the front vehicle through light emitted from the light source included in the front vehicle.

The light emitting unit may include at least one light emitting element for converting an electric signal into an optical signal. Here, the light emitting element is preferably an LED (Light Emitting Diode). The optical transmitter converts the electrical signal into an optical signal and transmits it to the outside. For example, the optical transmitter can emit the optical signal to the outside through the blinking of the light emitting element corresponding to the predetermined frequency. According to an embodiment, the light emitting portion may include a plurality of light emitting element arrays. According to the embodiment, the light emitting portion can be integrated with the lamp provided in the vehicle 100. [ For example, the light emitting portion may be at least one of a headlight, a tail light, a brake light, a turn signal lamp, and a car light. For example, the optical communication module 115 can exchange data with other vehicles through optical communication.

The input unit 120 may include a driving operation unit 121, a microphone 123, and a user input unit 124.

The driving operation means 121 receives a user input for driving the vehicle 100. The driving operation means 121 may include a steering input means 121a, a shift input means 121b, an acceleration input means 121c and a brake input means 121d.

The steering input means 121a receives a forward direction input of the vehicle 100 from the user. The steering input means 121a may include a steering wheel. According to the embodiment, the steering input means 121a may be formed of a touch screen, a touch pad, or a button.

The shift input means 121b receives inputs of parking (P), forward (D), neutral (N), and reverse (R) of the vehicle 100 from the user. The shift input means 121b is preferably formed in a lever shape. According to an embodiment, the shift input means 121b may be formed of a touch screen, a touch pad, or a button.

The acceleration input means 121c receives an input for acceleration of the vehicle 100 from the user. The brake input means 121d receives an input for decelerating the vehicle 100 from the user. The acceleration input means 121c and the brake input means 121d are preferably formed in the form of a pedal. According to the embodiment, the acceleration input means 121c or the brake input means 121d may be formed of a touch screen, a touch pad, or a button.

The camera 122 is disposed at one side of the interior of the vehicle 100 to generate an indoor image of the vehicle 100. For example, the camera 122 may be disposed at various positions of the vehicle 100, such as the dashboard surface, the roof surface, or the rear view mirror, to photograph a passenger of the vehicle 100. In this case, the camera 122 may generate an indoor image of an area including the driver's seat of the vehicle 100. The camera 122 may also generate an indoor image of an area including the driver's seat and the front passenger seat of the vehicle 100. The indoor image generated by the camera 122 may be a two-dimensional image and/or a three-dimensional image. To generate a three-dimensional image, the camera 122 may include at least one of a stereo camera, a depth camera, and a three-dimensional laser scanner. The camera 122 can provide the generated indoor image to the control unit 170, which is functionally coupled thereto. The camera 122 may be referred to as an 'indoor camera'.

The controller 170 analyzes the indoor image provided from the camera 122 and can detect various objects. For example, the control unit 170 can detect the sight line and / or the gesture of the driver from the portion corresponding to the driver's seat area in the indoor image. As another example, the control unit 170 can detect the sight line and / or the gesture of the passenger from the portion corresponding to the indoor area excluding the driver's seat area in the indoor image. Of course, the sight line and / or the gesture of the driver and the passenger may be detected at the same time.

The microphone 123 can process an external acoustic signal into electrical data. The processed data can be utilized variously according to functions performed in the vehicle 100. The microphone 123 can convert the voice command of the user into electrical data. The converted electrical data may be transmitted to the control unit 170.

The camera 122 or the microphone 123 may be a component included in the sensing unit 160, rather than a component included in the input unit 120.

The user input unit 124 is for receiving information from a user. When information is input through the user input unit 124, the controller 170 may control the operation of the vehicle 100 to correspond to the input information. The user input unit 124 may include a touch input means or a mechanical input means. According to an embodiment, the user input 124 may be located in one area of the steering wheel. In this case, the driver can operate the user input unit 124 with his / her finger while holding the steering wheel.

The input unit 120 may include a plurality of buttons or a touch sensor. It is also possible to perform various input operations through a plurality of buttons or touch sensors.

The sensing unit 160 senses signals related to the driving of the vehicle 100 and the like. To this end, the sensing unit 160 may include a collision sensor, a steering sensor, a speed sensor, a tilt sensor, a weight sensor, a heading sensor, a yaw sensor, a gyro sensor, a position module, a vehicle forward/backward sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor based on steering wheel rotation, a vehicle interior temperature sensor, a vehicle interior humidity sensor, an ultrasonic sensor, an infrared sensor, a radar, a lidar, and the like.

Accordingly, the sensing unit 160 can acquire sensing signals for vehicle collision information, vehicle direction information, vehicle position information (GPS information), vehicle angle information, vehicle speed information, vehicle acceleration information, vehicle tilt information, vehicle forward/backward information, battery information, fuel information, tire information, vehicle lamp information, vehicle interior temperature information, vehicle interior humidity information, steering wheel rotation angle, and the like. Based on external environment information obtained by at least one of the camera, the ultrasonic sensor, the infrared sensor, the radar, and the lidar, the control unit 170 can generate control signals for acceleration, deceleration, direction change, and the like of the vehicle 100. Here, the external environment information may be information related to various objects located within a predetermined distance from the vehicle 100 in motion. For example, the external environment information may include information on the number of obstacles located within a distance of 100 m from the vehicle 100, the distance to each obstacle, the size of each obstacle, the type of each obstacle, and the like.

The sensing unit 160 may further include an accelerator pedal sensor, a pressure sensor, an engine speed sensor, an air flow sensor (AFS), an intake air temperature sensor (ATS), a water temperature sensor (WTS), a throttle position sensor (TPS), a TDC sensor, a crank angle sensor (CAS), and the like.

The sensing unit 160 may include a biometric information sensing unit. The biometric information sensing unit senses and acquires biometric information of a passenger. The biometric information may include fingerprint information, iris-scan information, retina-scan information, hand geometry information, facial recognition information, and voice recognition information. The biometric information sensing unit may include a sensor that senses the passenger's biometric information. Here, the camera 122 and the microphone 123 can operate as such sensors. The biometric information sensing unit can acquire hand geometry information and facial recognition information through the camera 122.

The sensing unit 160 may include at least one camera 161 for photographing the outside of the vehicle 100. The camera 161 may be referred to as an 'external camera'. For example, the sensing unit 160 may include a plurality of cameras 161 disposed at different positions on the exterior of the vehicle. The camera 161 may include an image sensor and an image processing module. The camera 161 can process still images or moving images obtained by the image sensor (e.g., CMOS or CCD). The image processing module may process the still image or moving image obtained through the image sensor, extract necessary information, and transmit the extracted information to the control unit 170.

The camera 161 may include an image sensor (e.g., CMOS or CCD) and an image processing module. In addition, the camera 161 can process still images or moving images obtained by the image sensor. The image processing module can process the still image or moving image obtained through the image sensor. In addition, the camera 161 may acquire an image including at least one of a traffic light, a traffic sign, a pedestrian, another vehicle, and a road surface.

The output unit 140 may include a display unit 141, an acoustic output unit 142, and a haptic output unit 143 for outputting information processed by the control unit 170.

The display unit 141 includes at least one display and can display information processed by the controller 170 on each display. For example, the display unit 141 can display vehicle-related information. Here, the vehicle-related information may include vehicle control information for direct control of the vehicle, or vehicle driving assistance information for a driving guide to the vehicle driver. Further, the vehicle-related information may include vehicle state information indicating the current state of the vehicle or vehicle driving information related to the driving of the vehicle.

The display unit 141 may include at least one of a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a 3D display, and an e-ink display.

The display unit 141 may include at least one display. When the display unit 141 includes a plurality of displays, each display may include a touch screen having a mutual layer structure with a touch sensor or formed integrally with the touch sensor. Further, each of the displays may be disposed at a different position in the interior of the vehicle 100. For example, one of the displays may be disposed on the front passenger side of the dashboard of the vehicle 100, and another display may be disposed on the rear side of the headrest of the driver's seat of the vehicle 100. In one embodiment, the display unit 141 may include a display 200 described below.

The touch screen may function as a user input 124 that provides an input interface between the vehicle 100 and a user and may provide an output interface between the vehicle 100 and a user.

In this case, the display unit 141 may include a touch sensor that senses a touch with respect to the display unit 141 so as to receive a control command by a touch method. When a touch is made to the display unit 141, the touch sensor senses the touch, and the control unit 170 generates a control command corresponding to the touch based on the touch. The content input by the touch method may be a letter or a number, an instruction in various modes, a menu item which can be designated, and the like.

Meanwhile, the display unit 141 may include a cluster so that the driver can check the vehicle state information or the vehicle driving information while driving. Clusters can be located on the dashboard. In this case, the driver can confirm the information displayed in the cluster while keeping the gaze ahead of the vehicle.

Meanwhile, according to an embodiment, the display unit 141 may include a Head Up Display (HUD). The HUD includes a projection module, and the projection module can output display light corresponding to predetermined information toward the windshield under the control of the control unit 170. Accordingly, the user can be provided with a virtual image corresponding to the predetermined information through the windshield.

The sound output unit 142 converts an electric signal from the control unit 170 into an audio signal and outputs the audio signal. For this purpose, the sound output unit 142 may include a speaker or the like. The sound output unit 142 may also output a sound corresponding to the operation of the user input unit 124.

The haptic output unit 143 generates a tactile output. For example, the haptic output unit 143 may vibrate the steering wheel, the seat belt, or the seat so that the user can recognize the output.

The vehicle driving unit 150 can control the operation of various devices of the vehicle. The vehicle driving unit 150 may include a power source driving unit 151, a steering driving unit 152, a brake driving unit 153, a lamp driving unit 154, an air conditioning driving unit 155, a window driving unit 156, an airbag driving unit 157, a sunroof driving unit 158, and a wiper driving unit 159.

The power source driving unit 151 may perform electronic control of the power source in the vehicle 100. The power source driving unit 151 may include an accelerating device for increasing the speed of the vehicle 100 and a decelerating device for decreasing the speed of the vehicle 100.

For example, when a fossil-fuel-based engine (not shown) is the power source, the power source driving unit 151 can perform electronic control of the engine. Thus, the output torque of the engine and the like can be controlled. When the power source is an engine, the speed of the vehicle can be limited by limiting the engine output torque under the control of the control unit 170.

In another example, when the electric motor (not shown) is a power source, the power source drive unit 151 can perform control on the motor. Thus, the rotation speed, torque, etc. of the motor can be controlled.

The steering driving unit 152 may include a steering apparatus. Accordingly, the steering driving unit 152 can perform electronic control of the steering apparatus in the vehicle 100. For example, the steering driving unit 152 may be provided with a steering torque sensor, a steering angle sensor, and a steering motor, and the steering torque applied by the driver to the steering wheel may be sensed by the steering torque sensor. The steering driving unit 152 can control the steering force and the steering angle by changing the magnitude and direction of the current applied to the steering motor based on the speed of the vehicle 100, the steering torque, and the like. In addition, the steering driving unit 152 can determine whether the traveling direction of the vehicle 100 is being adjusted properly based on the steering angle information obtained by the steering angle sensor. Thereby, the traveling direction of the vehicle can be changed. In addition, the steering driving unit 152 can lighten the feel of the steering wheel by increasing the steering force of the steering motor when the vehicle 100 travels at low speed, and make the steering wheel heavier by reducing the steering force of the steering motor when the vehicle 100 travels at high speed. When the autonomous driving function of the vehicle 100 is executed, the steering driving unit 152 may control the steering motor to generate an appropriate steering force based on a sensing signal output by the sensing unit 160 or a control signal provided by the control unit 170, even in a situation where the driver does not operate the steering wheel (e.g., a situation in which the steering torque is not detected).

The brake driving unit 153 may perform electronic control of a brake apparatus (not shown) in the vehicle 100. For example, the speed of the vehicle 100 can be reduced by controlling the operation of the brakes disposed on the wheels. As another example, the traveling direction of the vehicle 100 can be adjusted to the left or right by operating the brakes disposed on the left wheel and the right wheel differently.

The lamp driving unit 154 may control the turn-on / turn-off of at least one or more lamps disposed inside or outside the vehicle. The lamp driver 154 may include a lighting device. Further, the lamp driving unit 154 can control intensity, direction, etc. of light output from each of the lamps included in the lighting apparatus. For example, it is possible to perform control for a direction indicating lamp, a head lamp, a brake lamp, and the like.

The air conditioning driving unit 155 may perform electronic control on an air conditioner (not shown) in the vehicle 100. For example, when the temperature inside the vehicle is high, the air conditioner can be operated to control the cool air to be supplied to the inside of the vehicle.

The window driving unit 156 may perform electronic control of a window apparatus in the vehicle 100. For example, it is possible to control the opening or closing of the left and right windows on the sides of the vehicle.

The airbag driving unit 157 may perform electronic control of the airbag apparatus in the vehicle 100. For example, in case of danger, the airbag can be controlled to deploy.

The sunroof driving unit 158 may perform electronic control of a sunroof apparatus (not shown) in the vehicle 100. For example, the opening or closing of the sunroof can be controlled.

The wiper driving unit 159 may control the wipers 14a and 14b provided on the vehicle 100. For example, when a user input instructing wiper operation is received through the user input unit 124, the wiper driving unit 159 may perform electronic control of the number of operations, the operation speed, and the like of the wipers 14a and 14b in response to the user input. As another example, the wiper driving unit 159 may determine the amount or intensity of rainwater based on a sensing signal of a rain sensor included in the sensing unit 160 and automatically operate the wipers 14a and 14b without user input.

Meanwhile, the vehicle driving unit 150 may further include a suspension driving unit (not shown). The suspension driving unit may perform electronic control of a suspension apparatus (not shown) in the vehicle 100. For example, when the road surface is uneven, the suspension apparatus can be controlled so as to reduce the vibration of the vehicle 100.

The memory 130 is electrically connected to the control unit 170. The memory 130 may store basic data for each unit, control data for controlling the operation of each unit, and input/output data. In hardware, the memory 130 may be one of various storage devices such as a ROM, a RAM, an EPROM, a flash drive, and a hard drive. The memory 130 may store various data for the overall operation of the vehicle 100, such as a program for processing or control by the control unit 170.

The interface unit 180 may serve as a path to various kinds of external devices connected to the vehicle 100. For example, the interface unit 180 may include a port connectable to the portable terminal, and may be connected to the portable terminal through the port. In this case, the interface unit 180 can exchange data with the portable terminal.

The interface unit 180 may receive turn signal information. Here, the turn signal information may be a turn-on signal of the turn signal lamp for a left turn or a right turn input by the user. When a left or right turn signal turn-on input is received through the user input unit (124 in FIG. 1) of the vehicle 100, the interface unit 180 may receive the corresponding left or right turn signal information.

The interface unit 180 may receive vehicle speed information, rotation angle information of the steering wheel, or gear shift information. The interface unit 180 may receive the sensed vehicle speed information, steering wheel rotation angle information, or gear shift information through the sensing unit 160 of the vehicle. Alternatively, the interface unit 180 may receive the vehicle speed information, the steering wheel rotation angle information, or the gear shift information from the control unit 170 of the vehicle. Here, the gear shift information may be information on the state of the shift lever of the vehicle. For example, the gear shift information may be information on whether the shift lever is in park (P), reverse (R), neutral (N), or drive (D).

The interface unit 180 may receive a user input received via the user input unit 124 of the vehicle 100. The interface unit 180 may receive the user input from the input unit 120 of the vehicle 100 or may receive it through the control unit 170.

The interface unit 180 can receive information obtained from an external device. For example, when traffic light change information is received from an external server through the communication unit 110 of the vehicle 100, the interface unit 180 can receive the traffic light change information from the control unit 170.

The control unit 170 can control the overall operation of each unit in the vehicle 100. The control unit 170 may be referred to as an ECU (Electronic Control Unit).

In hardware, the control unit 170 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, and other electronic units for performing functions.

The power supply unit 190 can supply power necessary for the operation of each component under the control of the control unit 170. In particular, the power supply unit 190 can receive power from a battery (not shown) or the like inside the vehicle.

An AVN (Audio Video Navigation) device (not shown) can exchange data with the controller 170. The controller 170 may receive navigation information from the AVN device or another navigation device. Here, the navigation information may include set destination information, route information according to the destination, map information about the vehicle driving, or vehicle location information.

On the other hand, some of the components shown in FIG. 1 may not be essential to the implementation of the vehicle 100. Thus, the vehicle 100 described herein may have more or fewer components than those listed above.

FIG. 2 is a view showing the appearance of the vehicle 100 related to the present invention. For convenience of explanation, it is assumed that the vehicle 100 is a four-wheeled vehicle.

Referring to FIG. 2, the vehicle 100 includes tires 11a to 11d rotated by a power source, a steering wheel 12 for adjusting the traveling direction of the vehicle 100, head lamps 13a and 13b, and wipers 14a and 14b.

The control unit 170 of the vehicle 100 according to an embodiment of the present invention can generate a surrounding image of the vehicle using the camera 161, detect information in the generated surrounding image, and output a control signal based on the detected information to the vehicle driving unit 150. For example, the control unit 170 can control the steering apparatus or the like based on the control signal.

On the other hand, the height H of the vehicle 100 is the length from the ground plane to the highest position of the vehicle body, and can be changed within a predetermined range according to the weight or position of the occupant or the load of the vehicle 100. Further, the vehicle 100 may be separated by a minimum ground clearance G between the lowest point of the vehicle body and the road surface. Thus, the vehicle body can be prevented from being damaged by an object having a height lower than the minimum ground clearance G.

It is also assumed that the distance between the front left and right tires 11a and 11b of the vehicle 100 and the distance between the rear left and right tires 11c and 11d are the same. Hereinafter, it is assumed that the distance between the inside of the front left tire 11a and the inside of the front right tire 11b and the distance between the inside of the rear left tire 11c and the inside of the rear right tire 11d have the same value T.

The overall width O of the vehicle 100 can be defined as the maximum distance between the left end of the vehicle 100 and the right end of the vehicle 100 excluding the side mirror (e.g., electric folding side mirror) as shown in the figure.

FIG. 3A illustrates a case where the camera 161 described above with reference to FIG. 1 is a stereo camera.

Referring to FIG. 3A, the camera 161 may include a first camera 310 having a first lens 311 and a second camera 320 having a second lens 321. The first lens 311 and the second lens 321 are spaced apart from each other by a predetermined distance, so that two different images of the same subject can be obtained at a specific point in time.

The camera 161 may further include a first light shield 312 and a second light shield 322 for shielding part of the light incident on the first lens 311 and the second lens 321, respectively.

The camera 161 in the drawing may be a structure detachably attachable to the ceiling or windshield of the vehicle 100.

The camera 161 can acquire a stereo image of the front of the vehicle from the first and second cameras 310 and 320. Based on the stereo image, disparity information can be obtained, and based on the disparity information, at least one object (e.g., a pedestrian, a traffic light, a road, a lane, another vehicle) appearing in at least one of the stereo images can be detected. After an object is detected, the movement of the object can be continuously tracked.

Referring to FIGS. 3B and 3C, four cameras 161a, 161b, 161c, and 161d may be mounted at different positions on the outer surface of the vehicle 100. Each of the four cameras 161a, 161b, 161c, and 161d may be the same as the camera 161 described above.

Referring to FIG. 3B, the plurality of cameras 161a, 161b, 161c, and 161d may be disposed at the front, left, right, and rear of the vehicle 100, respectively. Each of the plurality of cameras 161a, 161b, 161c, and 161d may be included in the camera 161 shown in FIG.

The front camera 161a may be disposed near the windshield, near the emblem, or near the radiator grill.

The left camera 161b may be disposed in a case surrounding the left side mirror. Alternatively, the left camera 161b may be disposed outside the case surrounding the left side mirror. Alternatively, the left camera 161b may be disposed in one area outside the left front door, the left rear door, or the left fender.

The right camera 161c may be disposed in a case surrounding the right side mirror. Or the right camera 161c may be disposed outside the case surrounding the right side mirror. Alternatively, the right camera 161c may be disposed in one area outside the right front door, the right rear door, or the right fender.

On the other hand, the rear camera 161d may be disposed in the vicinity of a rear license plate or a trunk switch.

The respective images photographed by the plurality of cameras 161a, 161b, 161c, and 161d are transmitted to the control unit 170, and the control unit 170 may synthesize the respective images to generate a peripheral image of the vehicle.

Although FIG. 3B shows four cameras mounted on the outer surface of the vehicle 100, the present invention is not limited to this number of cameras, and more or fewer cameras may be mounted at positions different from those shown in FIG. 3B.

Referring to FIG. 3C, the composite image 400 may include a first image area 401 corresponding to an external image photographed by the front camera 161a, a second image area 402 corresponding to an external image photographed by the left camera 161b, a third image area 403 corresponding to an external image photographed by the right camera 161c, and a fourth image area 404 corresponding to an external image photographed by the rear camera 161d. The composite image 400 may be referred to as an around view monitoring (AVM) image.

When the composite image 400 is generated, boundary lines 411, 412, 413, and 414 are generated between any two of the external images included in the composite image 400. These boundary portions can be displayed naturally by image blending processing.

On the other hand, the boundary lines 411, 412, 413, and 414 may be displayed at the boundaries between the plurality of images. In addition, a predetermined image indicating the vehicle 100 may be included in the center of the composite image 400.
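
For orientation, a composition of this kind can be sketched as follows: each camera image is warped into a common top-down canvas with a precomputed homography (obtained from camera calibration, which is not shown here) and the seams near the boundary lines are feather-blended. The homographies, the canvas size, and the blur width are assumptions, not values from this document.

```python
import cv2
import numpy as np

def compose_avm(front, left, right, rear, homographies, canvas_size=(400, 400)):
    """Warp the four camera images into one top-down canvas and feather-blend the seams."""
    h, w = canvas_size
    acc = np.zeros((h, w, 3), np.float32)
    weight = np.zeros((h, w, 1), np.float32)
    for img, H in zip((front, left, right, rear), homographies):
        warped = cv2.warpPerspective(img, H, (w, h)).astype(np.float32)
        mask = cv2.warpPerspective(np.ones(img.shape[:2], np.float32), H, (w, h))
        mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]   # feather near the boundaries
        acc += warped * mask
        weight += mask
    return (acc / np.maximum(weight, 1e-6)).astype(np.uint8)
```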

Further, the composite image 400 may be displayed on a display device mounted in the interior of the vehicle 100.

FIG. 4 shows an example of the vehicle 100 described above with reference to FIG. 1. For convenience of explanation, it is assumed that the vehicle 100 is a four-wheeled vehicle.

Referring to FIG. 4, the vehicle 100 may include at least one radar 162, at least one lidar 163, and at least one ultrasonic sensor 164.

The radar 162 may be mounted on one side of the vehicle 100, emit electromagnetic waves toward the periphery of the vehicle 100, and receive electromagnetic waves reflected from various objects existing around the vehicle 100. For example, the radar 162 can measure the time of flight of an electromagnetic wave reflected by an object and acquire information related to the distance, direction, altitude, and the like of the object.

The lidar 163 is mounted on one side of the vehicle 100 and can emit laser light toward the periphery of the vehicle 100. The laser light emitted by the lidar 163 may be scattered or reflected back to the vehicle 100, and based on the changes in the return time, intensity, frequency, and polarization of the laser light, the lidar 163 can obtain information on physical characteristics such as the distance, speed, and shape of a target located in the periphery of the vehicle 100.

The ultrasonic sensor 164 is mounted on one side of the vehicle 100 to generate ultrasonic waves toward the periphery of the vehicle 100. Ultrasonic waves generated by the ultrasonic sensor 164 have a high frequency (about 20 kHz or more) and a short wavelength. The ultrasonic sensor 164 can be used mainly to recognize obstacles close to the vehicle 100.

The radar 162, the lidar 163, and the ultrasonic sensor 164 shown in FIG. 4 may be sensors included in the sensing unit 160 shown in FIG. 1. It is also apparent to those skilled in the art that, depending on the embodiment, the radar 162, the lidar 163, and the ultrasonic sensor 164 may be mounted in different numbers at positions different from those shown in FIG. 4.

FIG. 5 shows an example of an internal block diagram of the control unit 170 shown in FIG. 1.

Referring to FIG. 5, the control unit 170 may include an image preprocessing unit 510, a disparity calculating unit 520, a segmentation unit 532, an object detecting unit 534, an object verification unit 536, an object tracking unit 540, and an application unit 550.

The image preprocessor 510 receives an image provided from the cameras 161 and 122 shown in FIG. 1 and can perform preprocessing.

In particular, the image preprocessing unit 510 may perform noise reduction, rectification, calibration, color enhancement, color space conversion (CSC), interpolation, camera gain control, and the like on the image. Thus, an image clearer than the stereo image photographed by the cameras 161 and 122 can be obtained.

The disparity calculating unit 520 receives the images signal-processed by the image preprocessing unit 510, performs stereo matching on the received images, and obtains a disparity map based on the stereo matching. That is, disparity information about the stereo image of the front of the vehicle can be obtained.

At this time, the stereo matching may be performed on a pixel-by-pixel basis or on a predetermined block basis of the stereo images. Meanwhile, the disparity map may mean a map in which the binocular parallax information of the stereo images, i.e., the left and right images, is expressed numerically.
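
As a minimal illustration, a disparity map of the kind described here can be computed with OpenCV's semi-global block matcher; the parameters below are typical defaults rather than values from this document.

```python
import cv2

def disparity_map(left_gray, right_gray):
    """Stereo matching sketch: returns a float disparity map from two rectified gray images."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,    # must be a multiple of 16
                                    blockSize=9)
    disp = matcher.compute(left_gray, right_gray)         # fixed-point result, scaled by 16
    return disp.astype('float32') / 16.0
```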

The segmentation unit 532 may perform segmentation and clustering on at least one of the images based on the disparity information from the disparity calculating unit 520.

Specifically, the segmentation unit 532 can separate the background and the foreground for at least one of the stereo images based on the disparity information.

For example, an area in which the disparity information is equal to or less than a predetermined value within the disparity map can be calculated as the background, and the corresponding part can be excluded. Thereby, the foreground can be relatively separated.

As another example, an area in which the disparity information is equal to or greater than a predetermined value within the disparity map can be calculated as the foreground, and the corresponding part can be extracted. Thereby, the foreground can be separated.

Thus, by separating the foreground and the background based on the disparity information extracted from the stereo image, it is possible to shorten the signal processing time, reduce the amount of signal processing, and the like in the subsequent object detection.
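
The foreground/background rule of the two preceding examples reduces to a single threshold on the disparity map, roughly as in this sketch; the threshold value is illustrative.

```python
import numpy as np

def split_foreground(disparity, threshold=10.0):
    """Disparities at or above the preset value are foreground, lower values background."""
    foreground_mask = disparity >= threshold
    background_mask = ~foreground_mask
    return foreground_mask, background_mask
```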

Next, the object detecting unit 534 can detect an object based on the image segments from the segmentation unit 532.

That is, the object detecting unit 534 can detect an object for at least one of the images based on the disparity information.

Specifically, the object detecting unit 534 can detect an object for at least one of the images. For example, an object can be detected from a foreground separated by an image segment.

Next, the object verification unit 536 classifies and verifies the separated object.

For this purpose, the object verification unit 536 may use an identification method using a neural network, a Support Vector Machine (SVM) method, an AdaBoost identification method using Haar-like features, a Histograms of Oriented Gradients (HOG) method, or the like.
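
As one concrete instance of such a verification step, the sketch below uses OpenCV's HOG descriptor with its pretrained pedestrian SVM; the neural-network and AdaBoost variants named above would occupy the same place in the pipeline.

```python
import cv2

def verify_pedestrians(image_bgr):
    """Detect and verify pedestrian-like objects with HOG features and a linear SVM."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, weights = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    return [(x, y, w, h) for (x, y, w, h) in boxes]
```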

On the other hand, the object verification unit 536 can verify the detected objects by comparing them with the objects stored in the memory 130.

For example, the object identifying unit 536 can identify nearby vehicles, lanes, roads, signs, hazardous areas, tunnels, etc. located in the vicinity of the vehicle.

The object tracking unit 540 may perform tracking on the verified objects. For example, it can sequentially verify objects in the acquired stereo images, calculate the motion or motion vector of each verified object, and track the movement of the object based on the calculated motion or motion vector. Accordingly, it is possible to track nearby vehicles, lanes, road surfaces, signs, dangerous areas, tunnels, and the like located in the vicinity of the vehicle.
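
A motion-vector-based tracker of the kind described can be approximated with Lucas-Kanade optical flow between consecutive frames, as in the sketch below; the tracked points are assumed to be sampled from the verified objects.

```python
import cv2
import numpy as np

def track_object_points(prev_gray, next_gray, prev_points):
    """Compute per-point motion vectors between two consecutive gray frames."""
    prev_pts = np.float32(prev_points).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None)
    motion = {i: (next_pts[i, 0] - prev_pts[i, 0])        # motion vector of point i
              for i, ok in enumerate(status.ravel()) if ok}
    return next_pts, motion
```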

Next, the application unit 550 can calculate the degree of risk to the vehicle 100 and the like based on various objects (e.g., other vehicles, lanes, road surfaces, signs) located around the vehicle 100. It is also possible to calculate the possibility of a collision with a preceding vehicle, whether the vehicle is slipping, and the like.

Then, the application unit 550 can output a message or the like for notifying the user of such information as vehicle driving assistance information, based on the calculated degree of risk, possibility of collision, slipping, or the like. Alternatively, a control signal for attitude control or traveling control of the vehicle 100 may be generated as vehicle control information.

The control unit 170 may include an image preprocessing unit 510, a disparity calculating unit 520, a segmentation unit 532, an object detecting unit 534, an object verification unit 536, an object tracking unit 540, and an application unit 550, as shown in FIG. 5, but some of these may be omitted depending on the embodiment. For example, if the cameras 161 and 122 provide only two-dimensional images, the disparity calculating unit 520 may be omitted.

FIGS. 6A and 6B are diagrams referred to in the description of the operation of the control unit 170 shown in FIG. 5.

FIGS. 6A and 6B are diagrams for explaining the operation method of the control unit 170 of FIG. 5, based on the stereo images obtained in the first and second frame periods, respectively.

First, referring to FIG. 6A, when the camera 161 is a stereo camera, the camera 161 acquires a stereo image during a first frame period.

The disparity calculating unit 520 in the control unit 170 receives the stereo images FR1a and FR1b signal-processed by the image preprocessing unit 510, performs stereo matching on the received stereo images FR1a and FR1b, and obtains a disparity map 620.

The disparity map 620 expresses the parallax between the stereo images FR1a and FR1b in levels. The higher the disparity level, the closer the distance to the vehicle can be calculated to be, and the lower the disparity level, the farther the distance can be calculated to be.

On the other hand, when such a disparity map is displayed, it may be displayed with higher luminance as the disparity level becomes larger and lower luminance as the disparity level becomes smaller.

In the figure, the first to fourth lanes 628a, 628b, 628c, and 628d, the construction area 622, the first preceding vehicle 624, and the second preceding vehicle 626 each have corresponding disparity levels in the disparity map 620.

The segmentation unit 532, the object detecting unit 534, and the object verification unit 536 perform segmentation, object detection, and object verification for at least one of the stereo images FR1a and FR1b based on the disparity map 620.

In the figure, object detection and verification for the second stereo image FR1b are performed using the disparity map 620.

That is, object detection and verification may be performed on the first to fourth lanes 638a, 638b, 638c, and 638d, the construction area 632, the first preceding vehicle 634, and the second preceding vehicle 636 in the image 630.

Next, referring to FIG. 6B, during the second frame period, the stereo camera 161 acquires a stereo image.

The disparity calculating unit 520 in the control unit 170 receives the stereo images FR2a and FR2b signal-processed by the image preprocessing unit 510, performs stereo matching on the received stereo images FR2a and FR2b, and obtains a disparity map 640.

In the figure, the first to fourth lanes 648a, 648b, 648c, and 648d, the construction area 642, the first preceding vehicle 644, and the second preceding vehicle 646 each have corresponding disparity levels in the disparity map 640.

The segmentation unit 532, the object detecting unit 534, and the object verification unit 536 perform segmentation, object detection, and object verification for at least one of the stereo images FR2a and FR2b based on the disparity map 640.

In the figure, using the disparity map 640, object detection and confirmation for the second stereo image FR2b is performed.

That is, the first to fourth lanes 658a, 658b, 658c, and 658d, the construction area 652, the first forward vehicle 654, and the second forward vehicle 656 in the image 650 are used for object detection and Verification can be performed.

On the other hand, the object tracking unit 540 may compare FIG. 6A and FIG. 6B to perform tracking on the identified objects.

Specifically, the object tracking unit 540 can track the movement of the object, based on the motion or motion vector of each object identified in FIGS. 6A and 6B. Accordingly, it is possible to perform tracking on the lane, the construction area, the first forward vehicle, the second forward vehicle, and the like, which are located in the vicinity of the vehicle.
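As a rough illustration of this kind of frame-to-frame tracking, the sketch below associates object centroids detected in consecutive frames by nearest-neighbor matching and derives a motion vector for each matched object; the centroid representation, the distance limit, and the example coordinates are assumptions, not the disclosed tracker.

# Illustrative sketch: track identified objects between two frames by centroid matching.
import numpy as np

def track(prev_centroids, curr_centroids, max_dist=30.0):
    """Return {prev_index: (curr_index, motion_vector)} for matched objects."""
    matches = {}
    for i, p in enumerate(prev_centroids):
        dists = [np.linalg.norm(np.asarray(c) - np.asarray(p)) for c in curr_centroids]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:                    # reject implausible jumps
            matches[i] = (j, np.asarray(curr_centroids[j]) - np.asarray(p))
    return matches

# e.g. centroids of the lane / construction area / forward vehicles in FIG. 6A vs FIG. 6B
print(track([(100, 200), (300, 220)], [(104, 198), (310, 225)]))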

FIG. 7 shows an exemplary block diagram of a landmark detection apparatus 200 according to an embodiment of the present invention.

Referring to FIG. 7, the landmark detection apparatus 200 may include an interface unit 210, a memory 220, a processor 230, and a power supply unit 240.

The interface unit 210 may receive data from another unit included in the vehicle 100 or may transmit signals processed or generated by the processor 230 to the outside. For example, the interface unit 210 may transmit and receive data to and from at least one of the input unit 120, the output unit 140, the sensing unit 160, the vehicle driving unit 150, and the control unit 170 of the vehicle 100 via wired or wireless communication.

The interface unit 210 may receive sensing information on a mark and a landmark formed on the road on which the vehicle is located. The sensing information may be generated by the sensing unit 160 and the interface unit 210 may receive the sensing information from the sensing unit 160 or the controller 170.

The sensing information for the landmark may include an input image photographed by the camera 161. In this case, the input image may be an original color image photographed by the camera 161, or an AVM image converted into a view in which the original image is seen from above. The AVM image may be a type of the composite image 400 described above with reference to FIG.

The interface unit 210 can receive information about at least one object existing around the vehicle 100. For example, when an object such as a pedestrian or another vehicle positioned adjacent to the landmark is detected by the sensing unit 160, the interface unit 210 may receive information about the detected object's position, moving speed, and the like.

The memory 220 may store various data for the operation of the landmark detection apparatus 200, such as programs for the processing or control of the processor 230.

Such memory 220 may include at least one of various hardware storage media such as ROM, RAM, EPROM, flash drive, hard drive, etc. to store the various data.

The processor 230 can control the overall operation of each component included in the landmark detection apparatus 200. Also, according to an embodiment, the processor 230 may control the operation of the components of the vehicle 100 that are connected through the interface unit 210. For example, the processor 230 may control the display 141 included in the output unit 140 to display parking related information.

The processor 230 may be implemented as one or more of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, and other electronic units for performing such functions.

The processor 230 can detect an area where a landmark is formed, based on information or data provided from the sensing unit 160, and generate a candidate region of interest for the detected area.

In one embodiment, when the input image is received by the interface unit 210, the processor 230 can preprocess the input image to detect at least one landmark formed within a predetermined range of the road on which the vehicle 100 is located.

Specifically, the processor 230 may detect edges (or contour lines) from the input image at which a predetermined degree of brightness change appears. The landmark is formed on the road in a color such as white or yellow so as to be easily distinguished from other objects. The processor 230 can scan the input image in at least one direction to detect contours appearing in the input image. For example, the processor 230 may convert the input image into a gray image and apply an edge detection filter to the gray image to generate a binary image. As the edge detection filter, a DoG (Difference of Gaussian) filter can be used. Accordingly, in the binary image, the pixels of interest (i.e., the pixels constituting the edges), which are pixels having brightness values larger than a preset threshold value, can be expressed in a color (e.g., white) different from that of the remaining pixels.

In addition, the processor 230 may label a plurality of pixels of interest included in the binary image with a plurality of objects spaced apart from each other. For example, the processor 230 may search for adjacent pixels of interest by applying a predetermined mask to each of the pixels of interest, and label adjacent pixels of interest with values different from those of interest pixels that are not adjacent to each other. Accordingly, the pixels of interest labeled with the same value can be classified as the same object. At this time, each object may be composed of at least one pixel of interest.

In addition, the processor 230 may remove an object corresponding to noise from among a plurality of objects classified through labeling.

In one embodiment, the processor 230 may remove, from among the plurality of objects, an object having a size less than a predetermined value from the binary image. In one example, the processor 230 may recognize and remove an object that includes a number of pixels of interest less than the first value as noise. At this time, the processor 230 may use a speckle filter. As another example, the processor 230 may create a bounding box of a predetermined shape (e.g., a rectangle) surrounding each object, and may recognize and remove objects surrounded by the bounding box smaller than the first size as noise. Accordingly, an object corresponding to an overly small-sized mark formed by a foreign object, a shadow, etc. of the road can be removed.

In one embodiment, the processor 230 may remove, from the binary image, an object having a size exceeding a predetermined value among the plurality of objects. In one example, the processor 230 may recognize and remove, as noise, an object that contains a larger number of pixels of interest than a second value (> the first value). In another example, the processor 230 creates a bounding box of a predetermined shape (e.g., a rectangle) surrounding each object, and recognizes and removes, as noise, an object surrounded by a bounding box larger than a second size (> the first size). Accordingly, an object corresponding to an oversized marking such as a parking line on the road can be eliminated.

In addition, the processor 230 may classify the plurality of objects into at least one cluster. At this time, the plurality of objects classified into the cluster may be the remaining objects excluding the object corresponding to the noise. Specifically, the processor 230 may sample a plurality of points from each of the plurality of objects, and determine whether the object is adjacent to another object by using the sampled points.

For example, the processor 230 may generate connection lines between points sampled in one object and points sampled in another object. At this time, the connection lines may be generated through Delaunay triangulation.

The processor 230 calculates the distance between two objects based on the length of at least one of the generated connection lines. If the distance between the two objects is less than the preset reference distance, the processor 230 can determine that the two objects are adjacent to each other. Alternatively, the processor 230 may determine that two objects are adjacent to each other if two objects are directly connected by at least one of the connecting lines.

If it is determined that one object and another object are adjacent to each other, the processor 230 calculates the similarity between the two objects determined to be adjacent to each other, and classifies them into clusters based on the calculated similarity. At this time, the degree of similarity between the two objects may be calculated based on at least one of the hue, the size, and the gradient of each of the two objects. For example, the processor may classify two objects into the same cluster if the similarity obtained by applying the same or different weights to the color difference, the size difference, and the gradient difference between the two objects satisfies a preset reference value.

When the cluster classification is completed through the above-described method, the processor 230 can set the candidate region of interest for each classified cluster on the input image. In other words, the processor 230 may extract a candidate ROI corresponding to each cluster from the input image.

The processor 230 may then determine whether each candidate region of interest is a valid region of interest or an invalid region of interest. Here, the valid region of interest refers to a region including a mark that provides useful information to the vehicle or the driver. At this time, the processor 230 can determine whether each candidate region of interest is a valid region of interest, or classify its category, using an image word dictionary (Bag of Visual Words).

For example, among the candidate regions of interest detected from the marks formed on the road, a region corresponding to a mark intentionally formed to provide the driver with information related to the running of the vehicle, such as a direction-guiding arrow, can be determined as a valid region of interest, while a region corresponding to a mark formed unintentionally, for example by a manhole or a shadow, can be judged as an invalid region of interest.

Together or separately, the processor 230 may classify the category of each valid region of interest. That is, the processor 230 can determine whether a specific valid region of interest is composed of characters, numbers, or symbols. For example, the processor 230 may calculate key points and descriptors for the feature points of each valid region of interest through a Scale-Invariant Feature Transform (SIFT), and input them to a pre-trained SVM (Support Vector Machine) to classify the category of each valid region of interest.

The power supply unit 240 can supply power necessary for the operation of each component under the control of the processor 230. The power supply unit 240 can be supplied with power from a battery or the like inside the vehicle 100.

The operation of the landmark detecting apparatus 200 is not limited to the embodiments described above with reference to FIG. 7, and will be described in more detail with reference to the following drawings.

FIG. 8 shows a flow chart of an exemplary process S800 performed by the landmark detection apparatus 200 according to an embodiment of the present invention.

In step S810, the landmark detection apparatus 200 can enter the landmark detection mode. Specifically, the processor 230 may enter the landmark detection mode upon occurrence of a preset event. In this case, the event is data or information related to a specific situation predetermined so that the processor 230 enters the landmark detection mode, and information on whether or not the event has occurred may be received from the input unit 120, the sensing unit 160, or the control unit 170 via the interface unit 210.

For example, the predetermined event may be (i) a reception event of a user input (e.g., voice, touch, click, gesture) indicating entry into the landmark detection mode, (ii) ..., or (iii) an event that the start-up of the vehicle is turned on.

However, it is needless to say that the types of events determined in advance for entry into the landmark detection mode are not limited to the above-mentioned examples, and other types of events can be predetermined. Also, step S810 may be omitted depending on the embodiment.

In step S820, the landmark detection apparatus 200 can receive an input image provided from the vehicle 100. In one embodiment, the processor 230 may request the control unit 170 to provide an input image through the interface unit 210 when entering the landmark detection mode. In response to the request from the processor 230, the controller 170 may provide, to the landmark detection apparatus 200, the external images captured by the cameras 161a through 161d (see FIG. 3) in different directions, or an AVM image obtained by converting the plurality of external images into a top view.

In step S830, the landmark detection apparatus 200 can generate a binary image using the input image. Specifically, the processor 230 may convert the input image into a gray image, and generate a binary image displayed such that the edge regions appearing in the gray image are distinguished from the remaining regions. For example, the processor 230 may generate a DoG image, which is a binary image, by applying a DoG (Difference of Gaussian) filter, which is one of the edge detection filters, to the gray image. The DoG image expresses the pixels of interest, which are pixels having a brightness value larger than a predetermined threshold value among all the pixels of the gray image, in a different color from the remaining pixels.

In step S840, the landmark detection apparatus 200 may label the pixels of interest included in the binary image with a plurality of objects. Specifically, the processor 230 may assign the same value to adjacent pixels among the pixels of interest, and may assign different values to pixels that are not adjacent to each other. Accordingly, the pixels of interest included in the specific object have the same specific value, and the pixels of interest included in the object different from the specific object have different values from the specific value.

In addition, the processor 230 may recognize, as noise, an object that does not satisfy a predetermined condition among the plurality of objects and remove it. For example, as described above, (i) when the number of pixels of interest included in a specific object is less than the first value or exceeds the second value (> the first value), or (ii) when the size of the bounding box surrounding the specific object is smaller than the first size or exceeds the second size (> the first size), the specific object can be recognized as noise and removed.

In step S850, the landmark detection apparatus 200 may classify the plurality of objects into at least one cluster. For example, the processor 230 creates connection lines between the plurality of objects using the Delaunay triangulation method and, for any two of the plurality of objects, calculates the similarity between the two objects if the predetermined conditions are satisfied, namely (i) there exists a connection line directly connecting the two objects, and (ii) the length of the direct connection line connecting the two objects is less than a reference length. If the similarity between the two objects is equal to or greater than the reference value, the processor 230 can classify the two objects into the same cluster. The processor 230 may classify the plurality of objects into at least one cluster by applying the predetermined conditions to each of the plurality of objects.

In step S860, the landmark detection apparatus 200 may extract candidate regions of interest corresponding to the classified clusters from the input image. That is, the processor 230 may set a candidate region of interest surrounding the portion corresponding to each of the classified clusters in the input image.

In step S870, the landmark detection apparatus 200 can classify the category of each candidate region of interest. Specifically, the processor 230 determines whether the mark indicated by a candidate region of interest is a landmark intentionally created to provide useful information to the driver (i.e., whether the candidate region of interest is a valid region of interest), and whether it is composed of letters, numbers, symbols, or a combination of two or more thereof.

FIG. 9 is a diagram referred to for explaining the concept of a method by which the landmark detection apparatus 200 according to an embodiment of the present invention generates a binary image using a DoG filter.

Referring to FIG. 9, the processor 230 may convert an input image into a gray image 910 and apply a DoG filter to the gray image 910 to generate a DoG image 920, which is a binary image.

Specifically, the DoG filter may flatten (smooth) the gray image 910 using the Gaussian function G_σ(x, y) represented by Equation (1) below.

G_σ(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))   … (1)

Here, σ may be a predetermined variance value for flattening the gray image 910. The degree to which the gray image 910 is flattened is determined according to the magnitude of σ.

For example, assuming that the gray image 910 is represented by I (x, y), the gray image 910 can be flattened by the following equation (2).

g_σ(x, y) = G_σ(x, y) * I(x, y)   … (2)

That is, according to Equation (2), the convolution between the Gaussian function G_σ(x, y) and the gray image 910, I(x, y), is performed to generate the flattened image g_σ(x, y) corresponding to σ.

The DoG filter can generate DoG (x, y) which is a DoG image 920 from I (x, y) which is a gray image 910 using Equation (3).

DoG(x, y) = g_σ1(x, y) − g_σ2(x, y)   … (3)

That is, DoG(x, y) may be the image 920 corresponding to the difference between the flattened image 912, g_σ1(x, y), obtained with the first variance value σ1, and the flattened image g_σ2(x, y) obtained with the second variance value σ2.

Accordingly, as shown in the figure, in the DoG image 920, pixels of interest greater than a predetermined brightness value in the gray image 910 may be represented in a different color from the rest of the pixels.
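A minimal sketch of this DoG-based binarization, assuming OpenCV is available, is shown below; the two sigma values and the threshold are illustrative assumptions only.

# Illustrative sketch: generate a binary (DoG) image following Equations (1) to (3).
import cv2
import numpy as np

gray = cv2.imread("gray_910.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # I(x, y)

g1 = cv2.GaussianBlur(gray, (0, 0), sigmaX=1.0)   # g_sigma1 = G_sigma1 * I, Equation (2)
g2 = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)   # g_sigma2 = G_sigma2 * I
dog = g1 - g2                                     # DoG(x, y), Equation (3)

# Pixels whose DoG response exceeds a preset threshold become pixels of interest (white).
threshold = 5.0
binary = np.where(dog > threshold, 255, 0).astype(np.uint8)
cv2.imwrite("dog_920.png", binary)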

FIG. 10 shows an example of another binary image generated by the landmark detecting apparatus 200 using the DoG filter according to an embodiment of the present invention.

FIG. 10 (a) illustrates an input image 1000 provided at a specific time when the vehicle is traveling, and FIG. 10 (b) illustrates a gray image 1010 of the input image 1000. As the camera 161 photographs the ground on which the vehicle is located, marks formed on the ground surface may appear in the input image 1000 and the gray image 1010. For example, referring to the gray image 1010, a 'parking line' mark 1011 is formed on the right side of the vehicle, a 'one way' mark 1012 is formed on the rear side of the vehicle, and an arrow mark 1013 in the form of a symbol, an 'exit' mark 1014 in the form of letters, and a 'lane' mark 1015 in the form of a line may be formed on the left side of the vehicle. In addition, a mark 1016 formed by a manhole is present in front of the vehicle, and marks due to other foreign substances, shadows, illumination, and the like can be formed.

FIG. 10C illustrates a DoG image 1020 of the gray image 1010 shown in FIG. 10B. That is, the image 1020 obtained by applying the DoG filter described above with reference to FIG. 9 to the gray image 1010 is shown. In the DoG image 1020, the marks 1011 to 1016 described above may appear in a first color (e.g., white) and the remaining areas may appear in a second color (e.g., black).

On the other hand, in a binary image such as the DoG image 1020, an object based on a mark that is not a landmark may be included as noise due to the characteristics of an edge detection filter such as the DoG filter. For example, in the DoG image 1020 shown in FIG. 10, the objects corresponding to the 'parking line' mark 1011 and the objects corresponding to the mark 1016 formed by the manhole may be noise. It is necessary to appropriately remove such noise. In this regard, the description will be continued with reference to the following figures.

FIG. 11 is a diagram for explaining a method by which the landmark detection apparatus 200 according to an embodiment of the present invention classifies pixels of interest in a binary image into at least one object.

FIG. 11 (a) illustrates pixels included in one region of the binary image. As shown, the binary image may consist of pixels of interest P1 and non-interest pixels P2. Here, the non-interest pixels P2 may mean pixels other than the pixels of interest P1 among all the pixels constituting the binary image. Each of the pixels of interest P1 may be represented by a first color, and each of the non-interest pixels P2 may be represented by a second color.

The processor 230 can determine whether there is another pixel of interest (hereinafter referred to as a neighboring pixel) adjacent to a pixel of interest P1 using a predetermined mask, and assign the same value to neighboring pixels. This process can be applied to all the pixels of the binary image, so that the plurality of pixels of interest P1 can be classified into at least one object.

Fig. 11 (b) illustrates the result of classifying the 11 interest pixels shown in Fig. 11 (a) using the first mask. For example, the first mask M1 may be a mask formed to search for another pixel of interest connected in four directions (e.g., up, down, left, and right) with respect to a pixel of interest. The processor 230 may detect a neighboring pixel while applying the first mask M1 to the pixel of interest P1. Accordingly, the 11 pixels of interest P1 can be classified into an object composed of interest pixels assigned '1' and an object composed of interest pixels assigned '2'.

FIG. 11 (c) illustrates a result of classifying the 11 pixels of interest shown in FIG. 11 (a) by using the second mask, unlike FIG. 11 (b). For example, the second mask M2 may be a mask formed to look for other pixels of interest connected in eight directions (e.g., up, down, left, right, and diagonal) with respect to any one pixel of interest. The processor 230 may detect a neighboring pixel while applying the second mask M2 to the pixel of interest P1. Accordingly, the same value can be allotted to all of the 11 pixels of interest P1, and they can be classified into a single object.
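The first and second masks correspond to 4-connectivity and 8-connectivity in standard connected-component labeling. The sketch below, which uses scipy.ndimage as an assumed substitute for the mask-scanning procedure described above, reproduces the difference between the two masks.

# Illustrative sketch: label pixels of interest into objects with 4- or 8-connectivity.
import numpy as np
from scipy import ndimage

binary = np.array([[1, 1, 0, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 1]], dtype=np.uint8)   # 1 = pixel of interest

mask4 = ndimage.generate_binary_structure(2, 1)     # up/down/left/right, like the first mask M1
mask8 = ndimage.generate_binary_structure(2, 2)     # adds diagonals, like the second mask M2

labels4, n4 = ndimage.label(binary, structure=mask4)
labels8, n8 = ndimage.label(binary, structure=mask8)
print(n4, n8)   # prints "2 1": 4-connectivity yields two objects, 8-connectivity merges them into one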

The processor 230 may calculate the number of pixels of interest for each classified object. For example, in FIG. 11B, the number of pixels of interest included in the object to which '1' is allocated is five, and the number of pixels of interest included in the object to which '2' is allocated is six. Thereafter, when the number of pixels of interest included in each object is smaller than a predetermined first value (e.g., 10) or greater than a second value (e.g., 300), the processor 230 recognizes the object as noise and removes the object .

Alternatively, the processor 230 may calculate the size of each classified object. For example, the size of the object to which '1' is allocated in FIG. 11B can be calculated as 9, the product of the maximum number of pixels in the x direction, 3, and the maximum number of pixels in the y direction, 3. As another example, the size of the object to which '2' is allocated in FIG. 11B can be calculated as 6, the product of the maximum number of pixels in the x direction, 2, and the maximum number of pixels in the y direction, 3. As another example, the size of the object to which '1' is allocated in FIG. 11C can be calculated as 24, the product of the maximum number of pixels in the x direction, 6, and the maximum number of pixels in the y direction, 4. Thereafter, when the size of each object is smaller than a predetermined first size (e.g., 50) or larger than a second size (e.g., 500), the processor 230 may recognize the object as noise and remove the object.
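A minimal sketch of this size-based noise removal is given below; it keeps only objects whose pixel count and bounding-box size fall within the limits. The numeric limits reuse the illustrative values mentioned above, and the helper name is hypothetical.

# Illustrative sketch: discard objects that are too small or too large to be landmark parts.
import numpy as np
from scipy import ndimage

def remove_noise(labels, min_pixels=10, max_pixels=300, min_size=50, max_size=500):
    kept = np.zeros_like(labels)
    for obj_id, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is None:
            continue
        pixel_count = int(np.sum(labels[sl] == obj_id))
        h, w = labels[sl].shape                       # bounding box of the object
        box_size = h * w
        if min_pixels <= pixel_count <= max_pixels and min_size <= box_size <= max_size:
            kept[labels == obj_id] = obj_id           # keep only non-noise objects
    return kept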

FIGS. 12 to 14 show an example of a method by which the landmark detection apparatus 200 according to an embodiment of the present invention classifies the pixels of interest in a binary image into at least one object.

FIG. 12 illustrates a first noise-removed image 1200 in which noise of the first type has been removed from the DoG image 1020 shown in FIG. 10C through the method described above with reference to FIG. Here, the first type of noise may include an object, among the plurality of objects, having a smaller number of pixels of interest than the first value or a size smaller than the first size.

Compared with the DoG image 1020 of FIG. 10C, objects of a small speckle type are no longer displayed in the first noise-removed image 1200.

FIG. 13 illustrates a second noise-removed image 1300 in which noise of the second type has been removed from the first noise-removed image 1200 shown in FIG. 12 through the method described above with reference to FIG. Here, the second type of noise may include an object, among the plurality of objects, having a larger number of pixels of interest than the second value or a size larger than the second size. Here, the second value or the second size may be determined based on the size of the vehicle.

Compared with FIGS. 10C and 12, an excessively large object (for example, an object appearing larger than the area of the vehicle or the left and right length of the input image 1000) no longer appears in the second noise-removed image 1300. For example, the object corresponding to the 'parking line' mark 1011 in the gray image 1010 shown in FIG. 10 (b) is recognized as noise of the second type and may no longer appear in the second noise-removed image 1300.

FIG. 14 shows an example of a labeling image 1400 in which a plurality of objects are classified into different colors. In the labeling image 1400, an object made up of a pixel of interest to which a specific value is assigned can be expressed in a different color from an object made up of pixels of interest assigned different values.

FIGS. 12 and 13 illustrate that the noise of the second type is removed after the noise of the first type is removed from the binary image 1020. However, the scope of the present invention is not limited thereto. For example, the first type of noise may be removed after the second type of noise is removed, or the first type and the second type of noise may be removed at the same time.

On the other hand, one of the objects obtained through the processes of FIGS. 12 to 14 may constitute a single landmark by itself, or may form a single landmark together with other objects. For example, 'ㅇ', which is one object of the landmark 1012 shown in FIG. 10 (b), should be grouped into the same cluster as the other consonant and vowel objects (e.g., 'ㅣ', 'ㄹ', 'ㅏ') constituting the same landmark, while it should be treated as a cluster different from the arrow object constituting the other landmark 1013. Hereinafter, a method for classifying objects constituting a common landmark into the same cluster will be described.

FIG. 15 is a diagram for explaining an example of a method by which the landmark detection apparatus 200 according to an embodiment of the present invention groups different objects into at least one cluster.

Referring to FIG. 15 (a), the processor 230 may sample at least one point from each of the objects classified as described above. Hereinafter, a sampled point will be referred to as a 'sample point'.

For example, suppose that two sample points Ps are sampled from a first object 1510 of an arbitrary binary image, five sample points Ps are sampled from a second object 1520, two sample points Ps are sampled from a third object 1530, and one sample point Ps is sampled from a fourth object 1540.

In this case, the processor 230 may connect the 10 sample points Ps with a plurality of connection lines using the Delaunay triangulation method, dividing the region defined by the 10 sample points Ps into a plurality of triangles, as shown in FIG. 15 (b). At this time, each edge of a triangle may be a connection line connecting two different sample points Ps.

For example, as shown in the drawing, applying the Delaunay triangulation method may generate a connection line L1 directly connecting a sample point Ps of the first object 1510 and a sample point Ps of the second object 1520, a connection line L2 directly connecting a sample point Ps of the second object 1520 and a sample point Ps of the third object 1530, and a connection line L3 directly connecting a sample point Ps of the third object 1530 and a sample point Ps of the fourth object 1540. Also, a connection line L4 connecting different sample points Ps of the same object 1520 can be generated. In addition, a connection line directly connecting a sample point Ps of the first object 1510 with a sample point Ps of the third object 1530 or the fourth object 1540 may not be generated.

The processor 230 may delete a connection line connecting sample points of the same object among a plurality of connection lines as shown in FIG. 15 (b). For example, as shown in FIG. 15C, the connection line L4 connecting the sample points Ps of the second object 1520 can be deleted.

Further, the processor 230 may delete a connection line exceeding the reference length among the plurality of connection lines shown in FIG. 15 (b) or 15 (c). For example, if the length of the connection line L1 connecting the first object 1510 and the second object 1520 and the length of the connection line L2 connecting the second object 1520 and the third object 1530 are less than the reference length, while the length of the connection line L3 connecting the third object 1530 and the fourth object 1540 exceeds the reference length, the processor 230 can delete only the connection line L3.

Thereafter, the processor 230 may calculate the similarity between a plurality of objects directly or indirectly connected by the remaining connection lines that have not been deleted. For example, the first object 1510 is directly connected to the second object 1520 by the connection line L1, and the second object 1520 is directly connected to the third object 1530 by the connection line L2, so the processor 230 may calculate the degree of similarity between the first to third objects 1510 to 1530. If the degree of similarity between the first to third objects 1510 to 1530 is equal to or greater than a preset reference value, the processor 230 may classify the first to third objects 1510 to 1530 into a single cluster. Of course, it is obvious to those skilled in the art that the fourth object 1540 can be classified into a cluster different from the one to which the first to third objects 1510 to 1530 belong.
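A minimal sketch of this adjacency test, assuming scipy's Delaunay triangulation, is given below; the sample points, their object assignments, and the reference length are illustrative assumptions.

# Illustrative sketch: connection lines by Delaunay triangulation, then adjacency of objects.
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

points = np.array([[0, 0], [2, 1],               # sample points of object 1
                   [10, 0], [11, 2], [12, 1],    # sample points of object 2
                   [30, 0]])                     # sample point of object 3
object_ids = np.array([1, 1, 2, 2, 2, 3])
reference_length = 15.0

edges = set()
for simplex in Delaunay(points).simplices:       # each simplex is one triangle
    for i, j in combinations(simplex, 2):
        edges.add((min(i, j), max(i, j)))

adjacent_pairs = set()
for i, j in edges:
    if object_ids[i] == object_ids[j]:
        continue                                 # drop lines inside the same object
    if np.linalg.norm(points[i] - points[j]) < reference_length:
        adjacent_pairs.add((min(object_ids[i], object_ids[j]),
                            max(object_ids[i], object_ids[j])))
print(adjacent_pairs)                            # e.g. {(1, 2)}: only objects 1 and 2 are adjacent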

FIGS. 16A to 16C are diagrams for explaining an example of a method by which the landmark detection apparatus 200 according to an embodiment of the present invention groups objects of a binary image. For ease of understanding, the description will be based on the noise-removed image 1300 shown in FIG. 13.

FIG. 16A illustrates an image 1600a in which a plurality of connection lines are generated by connecting the sample points Ps of the objects in the noise-removed image 1300 through the Delaunay triangulation method. As described above, a plurality of sample points may be sampled for each object, and a sample point of a specific object may be connected by a connection line to another sample point of the same object or to a sample point of another object.

FIG. 16B illustrates an image 1600b in which the connection lines connecting sample points of the same object have been deleted from the connection lines of the image 1600a shown in FIG. 16A. That is, the processor 230 may delete the connection lines connecting different sample points of the same object, which are unnecessary for determining whether to group objects, from all the generated connection lines, thereby reducing the total number of connection lines. Accordingly, the amount of computation required to determine whether to group objects can be greatly reduced.

FIG. 16C illustrates an image 1600c in which the connection lines whose length exceeds a reference value have been deleted from the connection lines of the image 1600b shown in FIG. 16B. That is, the processor 230 may generate an image 1600c including only connection lines whose length is less than the reference value among the connection lines of the image 1600b shown in FIG. 16B. For example, as shown in the drawing, the image 1600c retains the connection lines connecting the object corresponding to 'o' and the object corresponding to 'l' in the 'one way' mark 1012, while the connection lines connecting the objects of the 'one way' mark 1012 and the object corresponding to the 'arrow' mark 1013 may no longer appear.

The processor 230 may calculate the degree of similarity between two or more objects that are directly or indirectly connected by at least one connection line. For example, taking the 'one way' mark 1012 as a reference, the similarity between the object corresponding to 'o' and the object corresponding to 'l' directly connected to it can be calculated. In addition, the similarity with respect to the object corresponding to 'r' directly connected to the object corresponding to 'l' can be calculated, and the similarity with respect to objects that are only indirectly connected can also be calculated. That is, when the objects constituting the 'one way' mark 1012 are directly or indirectly connected to each other, the processor 230 can calculate the similarities between the objects corresponding to the consonants and vowels of the 'one way' mark 1012 simultaneously or sequentially. Of course, similarities can be calculated for objects included in other marks in the same manner.

For example, the processor 230 may calculate the color (the color appearing in the input image), the size, and the gradient of each of the object corresponding to 'o' and the object corresponding to 'l', apply predetermined weights to the color difference, the size difference, and the gradient difference between the two objects, and group the object corresponding to 'o' and the object corresponding to 'l' into the same cluster according to the result of comparing the weighted sum with a reference value. Of course, objects may also be grouped by taking into account additional parameters other than color, size, and gradient.
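A minimal sketch of the weighted similarity test and the resulting grouping is shown below; the feature triple (color, size, gradient), the weights, the similarity formula, and the reference value are assumptions chosen only to illustrate the flow described above.

# Illustrative sketch: group adjacent objects whose weighted similarity passes a reference value.
import numpy as np

def similarity(feat_a, feat_b, weights=(0.4, 0.3, 0.3)):
    """feat = (mean color, size, gradient); a smaller weighted difference gives a higher score."""
    diffs = np.abs(np.asarray(feat_a, dtype=float) - np.asarray(feat_b, dtype=float))
    return 1.0 / (1.0 + float(np.dot(weights, diffs)))

def cluster(features, adjacent_pairs, reference=0.1):
    parent = list(range(len(features)))              # union-find over object indices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in adjacent_pairs:
        if similarity(features[a], features[b]) >= reference:
            parent[find(a)] = find(b)                 # merge into the same cluster
    return [find(i) for i in range(len(features))]

feats = [(200, 40, 1.2), (198, 42, 1.1), (60, 400, 0.2)]
print(cluster(feats, [(0, 1), (1, 2)]))               # objects 0 and 1 merge; object 2 stays separate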

FIGS. 17A and 17B are diagrams for explaining a method by which the landmark detecting apparatus 200 according to an embodiment of the present invention extracts regions of interest from an input image.

FIG. 17A illustrates an image 1700a in which candidate regions of interest corresponding to the clusters grouped based on the image 1600c shown in FIG. 16C are set in the input image 1000 shown in FIG. 10A. For example, the processor 230 may set, in the input image 1000, a candidate region of interest that surrounds each cluster with a predetermined shape (e.g., a rectangle).
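As a simple illustration of this step, the sketch below draws one rectangular candidate region of interest around all pixels belonging to each cluster; the function name, the cluster representation, and the drawing color are hypothetical.

# Illustrative sketch: set a rectangular candidate region of interest per cluster.
import cv2
import numpy as np

def draw_candidate_rois(input_image, labels, clusters):
    """clusters: {cluster_id: [object labels]}; labels: labeled binary image."""
    out = input_image.copy()
    for members in clusters.values():
        ys, xs = np.where(np.isin(labels, members))   # all pixels of this cluster
        if xs.size == 0:
            continue
        cv2.rectangle(out, (int(xs.min()), int(ys.min())),
                      (int(xs.max()), int(ys.max())), (0, 255, 0), 2)
    return out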

That is, the processor 230 may set a candidate region of interest for each cluster in the input image 1000. On the other hand, even after the noise removal process described above with reference to FIGS. 12 and 13 is performed, candidate regions of interest (i.e., invalid regions of interest) corresponding to clusters of remaining objects that do not constitute actual landmarks may be set in the input image 1000 together with the candidate regions of interest corresponding to the clusters constituting actual landmarks (for example, reference numerals 1012-1014 in FIG. 10B), so it is necessary to remove such invalid regions of interest.

FIG. 17B illustrates an image 1700b in which the invalid regions of interest have been removed from the image 1700a. That is, the image 1700b shows only the valid regions of interest 1701-1703 among all the candidate regions of interest corresponding to the clusters in the image 1600c shown in FIG. 16C.

In one embodiment, the processor 230 may remove the invalid regions of interest from all the candidate regions of interest and classify the category of each valid region of interest, using an image word dictionary (Bag of Visual Words).

Specifically, the processor 230 can calculate key points and descriptors for the feature points of each candidate region of interest by the SIFT (Scale-Invariant Feature Transform) scheme or the SURF (Speeded Up Robust Features) scheme. The processor 230 then assigns each of the key points to one of the image words included in a predetermined image dictionary (dictionary or codebook) built through a K-means clustering algorithm or the like, and generates a histogram of feature vector values. Here, the feature vector value may correspond to the histogram value for each feature point. For example, the processor 230 may perform vector quantization on the key points.

Next, the processor 230 may input the generated histogram to the SVM (Support Vector Machine) pre-trained through the test images to classify the category for each candidate ROI. If the category corresponding to the particular candidate ROI is not found, the processor 230 may process the particular candidate ROI as an invalid ROI.
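A minimal sketch of this bag-of-visual-words classification is shown below, assuming OpenCV SIFT and a scikit-learn SVM that was trained beforehand with probability estimates enabled; the codebook, the confidence threshold, and the category labels are assumptions.

# Illustrative sketch: classify a candidate region of interest with SIFT + bag of visual words + SVM.
import cv2
import numpy as np
from sklearn.svm import SVC   # assumed stand-in for the pre-trained SVM

def bovw_histogram(roi_gray, codebook):
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(roi_gray, None)
    hist = np.zeros(len(codebook), dtype=np.float32)
    if descriptors is None:
        return hist                                   # no key points found
    for d in descriptors:                             # vector quantization of each key point
        word = int(np.argmin(np.linalg.norm(codebook - d, axis=1)))
        hist[word] += 1.0
    return hist / max(hist.sum(), 1.0)                # normalized visual-word histogram

def classify_roi(roi_gray, codebook, svm: SVC, min_confidence=0.5):
    hist = bovw_histogram(roi_gray, codebook).reshape(1, -1)
    probs = svm.predict_proba(hist)[0]                # requires an SVM trained with probability=True
    if probs.max() < min_confidence:
        return "invalid"                              # treat as an invalid region of interest
    return svm.classes_[int(np.argmax(probs))]        # e.g. "character", "number", or "symbol"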

Meanwhile, the processor 230 may recognize the landmark corresponding to each valid region of interest, taking into consideration the category of each region of interest.

For example, assume that the first landmark corresponding to the first valid region of interest is classified into a first category corresponding to characters, and the second landmark corresponding to the second valid region of interest is classified into a second category corresponding to symbols.

In this case, the processor 230 can recognize the first landmark by comparing the character templates previously stored in the memory 220 with the first landmark. As an example, the landmark 1012 shown in FIG. 10B can be compared with character templates. Further, the processor 230 can recognize the second landmark by comparing the symbol templates previously stored in the memory 220 with the second landmark. As an example, the landmark 1013 shown in FIG. 10B can be compared with the symbol templates. As a result, the recognition speed for each valid region of interest can be significantly improved.
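As a final illustration, the sketch below compares a valid region of interest only against the templates of its category using normalized cross-correlation; the template dictionary layout and the resizing strategy are assumptions.

# Illustrative sketch: category-restricted template matching for a valid region of interest.
import cv2

def recognize(roi_gray, templates_by_category, category):
    """templates_by_category: {"character": {name: image}, "symbol": {name: image}, ...}"""
    best_name, best_score = None, -1.0
    for name, template in templates_by_category[category].items():
        resized = cv2.resize(roi_gray, (template.shape[1], template.shape[0]))
        score = float(cv2.matchTemplate(resized, template, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score   # restricting the search to one category's templates speeds up recognition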

The embodiments of the present invention described above may be implemented not only by the apparatus and method, but also through a program for realizing the functions corresponding to the configurations of the embodiments of the present invention or a recording medium on which the program is recorded, and such implementation can easily be carried out by those skilled in the art from the description of the embodiments given above.

It is to be understood that the foregoing description is exemplary and explanatory and is intended to be illustrative, and that the present invention is not limited to the embodiments and drawings described above; all or some of the embodiments may be selectively combined so that various modifications may be made.

100: vehicle
200: Landmark detection device

Claims (14)

An interface for receiving an input image photographed by at least one camera provided in a vehicle; And
And a processor for performing image processing on the input image provided from the interface unit,
The processor comprising:
Generating a binary image corresponding to the input image,
A plurality of pixels of interest included in the binary image are labeled with a plurality of objects spaced apart from each other,
Classifying the plurality of objects into at least one cluster based on a degree of similarity between the plurality of objects,
Extracting candidate interest regions corresponding to the respective clusters from the input image,
Removing the ineffective area of interest from the candidate area of interest using an image word dictionary (Bag of Visual Word)
And recognizes a landmark corresponding to a valid ROI among the candidate ROIs based on the category of the ROI.
The method according to claim 1,
Wherein the input image is an AVM image.
The method according to claim 1,
The processor comprising:
Converts the input image into a gray image,
And applies the edge detection filter to the gray image to generate the binary image including the plurality of pixels of interest.
The method of claim 3,
The edge detection filter includes:
And a DoG (Difference of Gaussian) filter configured to detect pixels having a brightness value larger than a preset threshold value among all the pixels of the gray image.
The method according to claim 1,
The processor comprising:
And removes an object corresponding to noise from among the plurality of objects.
6. The method of claim 5,
The processor comprising:
Calculating the number of pixels of interest per object,
Recognizing, as the noise, an object including a number of pixels of interest smaller than a first value or larger than a second value among the plurality of objects,
Wherein the second value is larger than the first value.
6. The method of claim 5,
The processor comprising:
A bounding box for distinguishing any one of the plurality of objects from the remaining objects for each object,
An object corresponding to a bounding box having a size smaller than the first size or larger than the second size among the plurality of bounding boxes is recognized as the noise,
Wherein the second size is larger than the first size.
The method according to claim 1,
The processor comprising:
Sampling at least one sample point for each object,
And determines whether the object is adjacent to another object on the basis of a connection line between the sampled sample points.
9. The method of claim 8,
The processor comprising:
And determines whether the two objects are adjacent to each other based on a length of a connection line between a sample point of one object and a sample point of another object among two objects included in the plurality of objects.
9. The method of claim 8,
The processor comprising:
And generates the connection line through the Delaunay triangulation.
9. The method of claim 8,
The processor comprising:
Calculating a degree of similarity between the two objects based on at least one of a color, a size, and a gradient of each of the two objects when determining that two of the objects are adjacent to each other,
And classifies the two objects into the same cluster if the similarity between the two objects is equal to or greater than a preset reference value.
delete
Receiving an input image from a camera provided in the vehicle;
Generating a binary image based on intensity of each pixel of the input image;
Labeling a plurality of pixels of interest included in the binary image with a plurality of objects spaced apart from each other;
Classifying the plurality of objects into at least one cluster based on a degree of similarity between the plurality of objects;
Extracting a candidate ROI corresponding to each cluster from the input image;
Removing an ineffective region of interest from the candidate region of interest using a Bag of Visual Word and classifying the category according to the region of interest; And
And recognizing a landmark corresponding to a valid region of interest of the candidate ROI based on the category of the ROI.
14. The method of claim 13,
Wherein the generating the binary image comprises:
Converting the input image into a gray image; And
Applying the edge detection filter to the gray image to generate the binary image; And detecting the position of the vehicle.
KR1020150172239A 2015-12-04 2015-12-04 Land mark detecting apparatus and land mark detection method for vehicle KR101772178B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150172239A KR101772178B1 (en) 2015-12-04 2015-12-04 Land mark detecting apparatus and land mark detection method for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150172239A KR101772178B1 (en) 2015-12-04 2015-12-04 Land mark detecting apparatus and land mark detection method for vehicle

Publications (2)

Publication Number Publication Date
KR20170065894A KR20170065894A (en) 2017-06-14
KR101772178B1 true KR101772178B1 (en) 2017-08-25

Family

ID=59218499

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150172239A KR101772178B1 (en) 2015-12-04 2015-12-04 Land mark detecting apparatus and land mark detection method for vehicle

Country Status (1)

Country Link
KR (1) KR101772178B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020241954A1 (en) * 2019-05-31 2020-12-03 엘지전자 주식회사 Vehicular electronic device and operation method of vehicular electronic device
US11182886B2 (en) 2019-05-24 2021-11-23 Electronics And Telecommunications Research Institute Method and apparatus for image preprocessing

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102089343B1 (en) * 2018-06-26 2020-03-16 주식회사 수올리나 Around view monitoring system and calibration method for around view cameras
WO2020050498A1 (en) 2018-09-04 2020-03-12 씨드로닉스㈜ Method and device for sensing surrounding environment using image segmentation
KR102240839B1 (en) 2018-09-04 2021-04-16 씨드로닉스(주) Autonomous navigation method using image segmentation
US11514668B2 (en) 2018-09-04 2022-11-29 Seadronix Corp. Method and device for situation awareness
US11776250B2 (en) 2018-09-04 2023-10-03 Seadronix Corp. Method and device for situation awareness
EP3862997A4 (en) 2018-10-04 2022-08-10 Seadronix Corp. Ship and harbor monitoring device and method
DE102018133441A1 (en) 2018-12-21 2020-06-25 Volkswagen Aktiengesellschaft Method and system for determining landmarks in the surroundings of a vehicle
KR102423334B1 (en) * 2020-07-23 2022-07-20 숭실대학교산학협력단 Taillight detection method and apparatus
KR102550434B1 (en) * 2020-12-28 2023-07-04 (주)다보이앤씨 Method for identification of object
KR102414632B1 (en) * 2021-06-02 2022-06-30 (주)에이아이매틱스 Method for determining the location of a fixed object using multiple observation information
CN113673614B (en) * 2021-08-25 2023-12-12 北京航空航天大学 Metro tunnel foreign matter intrusion detection device and method based on machine vision

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101111046B1 (en) * 2010-05-13 2012-03-05 한남대학교 산학협력단 A Similar Video Search System through Object Detection Information and A Method thereof



Also Published As

Publication number Publication date
KR20170065894A (en) 2017-06-14

Similar Documents

Publication Publication Date Title
KR101772178B1 (en) Land mark detecting apparatus and land mark detection method for vehicle
US10877485B1 (en) Handling intersection navigation without traffic lights using computer vision
KR101832466B1 (en) Parking Assistance Apparatus and Vehicle Having The Same
KR101965834B1 (en) Parking Assistance Apparatus and Vehicle Having The Same
KR101838967B1 (en) Convenience Apparatus for Vehicle and Vehicle
KR101834348B1 (en) Drive assistance appratus and control method for the same
KR101708657B1 (en) Vehicle and control method for the same
KR101768500B1 (en) Drive assistance apparatus and method for controlling the same
KR101942793B1 (en) Driver Assistance Apparatus and Vehicle Having The Same
KR101855940B1 (en) Augmented reality providing apparatus for vehicle and control method for the same
EP3569447A1 (en) Driver assistance apparatus
KR102310782B1 (en) Driver Assistance Apparatus, Vehicle Having The Same and Vehicle Safety system
US11970156B1 (en) Parking assistance using a stereo camera and an added light source
KR20180037426A (en) Parking Assistance Apparatus and Vehicle Having The Same
US10882465B2 (en) Vehicular camera apparatus and method
KR101632179B1 (en) Driver assistance apparatus and Vehicle including the same
US10703374B2 (en) Vehicle driving assisting apparatus and vehicle comprising same
KR101832224B1 (en) Appratus and method for assisting a driver based on difficulty level of parking
KR101767507B1 (en) Display apparatus for a vehicle, and control method for the same
KR101850794B1 (en) Parking assist appratus and method for assisting parking
KR20170035238A (en) Vehicle and control method for the same
KR20170005663A (en) Display control apparatus for vehicle and operating method for the same
KR20170033612A (en) Driver Assistance Apparatus and Vehicle Having The Same
KR101929294B1 (en) Parking Assistance Apparatus and Vehicle Having The Same
KR101752798B1 (en) Vehicle and control method for the same

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant