KR20180087532A - An acquisition system of distance information in direction signs for vehicle location information and method - Google Patents

An acquisition system of distance information in direction signs for vehicle location information and method

Info

Publication number
KR20180087532A
Authority
KR
South Korea
Prior art keywords
image
object
step
text
present invention
Prior art date
Application number
KR1020170011214A
Other languages
Korean (ko)
Other versions
KR101944607B1 (en)
Inventor
김현태
정진성
조상복
Original Assignee
울산대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 울산대학교 산학협력단 filed Critical 울산대학교 산학협력단
Priority to KR1020170011214A priority Critical patent/KR101944607B1/en
Publication of KR20180087532A publication Critical patent/KR20180087532A/en
Application granted granted Critical
Publication of KR101944607B1 publication Critical patent/KR101944607B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00624 Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K9/00791 Recognising scenes perceived from the perspective of a land vehicle, e.g. recognising lanes, obstacles or traffic signs on road scenes
    • G06K9/00818 Recognising traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/60 Rotation of a whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/001 Image restoration
    • G06T5/003 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Abstract

The present invention relates to a system for acquiring distance information from a sign board in order to understand the location of a vehicle, capable of preventing errors caused by failures in global positioning system (GPS) communication, and a method thereof. According to one embodiment of the present invention, the method of acquiring distance information from a sign board comprises: a first step of acquiring an image through a camera; a second step of assigning a weight to any one color among red, green, and blue, and applying the weight to the image to acquire a first image in which the weighted color is emphasized; a third step of assigning a high value to points at or above a predetermined threshold value and a low value to points below the predetermined threshold value, to convert the first image into a binarized second image; a fourth step of recognizing the high-value points existing within a predetermined distance as one object when the number of high-value points in the second image within that distance is equal to or greater than a predetermined number; a fifth step of detecting a straight line included in the recognized object; a sixth step of calculating the inclined angle of the detected straight line with respect to the horizontal; a seventh step of rotating the recognized object by the inclined angle; an eighth step of correcting the resolution of the rotated object; a ninth step of highlighting the text included in the resolution-corrected object by using a structural element through a morphology technique, which manipulates the shape of connected components existing in the image; and a tenth step of comparing the image acquired through the camera with the object having the highlighted text to extract the text from the object.

Description

BACKGROUND OF THE INVENTION 1. Field of the Invention [0001] The present invention relates to a system and a method for acquiring distance information in a guide sign for identifying the location of a vehicle.

The present invention relates to a system and method for acquiring distance information in a guide sign for locating a vehicle. More particularly, the present invention relates to a method and apparatus for acquiring an input image through a camera, emphasizing colors using color detection, binarizing the result, securing a road guide sign area with a labeling technique, correcting distortions of the sign area (tilting, low resolution), and acquiring distance information through template matching.

An autonomous car is a vehicle that can carry out the main transport functions of an automobile on its own. It is also called an uncrewed vehicle, a driverless car, a self-driving car, or a robotic car.

An autonomous vehicle can detect its surrounding environment without human intervention and can be operated by automatic navigation.

Currently, such robotic cars exist as prototypes.

Autonomous vehicles detect the environment using radar, LIDAR, GPS, and computer vision technology.

More advanced control systems interpret this sensor information to identify the navigation path as well as obstacles and the associated signage.

Unmanned vehicles should be able to automatically update the map according to the sensor input to maintain the route even in unregistered environments or conditions.

Daimler's autonomous trucks are equipped with long-range and short-range radar, a stereoscopic camera, and adaptive cruise control technology. Active Cruise Control (ACC) and Active Brake Assist (ABA) adjust travel speed and deceleration using the long-range and short-range radar, and automatically maintain the distance between cars.

The long-range radar searches up to 250 meters ahead at an 18° viewing angle, and the short-range radar searches up to 70 meters at a 130° viewing angle.

The three-dimensional camera attached to the front glass of the truck searches up to 100 m within 45° horizontal and 27° vertical viewing angles and recognizes the lane markings. The 'Highway Pilot' system connects the front radar and the three-dimensional camera, providing functions such as collision avoidance, speed control, and deceleration.

In addition to detecting the real-time environment of the vehicle itself, such a vehicle also uses GPS (Global Positioning System) to operate a precise map-based forecasting system.

As new technologies related to such autonomous vehicles have emerged, new laws are also becoming essential. Countries around the world are enacting new laws to speed up the autonomous vehicle age, and they are also repealing existing regulations that have become stumbling blocks.

Autonomous vehicles require a license, just as a person does. Autonomous vehicles were first legally allowed on US roads in 2011, and the first trial license was acquired in May 2012, after Google actively engaged in dialogue with the state government of Nevada.

In Nevada, two people must ride together when Google tests a self-driving vehicle: one must monitor the situation from inside the vehicle and, if a problem occurs, operate the vehicle to prevent an accident. After Nevada, Florida allowed autonomous cars.

California saw even more dramatic legislative events. In October 2012, California Governor Jerry Brown visited the Google campus in Mountain View, California, to sign an autonomous vehicle safety standard bill. Michigan followed California, approving autonomous vehicle testing in December 2013. Autonomous vehicles can thus legally operate in four US states.

The UK Department of Transportation also announced plans to allow autonomous vehicles in the summer of 2014, planning to use three or more cities as experimental stages for autonomous vehicles and to invest a budget of KRW 17.4 billion to support the experiments.

In the UK, test runs of real autonomous vehicles began in February 2015 in Greenwich (London), Milton Keynes, and Coventry. In a designated area of Greenwich, an unmanned shuttle with a pedestrian detection function will also be operated.

In Korea, autonomous vehicle special zones will be established under the leadership of the Ministry of Land, Transport and Maritime Affairs and the Ministry of Commerce, Industry and Energy, which regard autonomous driving technology as a future growth engine. The ministries will secure special and exclusive zones for self-driving car trial operations within the year.

In this way, the time for the introduction and use of autonomous vehicles is approaching. However, since an autonomous vehicle relies solely on GPS information to grasp the distance between the current vehicle and an intersection, environmental factors (high-rise buildings, communication failures) leave much room for error.

To solve this problem, a system that recognizes a QR code to confirm the current position has been suggested. Although such a system can grasp the current position of the vehicle through an image, it has the problem that new signs must be installed and existing ones changed.

As a result, the QR code method cannot recognize the information on currently installed road guide signs, does not apply technical corrections according to the location and resolution of the camera, and cannot confirm the distance information in a road sign. A solution to these problems is needed.

Korean Intellectual Property Office Registration No. 10-0403741
Korean Intellectual Property Office Registration No. 10-0839578

SUMMARY OF THE INVENTION The present invention seeks to provide a user with a system and method for acquiring distance information within a guide sign for locating a vehicle.

More particularly, the present invention seeks to provide a method and apparatus for acquiring an input image through a camera, emphasizing colors using color detection, binarizing the result, securing a road guide sign area with a labeling technique, correcting distortions of the sign area (tilting, low resolution), and acquiring distance information through template matching.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and that other technical objects not mentioned herein will be clearly understood by those skilled in the art from the description below.

According to an aspect of the present invention, there is provided a distance information acquisition method in a guide sign, comprising: a first step of acquiring an image through a camera; a second step of obtaining a first image in which a weighted color is emphasized by assigning a weight to any one color among red, green, and blue and applying it to the image; a third step of converting the first image into a binarized second image by assigning a high value to points at or above a preset threshold value and a low value to points below the preset threshold value; a fourth step of recognizing the high-value points existing within a predetermined distance as one object when the number of high-value points of the second image within that distance is a predetermined number or more; a fifth step of detecting a straight line included in the recognized object; a sixth step of calculating the tilted angle of the detected straight line with respect to the horizontal; a seventh step of rotating the recognized object by the tilted angle; an eighth step of correcting the resolution of the rotated object; a ninth step of highlighting the text in the resolution-corrected object through a morphology technique that manipulates the shape of connected components existing in the image using a structural element; and a tenth step of extracting the text in the object by comparing the image obtained through the camera with the object in which the text is highlighted.

In addition, in the second step, the weighted color may be green.

In addition, in the third step, a high value may be given to a point brighter than the preset threshold value, and a low value may be given to a point darker than the preset threshold value.

In the fourth step, the recognized object may be plural.

In the fifth step, the straight line may be detected using Hough Transformation.

The eighth step may include the steps of: acquiring a frequency spectrum by Fourier-transforming an image related to the rotated object; passing the obtained frequency spectrum through a low pass filter (LPF); and recovering an image related to the rotated object by inverse-Fourier-transforming the frequency spectrum passed through the LPF.

In the ninth step, the structural element may be applied to the image related to the resolution-corrected object so that the points given the high value are highlighted, and the highlighted text can thereby be displayed more clearly than after the seventh step.

The tenth step may include: separating a plurality of texts existing in the image obtained through the camera using the blank space between them; comparing the separated texts with the object in which the text is highlighted; and extracting, from the object in which the text is highlighted, the text having a high degree of similarity with the separated texts.

In addition, the recognized object may be a guide sign associated with the vehicle.

According to another aspect of the present invention, there is provided a distance information acquisition device in a guide sign, comprising: a camera for acquiring an image; and a control unit which assigns a weight to any one color among red (RED), green (GREEN), and blue (BLUE) and applies it to the image to obtain a first image in which the weighted color is emphasized, assigns a high value to points at or above a preset threshold value and a low value to points below the preset threshold value to convert the first image into a binarized second image, recognizes the high-value points existing within a predetermined distance as one object when the number of high-value points of the second image within that distance is equal to or greater than a predetermined number, detects a straight line included in the recognized object, calculates the tilted angle of the detected straight line with respect to the horizontal, rotates the recognized object by the tilted angle, corrects the resolution of the rotated object, highlights the text in the resolution-corrected object through a morphology technique that manipulates the shape of connected components existing in the image using a structural element, and extracts the text in the object by comparing the image obtained through the camera with the object in which the text is highlighted.

Also, the weighted color may be green, and the control unit may give a high value to a point brighter than the preset threshold value and a low value to a point darker than the preset threshold value.

Further, the recognized object may be a plurality of guide signs related to the vehicle.

The control unit may detect the straight line using the Hough Transform, and may correct the resolution of the rotated object by Fourier-transforming an image related to the rotated object to obtain a frequency spectrum, passing the obtained frequency spectrum through a low pass filter (LPF), and inverse-Fourier-transforming the frequency spectrum passed through the LPF to recover an image related to the rotated object.

In addition, the control unit may apply the structural element to the image related to the resolution-corrected object so that the points given the high value are highlighted, thereby highlighting the text in the resolution-corrected object.

The control unit may also separate a plurality of texts existing in the image obtained through the camera using the blank space between them, compare the separated texts with the object in which the text is highlighted, and extract the text in the object by selecting the text having a high degree of similarity to the separated texts.

The present invention can provide a system and method for acquiring distance information in a guide sign for locating a vehicle to a user.

More particularly, the present invention can provide the user with a method and apparatus for acquiring an input image through a camera, emphasizing colors using color detection, binarizing the result, securing a road guide sign area with a labeling technique, correcting distortions of the sign area (tilting, low resolution), and acquiring distance information through template matching.

As a result, the present invention can provide a user with a system for acquiring distance information between a vehicle and an intersection in real time using only an image, and can prevent an error due to a GPS communication failure.

In addition, the present invention can constitute a system with a single camera sensor, can be additionally applied to technologies necessary for autonomous travel, and has a wide range of applications.

It should be understood, however, that the effects obtained by the present invention are not limited to the above-mentioned effects, and that other effects not mentioned will be clearly understood by those skilled in the art from the following description.

FIG. 1 is a block diagram of a system for acquiring distance information in a guide sign for locating a vehicle proposed by the present invention.
FIG. 2 is a flowchart illustrating a method of setting a route of a vehicle by comparing distance information obtained from the system proposed by the present invention with information received from the outside.
FIG. 3 is a flowchart illustrating a method of acquiring distance information in a guide sign for locating a vehicle proposed by the present invention.
FIG. 4 shows an example comparison between an existing color detection result and the color detection result proposed by the present invention.
FIG. 5 shows an example of a source image and a histogram in connection with the binarization technique of the present invention.
FIG. 6 illustrates an example of the first pixel search and stack storage in connection with the labeling technique of the present invention.
FIG. 7 shows the types of search masks in relation to the labeling technique of the present invention.
FIG. 8 shows a specific example of searching neighboring pixels in relation to the labeling technique of the present invention.
FIG. 9 shows the labeling results in relation to the labeling technique of the present invention.
FIG. 10 shows an example of the straight lines that a single point can have when applying the Hough transform technique to detect a straight line in an image in the method proposed by the present invention.
FIG. 11 shows an example of the straight lines that two points can have when applying the Hough transform technique to detect a straight line in an image in the method proposed by the present invention.
FIG. 12 shows an example of the process of converting into Hough space in order to detect a straight line in an image in the method proposed by the present invention.
FIG. 13 shows an example in which the original and the result are compared in connection with the detection of the straight line of a road signboard according to the present invention.
FIG. 14 shows the sign angle calculation process in the tilt correction of the present invention.
FIG. 15 shows an example of rotation about an arbitrary point in the tilt correction of the present invention.
FIG. 16 compares the results before and after compensation in the tilt correction of the present invention.
FIG. 17 shows an example of an original image, a frequency spectrum, and a zone display in connection with the resolution correction according to the present invention.
FIG. 18 shows an example of a shifted spectrum and zone display in connection with the resolution correction according to the present invention.
FIG. 19 shows the filtered result values with respect to the resolution correction according to the present invention.
FIG. 20 shows an example comparison of the results before and after correction with respect to the resolution correction according to the present invention.
FIG. 21 shows the types of structural elements with respect to the morphology proposed by the present invention.
FIG. 22 shows an example of an original image and a structural element with respect to the morphology proposed by the present invention.
FIG. 23 shows an example in which the resultant character values are compared with the original by applying the morphology proposed by the present invention.
FIG. 24 shows a specific example of determining a margin in the template matching proposed by the present invention.
FIG. 25 is a diagram for explaining the template matching method proposed by the present invention.
FIG. 26 is a table showing similarity values for distances in road guide signs according to the method of acquiring distance information in a guide sign for identifying the position of a vehicle proposed by the present invention.
FIG. 27 is a table showing the detection rates of road signs detected under normal conditions according to the method of acquiring distance information in guide signs for locating a vehicle proposed by the present invention.
FIG. 28 is a table summarizing the detection rate of signs on cloudy and rainy days according to the method of acquiring distance information in a guide sign for identifying the position of a vehicle proposed by the present invention.

It is noted that the technical terms used in the present invention are used only to describe specific embodiments and are not intended to limit the present invention. Unless otherwise defined herein, the technical terms used in the present invention should be construed in the sense generally understood by a person having ordinary skill in the art to which the present invention belongs, and should not be interpreted in an excessively broad or excessively narrow sense. In addition, when a technical term used herein is an erroneous term that does not accurately express the concept of the present invention, it should be replaced with a technical term that can be correctly understood by those skilled in the art. Furthermore, the general terms used in the present invention should be interpreted according to their dictionary definitions or context, and should not be interpreted in an excessively narrow sense.

Furthermore, the singular expressions used in the present invention include plural expressions unless the context clearly dictates otherwise. In the present invention, terms such as "comprising" or "including" should not be construed as necessarily encompassing all of the various elements or steps described in the invention; some elements or steps may not be included, or additional elements or steps may further be included.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like or similar elements throughout the several views, and redundant description thereof will be omitted.

In the following description, well-known functions or constructions are not described in detail, since they would obscure the invention in unnecessary detail. The accompanying drawings are provided only to facilitate understanding of the present invention and should not be construed as limiting its scope.

As described above, the time for the introduction and availability of autonomous vehicles is approaching, yet an autonomous vehicle relies only on GPS information to grasp the distance between the current vehicle and an intersection; environmental factors (high-rise buildings, communication failures) therefore leave much room for error.

To solve this problem, a system that recognizes a QR code to confirm the current position has been suggested. Although such a system can grasp the current position of the vehicle through an image, it has the problem that new signs must be installed and existing ones changed.

As a result, the QR code method cannot recognize the information on currently installed road guide signs, does not apply technical corrections according to the location and resolution of the camera, and cannot confirm the distance information in a road sign. A solution to these problems is needed.

SUMMARY OF THE INVENTION Accordingly, the present invention is directed to provide a system and method for acquiring distance information in a guide sign for locating a vehicle to solve the problem.

More particularly, the present invention provides a method and apparatus for acquiring an input image through a camera, emphasizing colors using color detection, binarizing the result, securing a road guide sign area with a labeling technique, correcting distortions of the sign area (tilting, low resolution), and acquiring distance information through template matching.

Before describing the technical features proposed by the present invention, a block diagram of an apparatus or system for acquiring distance information in a guide sign for locating a vehicle proposed by the present invention will be described with reference to FIG.

Referring to FIG. 1, a system 100 for acquiring distance information in a guide sign includes a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a control unit 180, a power supply unit 190, and the like.

However, the components shown in FIG. 1 are not essential, so a system for acquiring distance information in a guide sign may be implemented with more or fewer components.

Hereinafter, the components will be described in order.

The wireless communication unit 110 may include one or more modules enabling wireless communication between the system for acquiring distance information in a guide sign and a wireless communication system, or between the system and the network in which the system is located.

For example, the wireless communication unit 110 may include a mobile communication module 112, a wireless Internet module 113, a short distance communication module 114, a location information module 115, and the like.

The mobile communication module 112 transmits and receives a radio signal to at least one of a base station, an external device, and a server on a mobile communication network.

The radio signal may include various types of data according to the transmission and reception of text and multimedia messages.

The wireless Internet module 113 refers to a module for wireless Internet access, and may be built into or externally attached to the system for acquiring distance information in a guide sign. WLAN (Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like can be used as wireless Internet technologies.

The short-range communication module 114 refers to a module for short-range communication. Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-WideBand (UWB), ZigBee, Wireless Fidelity (Wi-Fi), and the like can be used as short-range communication technologies.

The position information module 115 is a module for obtaining the position of the system for acquiring the distance information in the guide sign, and a representative example thereof is a Global Position System (GPS) module.

Referring to FIG. 1, the A/V (Audio/Video) input unit 120 is for inputting an audio signal or a video signal, and may include a camera 121 and a microphone 122. The camera 121 processes image frames, such as still images or moving images, obtained by its image sensor in the photographing mode. The processed image frames can be displayed on the display unit 151.

The image frames processed by the camera 121 may be stored in the memory 160 or transmitted to the outside through the wireless communication unit 110. Two or more cameras 121 may be provided depending on the use environment.

The microphone 122 receives an external sound signal in a recording mode, a voice recognition mode, or the like, and processes it into electrical voice data. The processed voice data can be converted into a form transmittable to a mobile communication base station through the mobile communication module 112 and output. Various noise reduction algorithms may be implemented in the microphone 122 to remove the noise generated while receiving an external sound signal.

Next, the far-infrared camera 122 captures infrared light in a band that people cannot see.

Far-infrared (FIR) light is also referred to as LWIR (Long Wavelength Infrared) and has wavelengths of 8 μm to 15 μm. In the FIR band, temperatures can be distinguished because the emitted wavelength varies with temperature.

The body temperature of a person (pedestrian), a typical target of the far-infrared camera 122, corresponds to a wavelength of about 10 μm, and images, moving images, and the like of a specific object can be captured through the far-infrared camera 122 even at night.

Next, the user input unit 130 generates input data for the user to control the operation of the system for acquiring distance information in a guide sign. The user input unit 130 may include a keypad, a dome switch, a touch pad (static pressure/capacitance), a jog wheel, a jog switch, and the like.

The sensing unit 140 detects the current state of the system for acquiring distance information in a guide sign, such as its position, the presence or absence of user contact, its orientation, and its acceleration or deceleration, and generates a sensing signal for controlling the operation of the system.

The sensing unit 140 may sense whether the power supply unit 190 is powered on, whether the interface unit 170 is connected to an external device, and the like.

The output unit 150 is for generating output related to visual, auditory, or tactile senses, and may include a display unit 151, an audio output module 152, an alarm unit 153, a haptic module 154, a projector module 155, a head-up display (HUD) 156, a head mounted display (HMD) 157, and the like.

The display unit 151 displays (outputs) information processed in the system for acquiring the distance information in the guide sign.

The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, and a 3D display.

Some of these displays may be transparent or light-transmissive so that the outside can be seen through them. These can be referred to as transparent displays, a typical example being the TOLED (Transparent OLED). The rear structure of the display unit 151 may also be light-transmissive. With this structure, the user can see an object located behind the system body through the area occupied by the display unit 151.

There may be two or more display units 151 depending on the implementation of the system for acquiring the distance information within the guide sign. For example, in a system for acquiring distance information in a guide sign, a plurality of display portions may be spaced apart from one another or may be disposed integrally with each other, and may be disposed on different surfaces.

When the display unit 151 and a sensor for sensing a touch operation (hereinafter, a 'touch sensor') form a mutual layer structure (hereinafter, a 'touch screen'), the display unit 151 can also be used as an input device in addition to an output device. The touch sensor may take the form of, for example, a touch film, a touch sheet, or a touch pad.

The touch sensor may be configured to convert a change in the pressure applied to a specific portion of the display unit 151, or a change in the capacitance generated at a specific portion, into an electrical input signal. The touch sensor can be configured to detect not only the touched position and area but also the pressure at the time of the touch.

If there is a touch input to the touch sensor, the corresponding signal(s) are sent to a touch controller. The touch controller processes the signal(s) and transmits the corresponding data to the control unit 180. In this way, the control unit 180 can know which area of the display unit 151 has been touched.

The proximity sensor 141 may be disposed in an inner region of the system surrounded by the touch screen, or near the touch screen. The proximity sensor refers to a sensor that detects, without mechanical contact, the presence or absence of an object approaching a predetermined detection surface or existing nearby, using the force of an electromagnetic field or infrared rays. The proximity sensor has a longer life span than a contact sensor, and its utility is also higher.

Examples of the proximity sensor include a transmission-type photoelectric sensor, a direct-reflection-type photoelectric sensor, a mirror-reflection-type photoelectric sensor, a high-frequency oscillation proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, and an infrared proximity sensor. When the touch screen is electrostatic, it is configured to detect the proximity of a pointer by the change in the electric field as the pointer approaches. In this case, the touch screen (touch sensor) may be classified as a proximity sensor.

Hereinafter, for convenience of explanation, the act of bringing a pointer close to the touch screen without contact, so that the pointer is recognized as positioned on the touch screen, is referred to as a 'proximity touch', and the act of actually touching the pointer on the screen is called a 'contact touch'. The position of a proximity touch on the touch screen means the position at which the pointer vertically corresponds to the touch screen during the proximity touch.

The proximity sensor detects a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch direction, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, and the like). Information corresponding to the detected proximity touch operation and the proximity touch pattern may be output on the touch screen.

The audio output module 152 may output audio data received from the wireless communication unit 110 or stored in the memory 160 in a recording mode, a voice recognition mode, a broadcast receiving mode, or the like. The sound output module 152 also outputs an acoustic signal related to a function performed in the system for acquiring the distance information in the guide sign. The audio output module 152 may include a receiver, a speaker, a buzzer, and the like.

The alarm unit 153 outputs a signal for notifying the occurrence of an event in the system for acquiring the distance information in the guide sign.

The alarm unit 153 may output a signal for notifying the occurrence of an event in a form other than the video signal or the audio signal, for example, vibration.

The video signal or the audio signal may be output through the display unit 151 or the audio output module 152 so that they may be classified as a part of the alarm unit 153.

The haptic module 154 generates various tactile effects that the user can feel. A typical example of the haptic effect generated by the haptic module 154 is vibration. The intensity and pattern of the vibration generated by the haptic module 154 can be controlled.

For example, different vibrations may be synthesized and output, or output sequentially.

In addition to vibration, the haptic module 154 can generate various tactile effects, such as a pin arrangement moving vertically against the contacted skin surface, a spraying or suction force of air through an injection or suction port, brushing against the skin surface, contact with an electrode, electrostatic force, and the reproduction of a cold or warm sensation using an endothermic or exothermic element.

The haptic module 154 can be implemented not only to transmit a tactile effect through direct contact but also to allow the user to feel a tactile effect through the muscular sense of a finger or arm. Two or more haptic modules 154 may be provided according to the configuration of the system for acquiring distance information in a guide sign.

The projector module 155 is a component for performing an image projection function using the system for acquiring distance information in a guide sign, and can display, on an external screen or wall, an image identical to or at least partially different from the image displayed on the display unit 151, in accordance with a control signal of the control unit 180.

Specifically, the projector module 155 may include a light source (not shown) that generates light (for example, laser light) for outputting an image to the outside, an image generating means (not shown) for generating the image to be output using the light generated by the light source, and a lens (not shown) for enlarging and outputting the image at a predetermined focal distance to the outside. Further, the projector module 155 may include a device (not shown) capable of mechanically moving the lens or the entire module to adjust the image projection direction.

The projector module 155 can be classified into a CRT (Cathode Ray Tube) module, an LCD (Liquid Crystal Display) module, and a DLP (Digital Light Processing) module according to the type of display means. In particular, the DLP module enlarges and projects an image generated by reflecting the light from the light source off a DMD (Digital Micromirror Device) chip, which can be advantageous for miniaturizing the projector module 155.

Preferably, the projector module 155 may be provided longitudinally on the side, front, or back of the system for acquiring distance information in a guide sign. Of course, the projector module 155 may be provided at any position of the system as necessary.

In addition, the head-up display (HUD) 156 refers to a device for projecting the current vehicle speed, remaining fuel amount, navigation route information, and the like in a vehicle or the like as a graphic image on a window portion in front of the driver.

The information obtained through the far infrared camera 122 may also be output through the head-up display 156.

In addition, a head mounted display (HMD) 157 is a typical device capable of outputting virtual reality information.

Virtual reality refers to creating a specific environment or situation as a 3D environment through a computer, and to the human-computer interaction that makes the person using the 3D content feel as if they were interacting with their actual surroundings and environment.

Generally, the three-dimensional sensation perceived by a person depends on the degree of change in the thickness of the eye's lens according to the position of the observed object, the angle difference between the two eyes and the object, the differences in the position and shape of the object as seen by the left and right eyes, the parallax caused by the movement of the object, and various other psychological and memory effects.

Among them, binocular disparity, which arises because a person's two eyes are about 6.5 cm apart in the horizontal direction, is the most important factor in perceiving a stereoscopic effect. Because of binocular disparity, the two eyes view an object at slightly different angles, so the images reaching each eye differ; when these two images are transmitted to the brain through the retinas, the brain fuses them precisely, allowing the original 3D stereoscopic image to be perceived.

These stereoscopic 3D contents have already been widely used in various media fields and have been well received by consumers; 3D movies, 3D games, and experiential displays are examples.

As described above, along with the universalization of 3D virtual reality content, there is a need to develop technology capable of providing a more immersive virtual reality service.

Generally, such an image display device uses a precision optical device to form a focal point for an image generated at a position very close to the eye, so that a virtual large-sized screen appears to be formed at a distance, allowing the user to view an enlarged virtual image.

Such image display devices include a see-closed type, in which only the image light emitted from the display device is visible and the surrounding environment is not, and a see-through type, in which the surrounding environment can be seen together with the image light emitted from the display device.

The head mounted display (HMD) 157 according to the present invention refers to various digital devices such as glasses that are worn on the head to receive multimedia contents. Various wearable computers (Wearable Computers) have been developed in accordance with the trend of weight reduction and miniaturization of digital devices, and HMDs are also widely used. The HMD 157 can be combined with augmented reality technology, N-screen technology, etc., beyond a simple display function, to provide various convenience to the user.

For example, when a microphone and a speaker are mounted on the HMD 157, the user can make a telephone call while wearing the HMD 157. Likewise, when the far-infrared camera 122 is mounted on the HMD 157, the user can capture images in a desired direction while wearing the HMD 157.

In addition, the memory unit 160 may store programs for the processing and control of the control unit 180, and may temporarily store input/output data (e.g., messages, audio, still images, and moving images). The frequency of use of each piece of data may also be stored in the memory unit 160. In addition, the memory unit 160 may store data on the various patterns of vibration and sound output when a touch is input on the touch screen.

The memory 160 may include at least one storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (for example, SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), a magnetic disk, and an optical disk. The system for acquiring distance information in a guide sign may also operate in connection with web storage that performs the storage function of the memory 160 over the Internet.

The interface unit 170 serves as a path to all external devices connected to the system for acquiring distance information in a guide sign. The interface unit 170 receives data or power from an external device and transmits it to each component in the system, or allows data in the system to be transmitted to an external device. For example, a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video input/output (I/O) port, an earphone port, and the like may be included in the interface unit 170.

The identification module is a chip that stores various information for authenticating the right to use the system for acquiring distance information in a guide sign, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. A device with an identification module (hereinafter, an 'identification device') can be manufactured in a smart card format; thus, the identification device can be connected to the system through a port.

The interface unit may serve as a path through which power from an external cradle is supplied to the system for acquiring distance information in a guide sign when the system is connected to the cradle, or as a path through which various command signals input from the cradle by the user are transmitted to the system. Such command signals or power input from the cradle may serve as a signal for recognizing that the system is correctly mounted on the cradle.

The controller 180 typically controls the overall operation of the system to acquire distance information within the signage.

The power supply unit 190 receives external power and internal power under the control of the controller 180 and supplies power necessary for operation of the respective components.

The various embodiments described herein may be embodied in a recording medium readable by a computer or similar device using, for example, software, hardware, or a combination thereof.

According to a hardware implementation, the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and other electronic units for performing functions. In some cases, the embodiments described herein may be implemented by the control unit 180 itself.

According to a software implementation, embodiments such as the procedures and functions described herein may be implemented with separate software modules. Each software module may perform one or more of the functions and operations described herein. Software code can be implemented as a software application written in a suitable programming language, stored in the memory 160, and executed by the control unit 180.

On the other hand, a method of setting the route of the vehicle by comparing the distance information obtained by the system 100 proposed by the present invention with the information received from the outside will be described based on the above-described configuration of the present invention.

FIG. 2 is a flowchart illustrating a method of setting a route of a vehicle by comparing distance information obtained from a system proposed by the present invention with information received from the outside.

Referring to FIG. 2, the method first proceeds with step S1, in which the autonomous vehicle receives driving information from the outside.

At this time, if the autonomous vehicle relies only on GPS information to grasp the distance between the current vehicle and an intersection, environmental factors (high-rise buildings, communication failures) introduce considerable error.

Accordingly, after step S1, the step S2 of obtaining the distance information in the road guide sign through the RGB camera 121 is performed.

Thereafter, the process proceeds to step S3 in which the received travel information is compared with the obtained distance information to set a route.

Therefore, since the route is set together with the distance information acquired in the current state in addition to the GPS information, the autonomous vehicle can more accurately perform a given task.
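
As a simple illustration of step S3, the comparison logic can be sketched as follows in Python. This is only a minimal sketch: the function name set_route, the distance variables, and the tolerance-based fallback policy are assumptions for illustration, since the patent does not specify the exact comparison rule.

    def set_route(gps_distance_m, sign_distance_m, tolerance_m=15.0):
        # Compare the GPS-derived distance to the intersection (driving
        # information received in step S1) with the distance read from the
        # road guide sign (step S2). The tolerance and fallback policy are
        # illustrative assumptions, not taken from the patent.
        if abs(gps_distance_m - sign_distance_m) > tolerance_m:
            # A large disagreement suggests a GPS error (e.g., high-rise
            # buildings or a communication failure), so trust the sign.
            return sign_distance_m
        return gps_distance_m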

In particular, the present invention proposes an efficient method (S2) for obtaining distance information in a road guide sign through the RGB camera 121.

Fig. 3 shows a flowchart for explaining a method (S2) for acquiring distance information in a guide sign for locating a vehicle proposed by the present invention.

Referring to FIG. 3, the method S2 starts with step S21 of acquiring an input image through the RGB camera 121.

Thereafter, the control unit 180 performs step S22 of highlighting the color of the input image through the color detection technique.

In step S22, with the input image acquired through the RGB camera 121, the controller 180 emphasizes colors using color detection.

Color detection can be performed in a manner robust to noise through step S22.

The color detection method proposed by the present invention detects an object by emphasizing a color, and uses RGB eigenvalues among color models such as RGB, YCbCr, and HSI, as shown in Equation (1) below.

(Equation 1: image not reproduced here)

In Equation (1), the terms denote, respectively, the red data, the green data, the blue data, and the image in which green is emphasized.

At this time, the controller 180 removes noise by defining a non-linear characteristic equation from these eigenvalues, as shown in Equations (2) and (3) below.

(Equations 2 and 3: images not reproduced here)

In Equations (2) and (3), the terms denote, respectively, the nonlinear characteristic equation, a weight, and the improved image in which green is emphasized.

The color detection method proposed by the present invention applies only to a color channel to be detected, thereby reducing the amount of computation and operating quickly.

This is because color detection for extracting a road sign must be fast but also needs noise improvement; the proposed method improves noise robustness while maintaining speed.

FIG. 4 shows an example comparison between an existing color detection result and the color detection result proposed by the present invention.

FIG. 4(a) shows the result of conventional color detection, and FIG. 4(b) shows the result of applying the color detection scheme proposed by the present invention.

Referring to FIG. 4, it can be seen that the result of the color detection method according to the present invention is robust to noise and fast.
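
Since Equations (1) to (3) are reproduced in the original only as images, the following Python/OpenCV sketch illustrates the green-emphasis idea under an assumed weighting; the weight value and the exact channel combination are illustrative assumptions, not the patent's Equations (1) to (3).

    import cv2
    import numpy as np

    def emphasize_green(bgr, weight=1.5):
        # OpenCV stores color images in B, G, R channel order.
        b, g, r = cv2.split(bgr.astype(np.float32))
        # Boost the green channel and suppress red and blue so that green
        # road guide signs stand out. This expression is an illustrative
        # stand-in for Equations (1) to (3).
        emphasized = weight * g - 0.5 * (r + b)
        return np.clip(emphasized, 0, 255).astype(np.uint8)

    frame = cv2.imread("road_scene.jpg")  # hypothetical input image (step S21)
    green_map = emphasize_green(frame)    # single-channel, green-emphasized image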

In addition, the control unit 180 performs a step S23 of binarizing the highlighted image.

In step S23 according to the present invention, the threshold value for binarization is determined using Otsu binarization.

Specifically, Otsu binarization finds the valley in the histogram and sets that point as the threshold value for binarization.

FIG. 5 shows an example of a source image and a histogram in connection with the binarization technique of the present invention.

FIG. 5(a) shows the original image, and FIG. 5(b) shows the histogram.

In step S23, as shown in FIG. 5(b), the threshold value can be obtained through Equation (4) below after constructing the histogram (classes).

$\sigma_w^2(t) = w_1(t)\,\sigma_1^2(t) + w_2(t)\,\sigma_2^2(t)$  (4)

In Equation (4), $w_1$ means the ratio of pixels darker than the threshold value, $w_2$ means the ratio of pixels brighter than the threshold value, and $\sigma_n^2$ represents the variance of class $n$.

In the present invention, the binarization threshold becomes an optimal threshold value as the variance of both classes becomes smaller.

That is, by finding the minimum value in Equation (4), an optimal threshold value can be found.

In the present invention, the purpose of this step is to calculate the binarization threshold, and the binarization threshold for each image can be calculated automatically.
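
As a minimal sketch of step S23, OpenCV's built-in Otsu mode scans all candidate thresholds and selects the one that minimizes the weighted within-class variance of Equation (4); the input file name below is a placeholder.

    import cv2

    # Load the green-emphasized image as a single-channel (grayscale) array.
    gray = cv2.imread("green_map.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

    # THRESH_OTSU ignores the supplied threshold (0) and computes the value
    # that minimizes the weighted within-class variance of Equation (4).
    t, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 'binary' is the binarized second image: 255 (high value) at or above
    # the computed threshold, 0 (low value) below it.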

In addition, the controller 180 performs a step S24 of applying a labeling or labeling technique to the binarized image.

Labeling recognizes a cluster of pixels as a single object by searching the binarized image, using either a 4-neighbor or an 8-neighbor search.

FIG. 6 illustrates an example of the first pixel search and stack storage in connection with the labeling technique of the present invention; FIG. 7 illustrates the types of search masks; FIG. 8 shows a specific example of searching neighboring pixels; and FIG. 9 shows the labeling result.

Regarding the implementation of labeling, as shown in FIG. 6(a), the control unit 180 searches until it reaches the point (2,2), where a pixel value exists.

Then, the control unit 180 stores the coordinate values of the pixel in the memory 160, as shown in FIG. 6(b).

Thereafter, the control unit 180 searches the neighboring pixels as shown in FIG. 8(a), using the 8-neighbor mask of FIG. 7(b).

Then, the control unit 180 stores the coordinate values of those pixels in the memory 160, as shown in FIG. 8(b).

By repeating this process, the control unit 180 can obtain the result as shown in FIG.

The labeling technique according to the present invention is applied in order to store the sign area; in this way, the sign region can be constructed.
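
A minimal sketch of the stack-based 8-neighbor labeling described above (FIGS. 6 to 9) might look as follows; the function name and array conventions are assumptions for illustration.

    import numpy as np

    def label_8_neighbor(binary):
        # 'binary' is a 2-D array whose nonzero pixels passed binarization.
        h, w = binary.shape
        labels = np.zeros((h, w), dtype=np.int32)
        count = 0
        # The eight neighboring offsets (the 8-neighbor mask of FIG. 7(b)).
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        for y in range(h):
            for x in range(w):
                if binary[y, x] and labels[y, x] == 0:
                    count += 1                # start a new object
                    stack = [(y, x)]          # store the seed pixel (FIG. 6(b))
                    labels[y, x] = count
                    while stack:
                        cy, cx = stack.pop()
                        for dy, dx in offsets:    # search neighbors (FIG. 8)
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = count
                                stack.append((ny, nx))
        return labels, count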

Thereafter, the control unit 180 performs a step S25 of determining the road guide sign area in the image to which the labeling technique is applied.

In step S25, Hough Transformation may be applied first.

Hough Transformation is a method used to detect a straight line in an image; a straight line in two-dimensional space can be expressed by Equation (5) below.

$y = ax + b$  (5)

In Equation (5), $a$ is the slope of the straight line and $b$ is the y-intercept.

FIG. 10 shows an example of the straight lines that a single point can have when applying the Hough transform technique to detect a straight line in an image in the method proposed by the present invention.

FIG. 10(a) shows the position of the point; in the form of Equation (5), the straight lines that this point can have are shown in FIG. 10(b).

FIG. 11 shows an example of the straight lines that two points can have when applying the Hough transform technique to detect a straight line in an image in the method proposed by the present invention.

If there are two points, as shown in FIG. 11(a), and each is expressed in the form of Equation (6) below, the result shown in FIG. 11(b) can be obtained.

$b = -x_i\,a + y_i$  (6)

In Equation (6), $(x_i, y_i)$ denotes each given point.

Here, an intersection point in this space corresponds to the straight line passing through both points, so the more intersections at a point, the higher the probability that a straight line exists there.

In addition, when the straight line is parallel to the y-axis, the range of the slope becomes infinite, so it is necessary to change to a finite space.

FIG. 11(b) can be transformed as shown in FIG. 12(b) by using the following Equation (7).

$$\rho = x\cos\theta + y\sin\theta \tag{7}$$

FIG. 12 shows an example of a process of converting into a Hough space in order to detect a straight line in an image in the method proposed by the present invention.

FIG. 12(a) shows the original space, and FIG. 12(b) shows the conversion into the Hough space.

As a result, the straight line at the upper part of the road signboard can be detected, as shown in FIG. 13.

FIG. 13 shows an example in which the original and the detection result are compared in connection with the straight-line detection of the road signboard according to the present invention.

In step S25, the Hough Transformation thus detects the straight line of the road signboard, which in turn allows its slope to be corrected by the trigonometric method.
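For illustration, the sketch below detects straight lines in the $(\rho, \theta)$ space of Equation (7) using OpenCV's Hough transform; the file name and the Canny and accumulator thresholds are assumptions made for the example.

```python
import cv2
import numpy as np

# "binary_sign.png" is a hypothetical file name for the binarized sign region.
binary_sign = cv2.imread("binary_sign.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(binary_sign, 50, 150)

# Each detected line is returned as (rho, theta), with
# rho = x*cos(theta) + y*sin(theta) as in Equation (7).
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
if lines is not None:
    rho, theta = lines[0][0]
    # A near-horizontal upper edge has theta close to 90 degrees; the
    # deviation from 90 degrees approximates the tilt of the signboard.
    print("estimated tilt (deg):", np.degrees(theta) - 90.0)
```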

In addition, the control unit 180 performs a step S26 of correcting the inclination of the road guide sign region.

The control unit 180 can obtain the tilted angle of the sign by using the trigonometric method.

Fig. 14 shows a sign angle calculation process in the tilt correction of the present invention.

As shown in FIG. 14, the control unit 180 draws a base line to obtain points 1 and 2, from which the values of X and Y are determined.

Then, the control unit 180 can calculate the angle using the following Equation (8).

$$\theta = \tan^{-1}\!\left(\frac{Y}{X}\right) \tag{8}$$

In Equation (8), $\theta$ denotes the inclined angle of the road sign.

The rotation about an arbitrary point is shown in Fig. 15 below.

FIG. 15 shows an example of rotation about an arbitrary point in the tilt correction of the present invention.

Further, the control unit 180 may rotate the image using the following Equations (9) and (10).

$$x' = x\cos\theta - y\sin\theta \tag{9}$$

$$y' = x\sin\theta + y\cos\theta \tag{10}$$

Here, x and y denote the coordinates of the two-dimensional image matrix, and the result shown in FIG. 16 can be obtained.

FIG. 16 compares the results before and after compensation in the tilt correction of the present invention.

The step of correcting the inclination of the road guide sign region straightens the sign; correcting the inclination accurately improves the accuracy of the subsequent template matching.
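A minimal sketch of this correction, assuming hypothetical base-line offsets X and Y and a hypothetical file name, is given below; the rotation of Equations (9) and (10) is applied through OpenCV's affine warp rather than pixel by pixel.

```python
import cv2
import numpy as np

X, Y = 120.0, 9.0                             # hypothetical offsets from FIG. 14
theta = np.degrees(np.arctan2(Y, X))          # Equation (8)

sign = cv2.imread("sign_region.png")          # hypothetical sign crop
h, w = sign.shape[:2]
# Rotation matrix about the image center, equivalent to Equations (9)-(10).
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta, 1.0)
corrected = cv2.warpAffine(sign, M, (w, h))
cv2.imwrite("sign_corrected.png", corrected)
```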

In addition, the controller 180 performs a step S27 of correcting the resolution of the road sign board area.

In the resolution correction step S27, the control unit 180 transforms the image into the frequency domain by means of the Fourier transform.

FIG. 17 shows a frequency spectrum image obtained by the Fourier transform.

FIG. 17(a) shows the original image, FIG. 17(b) the frequency spectrum, and FIG. 17(c) an example of the zone display.

The control unit 180 also shifts the spectrum as shown in FIG. 18, centering the low-frequency components to facilitate filtering.

FIG. 18(a) shows an example of the shifted spectrum, and FIG. 18(b) shows an example of the zone display.

Thereafter, the control unit 180 keeps the low-pass-filter (LPF) region and fills the region outside it with zeros, as shown in FIG. 19; this is the zero-padding method.

That is, FIG. 19 shows the filtered result for the resolution correction according to the present invention.

By padding the spectrum with zeros up to the desired resolution (image size) and then performing the inverse transform, the control unit 180 obtains an image of increased resolution.

FIG. 20 shows an example in which part of the characters in the sign is captured and enlarged after the resolution correction.

FIG. 20(a) shows the result before the resolution correction, and FIG. 20(b) shows the result after it.

Increasing the resolution of the road signboard in step S27 improves the accuracy of the template matching applied subsequently.
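The following sketch reproduces the frequency-domain zero-padding of step S27 with NumPy; the scale factor and function name are choices made for the example.

```python
import numpy as np

def upscale_fft(gray, factor=2):
    """Sketch of step S27: center the spectrum, zero-pad it to the target
    size, and inverse-transform to obtain a higher-resolution image."""
    h, w = gray.shape
    spectrum = np.fft.fftshift(np.fft.fft2(gray))    # shifted spectrum (FIG. 18)
    H, W = h * factor, w * factor
    padded = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    padded[top:top + h, left:left + w] = spectrum    # zero-padding (FIG. 19)
    # factor**2 compensates for the 1/(H*W) normalization of the inverse FFT
    upscaled = np.fft.ifft2(np.fft.ifftshift(padded)) * factor ** 2
    return np.abs(upscaled)
```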

Thereafter, the control unit 180 performs step S28 of applying the template matching technique.

In step S28, morphology is applied first; morphology is a technique for manipulating the shape of the connected elements in the image using a structural element.

Typical morphological operations are erosion, dilation (expansion), opening, and closing.

FIG. 21 shows the types of structural elements in connection with the morphology proposed by the present invention, and FIG. 22 shows an example of an existing image and a structural element.

In FIG. 22, (a) denotes an existing image, (b) denotes a result of erosion, and (c) denotes a structural element.

For example, erosion is performed by placing the structural element of FIG. 22(c) at point 1 of FIG. 22(a) and storing 0 whenever the underlying pixels do not all match the structural element (1,1,1).

In a word, it has an AND structure.

Dilation, in contrast, has an OR structure in which 1 is stored when any one of the pixels is 1.

In addition, opening is erosion followed by dilation, and closing is dilation followed by erosion.

The algorithm proposed by the present invention emphasizes the characters by applying the Top-hat and Black-hat operations of Equations (11), (12) and (13) below.

Equation (11) is the Top-hat, obtained by subtracting the opened image from the original image; Equation (12) is the Black-hat, obtained by subtracting the original image from the closed image; and Equation (13) is the new expression that emphasizes the characters by combining the two.

$$T = I - (I \circ S) \tag{11}$$

$$B = (I \bullet S) - I \tag{12}$$

$$I' = I + T - B \tag{13}$$

Here $I$ is the original image, $S$ the structural element, $\circ$ the opening operation, and $\bullet$ the closing operation.

Fig. 23 shows the result of highlighting the characters.

FIG. 23(a) shows the original, and FIG. 23(b) shows the result of highlighting the characters by applying the morphology proposed by the present invention; the two are compared with each other.

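The character emphasis can be sketched as follows, assuming Equation (13) combines the original with the Top-hat minus the Black-hat (a common choice; the exact combination in the original drawing is not reproduced here), and using a hypothetical file name and structural-element size.

```python
import cv2

gray = cv2.imread("sign_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))  # structural element

top_hat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)      # Equation (11)
black_hat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # Equation (12)

# Equation (13) as assumed above: brighten character strokes and suppress
# dark halos; cv2.add and cv2.subtract saturate at 255 and 0 respectively.
enhanced = cv2.subtract(cv2.add(gray, top_hat), black_hat)
```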

In step S28, a template matching process is performed through the controller 180 after the morphology.

Template matching is a method of finding an object by determining the similarity between the original image and a stored binary image (the target image).

The similarity is determined by computing the correlation coefficient, which is expressed by Equation (14).

$$r = \frac{\sum_{x,y}\left(I(x,y)-\bar{I}\right)\left(T(x,y)-\bar{T}\right)}{\sqrt{\sum_{x,y}\left(I(x,y)-\bar{I}\right)^{2}\,\sum_{x,y}\left(T(x,y)-\bar{T}\right)^{2}}} \tag{14}$$

Here $I$ is the search image, $T$ the template, and $\bar{I}$, $\bar{T}$ their mean values.

In Equation (14), the similarity takes a value between 0 and 1; the closer to 1, the higher the similarity.

FIG. 24 shows a specific example of determining a margin in template matching proposed by the present invention, and FIG. 25 is a diagram for explaining a template matching method proposed by the present invention.

As shown in FIG. 24, the control unit 180 determines the margins between characters, segments them as shown in FIG. 25, and compares the segments.

FIG. 26 is a table showing the similarity values of the distance characters in the road guide sign according to the method of acquiring the distance information in the guide sign for identifying the position of the vehicle proposed by the present invention.

Through step S28 of applying the template matching technique, the distance information in the road guide sign can be acquired; because the character region is segmented coarsely by the margin judgment rather than divided into syllable units, the calculation is fast.
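As an illustration, the sketch below evaluates the correlation coefficient of Equation (14) with OpenCV's normalized template matching; the file names are hypothetical, and TM_CCOEFF_NORMED is the library operation closest to Equation (14).

```python
import cv2

scene = cv2.imread("enhanced_sign.png", cv2.IMREAD_GRAYSCALE)      # hypothetical
template = cv2.imread("digit_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

# TM_CCOEFF_NORMED computes the normalized correlation coefficient;
# values near 1 indicate a close match, as described for Equation (14).
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best similarity:", max_val, "at location", max_loc)
```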

FIG. 27 is a table showing the detection rates of road signs detected under general conditions according to the method of acquiring the distance information in the guide signs for locating the vehicle proposed by the present invention.

FIG. 28 is a table summarizing the sign detection rates on cloudy and rainy days according to the method of acquiring the distance information in the guide signs for locating the vehicle proposed by the present invention.

Finally, the control unit 180 proceeds to step S29 of acquiring the distance information in the road guide sign.

Using the distance information acquired in step S29, the control unit 180 performs a step S3 of setting the route by comparing the received travel information with the acquired distance information.

When the above-described configuration and method of the present invention is applied, a system and method for acquiring distance information in a guide sign for locating a vehicle can be provided to a user.

More particularly, the present invention provides the user with a method and apparatus that acquire an input image through a camera, highlight colors using color detection, binarize the image, secure the road guide sign area by the labeling technique, correct the region (tilt and low resolution), and acquire the distance information through template matching.

As a result, the present invention can provide a user with a system for acquiring distance information between a vehicle and an intersection in real time using only an image, and can prevent an error due to a GPS communication failure.

In addition, the present invention can constitute a system with a single camera sensor, can be additionally applied to technologies necessary for autonomous travel, and has a wide range of applications.

The above-described embodiments of the present invention can be implemented by various means. For example, embodiments of the present invention may be implemented by hardware, firmware, software, or a combination thereof.

In the case of hardware implementation, the method according to embodiments of the present invention may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.

In the case of an implementation by firmware or software, the method according to embodiments of the present invention may be implemented in the form of a module, a procedure or a function for performing the functions or operations described above. The software code can be stored in a memory unit and driven by the processor. The memory unit may be located inside or outside the processor, and may exchange data with the processor by various well-known means.

The foregoing description of the preferred embodiments of the invention disclosed herein has been presented to enable any person skilled in the art to make and use the present invention. While the present invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. For example, those skilled in the art can utilize each of the configurations described in the above-described embodiments in a manner of mutually combining them. Accordingly, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

The present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the above description should not be construed in a limiting sense in all respects and should be considered illustrative. The scope of the present invention should be determined by rational interpretation of the appended claims, and all changes within the scope of equivalents of the present invention are included in the scope of the present invention. The present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In addition, claims that do not have an explicit citation in the claims may be combined to form an embodiment or be included in a new claim by amendment after the filing.

Claims (17)

1. A method of acquiring distance information in a guide sign, the method comprising: a first step of acquiring an image through a camera;
    A second step of obtaining a first image in which the weighted color is emphasized by applying a weight to a color of any one of red, green, and blue to apply to the image;
    A third step of converting the first image into a binarized second image by assigning a high value to a point higher than a preset threshold value and a low value to a point lower than the preset threshold value;
    A fourth step of recognizing a high value existing within the predetermined distance as one object when a point having a high value of the second image is a predetermined number or more within a predetermined distance;
    A fifth step of detecting a straight line included in the recognized object;
    A sixth step of calculating a tilted angle of the detected straight line with respect to a horizontal direction;
    A seventh step of rotating the recognized object by the inclined angle;
    An eighth step of correcting the resolution of the rotated object;
    A ninth step of highlighting a text in the object whose resolution is corrected through a morphology technique of manipulating the shape of a connecting element existing in the image using a structural element; And
    And a tenth step of extracting text in the object by comparing an image obtained through the camera with the object in which the text is highlighted.
  2. The method according to claim 1,
    Wherein in the second step, the weighted color is green.
  3. The method according to claim 1,
    Wherein in the third step, a high value is given to a point brighter than the preset threshold value, and a low value is given to a point darker than the preset threshold value.
  4. The method according to claim 1,
    Wherein in the fourth step, the recognized object is a plurality of objects.
  5. The method according to claim 1,
    Wherein in the fifth step, the straight line is detected using Hough Transformation.
  6. The method according to claim 1,
    In the eighth step,
    Obtaining a frequency spectrum by Fourier transforming an image related to the rotated object;
    Passing the obtained frequency spectrum through a low pass filter (LPF); And
    And restoring an image related to the rotated object by inverse Fourier transforming the frequency spectrum passed through the LPF.
  7. The method according to claim 1,
    The ninth step is a step of applying the structural element to an image related to the object whose resolution is corrected so that a point at which the high value is given is highlighted,
    Wherein the highlighted text is displayed more clearly in comparison with the seventh step.
  8. The method according to claim 1,
    In the tenth step,
    Separating a plurality of texts existing in an image obtained through the camera using a margin;
    Comparing the plurality of separated texts with an object in which the text is highlighted; And
    And extracting a text having a high degree of similarity with the separated plurality of texts from the object in which the text is highlighted.
  9. The method according to claim 1,
    Wherein the recognized object is a guide sign associated with the vehicle.
10. A method comprising the steps of: an autonomous driving vehicle receiving, from the outside, first information related to driving;
    The self-driving vehicle extracting text in the object according to any one of claims 1 to 9; And
    And determining the operation of the autonomous vehicle using the first information and the extracted text.
11. A distance information acquisition device comprising: a camera for acquiring an image; And
    A first image in which the weighted color is emphasized is obtained by applying a weight to any one color of red (RED), green (GREEN), and blue (BLUE) in the image,
    The first image is converted into a binarized second image by assigning a high value to a point higher than a preset threshold value and a low value to a point lower than the preset threshold value,
    Recognizes a high value existing within the predetermined distance as one object when a point having a high value of the second image is equal to or greater than a predetermined number within a certain distance,
    Detecting a straight line included in the recognized object,
    Calculating a tilted angle of the detected straight line based on the horizontal,
    Rotating the recognized object by the inclined angle,
    Corrects the resolution of the rotated object,
    A text in a corrected object is highlighted through a morphology technique which is a technique for manipulating the shape of a connecting element existing in the image using a structural element,
    And a control unit for extracting text in the object by comparing an image obtained through the camera with an object in which the text is highlighted.
12. The device of claim 11,
    The weighted color is GREEN,
    Wherein,
    Wherein a high value is given to a point brighter than the preset threshold value and a low value is given to a point darker than the preset threshold value.
13. The device of claim 11,
    Characterized in that the recognized object is a plurality of guide signs associated with the vehicle.
14. The device of claim 11,
    Wherein,
    The straight line is detected using Hough Transformation,
    And the resolution of the rotated object is corrected by obtaining a frequency spectrum by Fourier transforming an image related to the rotated object, passing the obtained frequency spectrum through a low pass filter (LPF), and restoring the image related to the rotated object by inverse Fourier transforming the frequency spectrum that has passed through the LPF.
15. The device of claim 11,
    Wherein,
    Wherein the structural element is applied to an image related to the object whose resolution is corrected so as to highlight the points at which the high value is given, thereby highlighting the text in the corrected object.
16. The device of claim 11,
    Wherein,
    A plurality of texts existing in an image obtained through the camera are separated using the margins, the separated plurality of texts are compared with the object in which the text is highlighted, and text having a high degree of similarity with the separated texts is extracted from the object, whereby the text in the object is extracted.
17. An autonomous driving vehicle comprising: a wireless communication unit for receiving, from the outside, first information related to driving;
    A distance information acquisition device in a guide sign for extracting text in the object according to any one of claims 11 to 16; And
    And a controller for determining the operation of the autonomous vehicle using the first information and the extracted text.
KR1020170011214A 2017-01-24 2017-01-24 An acquisition system of distance information in direction signs for vehicle location information and method KR101944607B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170011214A KR101944607B1 (en) 2017-01-24 2017-01-24 An acquisition system of distance information in direction signs for vehicle location information and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170011214A KR101944607B1 (en) 2017-01-24 2017-01-24 An acquisition system of distance information in direction signs for vehicle location information and method

Publications (2)

Publication Number Publication Date
KR20180087532A true KR20180087532A (en) 2018-08-02
KR101944607B1 KR101944607B1 (en) 2019-02-01

Family

ID=63251545

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170011214A KR101944607B1 (en) 2017-01-24 2017-01-24 An acquisition system of distance information in direction signs for vehicle location information and method

Country Status (1)

Country Link
KR (1) KR101944607B1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100403741B1 (en) 2001-09-07 2003-10-30 도원텔레콤 주식회사 Traffic information system using image detector and method thereof
KR20050013445A (en) * 2003-07-28 2005-02-04 엘지전자 주식회사 Position tracing system and method using digital video process technic
KR100839578B1 (en) 2006-12-08 2008-06-19 한국전자통신연구원 Vehicle Navigation Apparatus and Method of Controlling Image-based Route Guidance
JP2011024051A (en) * 2009-07-16 2011-02-03 Canon Inc Image processing apparatus and method
KR101409340B1 (en) * 2013-03-13 2014-06-20 숭실대학교산학협력단 Method for traffic sign recognition and system thereof


Also Published As

Publication number Publication date
KR101944607B1 (en) 2019-02-01

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right