WO2022203125A1 - Safety apparatus for travel in tunnels and on all roads - Google Patents


Info

Publication number
WO2022203125A1
Authority
WO
WIPO (PCT)
Prior art keywords
accident
vehicle
congestion
intensity
point
Prior art date
Application number
PCT/KR2021/008418
Other languages
French (fr)
Korean (ko)
Inventor
이학승
Original Assignee
주식회사 에스투에이치원
Priority date
Filing date
Publication date
Application filed by 주식회사 에스투에이치원
Publication of WO2022203125A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • E FIXED CONSTRUCTIONS
    • E01 CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01F ADDITIONAL WORK, SUCH AS EQUIPPING ROADS OR THE CONSTRUCTION OF PLATFORMS, HELICOPTER LANDING STAGES, SIGNS, SNOW FENCES, OR THE LIKE
    • E01F9/00 Arrangement of road signs or traffic signals; Arrangements for enforcing caution
    • E01F9/50 Road surface markings; Kerbs or road edgings, specially adapted for alerting road users
    • E01F9/576 Traffic lines
    • E01F9/582 Traffic lines illuminated
    • E FIXED CONSTRUCTIONS
    • E01 CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01F ADDITIONAL WORK, SUCH AS EQUIPPING ROADS OR THE CONSTRUCTION OF PLATFORMS, HELICOPTER LANDING STAGES, SIGNS, SNOW FENCES, OR THE LIKE
    • E01F9/00 Arrangement of road signs or traffic signals; Arrangements for enforcing caution
    • E01F9/60 Upright bodies, e.g. marker posts or bollards; Supports for road signs
    • E01F9/604 Upright bodies, e.g. marker posts or bollards; Supports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings
    • E01F9/615 Upright bodies, e.g. marker posts or bollards; Supports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings, illuminated
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B5/00 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
    • G08B5/22 Visible signalling systems using electric transmission; using electromagnetic transmission
    • G08B5/36 Visible signalling systems using visible light sources
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/16 Controlling the light source by timing means
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • The embodiments below relate to a technology for providing road safety devices installed in tunnels and on all roads.
  • Tunnels are formed through mountains or underground so that vehicles traveling on roads can pass quickly.
  • Such a tunnel has the advantage of minimizing driving time and traffic inconvenience by reducing the mileage of the vehicle, but has the disadvantage that the interior of the tunnel is dark and poorly ventilated, making the driving environment poor.
  • An object of the present invention is to provide a safety device for driving in tunnels and on all roads that controls the light of a second color to blink toward the rear of the vehicle traveling direction when it is detected that an accident has occurred.
  • According to one embodiment, the device includes: a safety case installed on the road median and guard rail portions at regular intervals in a certain section so as not to contact vehicles; a lighting unit installed in the safety case to irradiate light toward the front of the vehicle traveling direction; a warning unit installed in the safety case so that light flickers toward the rear of the vehicle traveling direction; a sensor unit that detects the movement of vehicles traveling on the road, detects from that movement whether there is congestion in road driving, and, when congestion is detected, detects whether the congestion was caused by an accident; and a control unit that controls the lighting unit to irradiate light when the sensor unit detects a vehicle driving on the road.
  • The sensor unit detects and analyzes the positions of vehicles and the distances between them through an image sensor, obtains image information of a plurality of vehicles located in a congested section of the road, classifies the obtained image information by vehicle to extract the image information of a vehicle to be analyzed, checks a target area image and vehicle state information from that image information, checks whether any vehicle has an appearance problem based on the vehicle state information, and generates a determination result as to whether an accident has occurred. According to the determination result, when it is determined that an accident has occurred, the sensor unit detects that congestion has occurred on the road due to the accident, and when it is determined that no accident has occurred, it detects that the congestion on the road is simple congestion.
  • A device configured as above is provided.
  • The sensor unit detects a congestion section in which the driving speed of vehicles is at or below a reference speed. The control unit calculates the length of the congestion section from the simple congestion occurrence point, which is the start of the section, to the simple congestion end point, which is the end of the section. With a first reference distance set from the congestion pattern for each time period over a predetermined period, the control unit checks whether the length of the congestion section is shorter than the first reference distance.
  • If the length of the congestion section is shorter than the first reference distance, the congestion section is determined to be a general congestion phenomenon, and the light of the first color is controlled to blink at a first intensity and a first blinking speed from the simple congestion occurrence point to a position four times the length of the congestion section to the rear. If the length of the congestion section is longer than the first reference distance, the control unit checks whether it is shorter than a second reference distance set longer than the first reference distance; if so, the congestion section is determined to be a serious congestion phenomenon, and the light of the first color is controlled to blink at a second intensity, stronger than the first intensity, and at the first blinking speed from the simple congestion occurrence point to a position three times the length of the congestion section to the rear.
  • If the length of the congestion section is longer than the second reference distance, the congestion section is determined to be a very serious congestion phenomenon, and the light of the first color is controlled to blink at the second intensity and at a second blinking speed, faster than the first blinking speed, up to a position twice the length of the congestion section to the rear.
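The graded congestion scheme above can be summarized as a small sketch. The function name and the concrete reference distances used in the example are illustrative assumptions, not values from the disclosure.

```python
def classify_congestion(section_len_m, first_ref_m, second_ref_m):
    """Grade a congestion section and derive the rearward blink zone.

    Returns (severity, blink_zone_m, intensity, blink_speed), following the
    scheme above: general -> 4x the section length at the first intensity and
    first blinking speed; serious -> 3x at the stronger second intensity;
    very serious -> 2x at the second intensity and faster second speed.
    """
    if section_len_m < first_ref_m:
        return ("general", 4 * section_len_m, "first", "first")
    if section_len_m < second_ref_m:
        return ("serious", 3 * section_len_m, "second", "first")
    return ("very_serious", 2 * section_len_m, "second", "second")
```

For example, with assumed reference distances of 50 m and 100 m, a 30 m section is graded as general congestion with a 120 m blink zone behind its start point.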
  • When it is detected that congestion has occurred on the road due to an accident, the sensor unit detects the number of vehicles involved in the accident. The control unit checks whether the number of accident vehicles is smaller than a preset first reference value; if so, it determines that the accident on the road is a small accident and controls the light of the second color to blink at the first intensity and the first blinking speed from the accident occurrence point to a first point, a position twice the first reference distance to the rear.
  • If the number of accident vehicles is greater than the first reference value, the accident on the road is determined to be a medium-to-large accident, and the control unit checks whether the number is smaller than a second reference value set higher than the first reference value. If it is smaller, the accident is determined to be a medium-scale accident, and the light of the second color is controlled to blink at the second intensity from the accident occurrence point to a second point, a position three times the first reference distance to the rear.
  • If the accident is determined to be a medium-scale accident and the blinking time of the second-color light from the accident occurrence point to the second point is confirmed to be longer than a reference time, the control unit controls the light of the second color to blink at the second intensity and the first blinking speed from the accident occurrence point to the first point, and at the first intensity and the first blinking speed from the first point to the second point. If the accident is determined to be a large-scale accident and the blinking time of the second-color light from the accident occurrence point to a third point is confirmed to be longer than the reference time, the control unit controls the light of the second color to blink at the second intensity and the second blinking speed from the accident occurrence point to the first point, and at the second intensity and the first blinking speed from the first point to the second point.
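The accident-severity grading above can likewise be sketched as a function of the number of accident vehicles. The reference values (3 and 5 vehicles), the 50 m first reference distance, and the 4x multiplier for the third point are assumptions for illustration; the disclosure does not fix these numbers.

```python
def classify_accident(num_vehicles, first_ref=3, second_ref=5,
                      first_ref_dist_m=50):
    """Grade an accident by vehicle count and derive how far behind the
    accident point the second-colour warning blinks.

    small  -> first point  (2x first reference distance), first intensity
    medium -> second point (3x), second intensity
    large  -> third point  (4x assumed), graded intensity near the accident
    """
    if num_vehicles < first_ref:
        return ("small", 2 * first_ref_dist_m, "first_intensity")
    if num_vehicles < second_ref:
        return ("medium", 3 * first_ref_dist_m, "second_intensity")
    return ("large", 4 * first_ref_dist_m, "second_intensity")
```

With the assumed defaults, a two-vehicle accident is small (blink zone 100 m), four vehicles is medium (150 m), and six vehicles is large (200 m).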
  • The controller acquires 3D data on the surface of a first vehicle through a lidar and acquires 2D data on that surface through a camera. It extracts first data obtained by merging the union region of the 2D data and the 3D data, encodes the first data to generate a first input signal, inputs the first input signal to a first artificial neural network, obtains a first output signal based on the result of the input, and generates a first classification result for the surface of the first vehicle based on the first output signal. It also analyzes the first data to detect cracks generated on the surface of the first vehicle, identifies the cracks by area, and separates normal regions, in which cracks are detected below a preset first set value, from damaged regions, in which cracks are detected at or above the first set value.
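The final step above, separating normal from damaged regions by a per-area crack threshold, reduces to a simple comparison. In this sketch the area names and the 5% set value are assumptions for illustration:

```python
def split_crack_regions(crack_ratio_by_area, first_set_value=0.05):
    """Split surface areas into 'normal' and 'damaged' regions by comparing
    each area's detected crack ratio against the first set value, as in the
    classification step above.
    """
    normal, damaged = [], []
    for area, ratio in crack_ratio_by_area.items():
        # At or above the set value -> damaged region; below -> normal region.
        (damaged if ratio >= first_set_value else normal).append(area)
    return normal, damaged
```

For example, `split_crack_regions({"hood": 0.01, "door": 0.12})` would place the hood in the normal regions and the door in the damaged regions.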
  • When simple congestion occurs, the light of the first color is controlled to flicker toward the rear of the vehicle traveling direction, in the tunnel and on all roads.
  • Accordingly, the driver can grasp the traffic situation ahead, preventing rear-end collision accidents.
  • FIG. 1 is a perspective view schematically showing a road driving safety device according to an embodiment of the present invention.
  • FIG. 2 is a view showing a state in which the safety case of the road driving safety device is installed on the road.
  • FIG. 3 is a diagram schematically showing the configuration of a road driving safety device.
  • FIG. 4 is a perspective view schematically showing a road driving safety device according to another embodiment of the present invention.
  • FIG. 5 is a view for explaining a method of processing an image of a vehicle located in a congested section of the road, in order to determine whether any vehicle has an appearance problem due to the occurrence of an accident, according to an embodiment of the present invention.
  • FIG. 6 is a view for explaining a learning method employed to process an image of a vehicle to be analyzed in order to determine whether there is a vehicle having an appearance problem due to an accident occurrence according to an embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a process of controlling blinking according to a length of a congestion section according to an embodiment.
  • FIG. 8 is a flowchart for explaining a process of controlling blinking according to the number of vehicles having an accident according to an exemplary embodiment.
  • FIG. 9 is a flowchart for explaining a process of controlling the blinking step by step according to a distance during a medium-scale accident according to an embodiment.
  • FIG. 10 is a flowchart for explaining a process of controlling the blinking step by step according to the distance during a large-scale accident according to an embodiment.
  • FIG. 11 is a flowchart illustrating a process of classifying a surface of a vehicle according to an exemplary embodiment.
  • FIG. 12 is a diagram for explaining an artificial neural network according to an embodiment.
  • FIG. 13 is a diagram for explaining a method of learning an artificial neural network according to an embodiment.
  • Although terms such as "first" or "second" may be used to describe various elements, these terms should be interpreted only as distinguishing one element from another.
  • For example, a first component may be termed a second component, and similarly, a second component may be termed a first component.
  • Spatially relative terms such as "below", "beneath", "lower", "above", and "upper" may be used to easily describe the relationship between one component and other components.
  • A spatially relative term should be understood to include different orientations of the components during use or operation in addition to the orientations shown in the drawings. For example, when a component shown in the drawings is turned over, a component described as "below" or "beneath" another component may be placed "above" the other component. Accordingly, the exemplary term "below" may include both the downward and upward directions. Components may also be oriented in other directions, and thus spatially relative terms may be interpreted according to orientation.
  • FIG. 1 is a perspective view schematically showing a road driving safety device according to an embodiment of the present invention
  • FIG. 2 is a view showing a state in which a safety case of the road driving safety device is installed on the road
  • FIG. 3 is a diagram schematically showing the configuration of the road driving safety device.
  • The road driving safety device 100 includes a safety case 10 installed on a road 11, a lighting unit 20 installed in the safety case 10 to irradiate light in the traveling direction of the vehicle, a control unit 30 that controls the supply of power to the lighting unit 20, and a sensor unit 40 installed in the safety case 10 to detect the movement of a vehicle or animal and operate the lighting unit 20.
  • the safety case 10 may be installed so as not to be in contact with the vehicle by being mounted on the median and guard rail portions, and may be installed in plurality at regular intervals in a predetermined section.
  • An installation space is formed inside the safety case 10. In this embodiment, the description takes as an example a total of three safety cases installed in a spaced-apart state: one in the median, one in the left guardrail portion, and one in the right guardrail portion.
  • However, the number of safety cases 10 is not necessarily limited to three; two or fewer, or more than three, safety cases 10 may be installed in the road median and guard rail portions.
  • The safety cases 10 may be installed at regular intervals along each of the median and guardrails, preferably in the median and guardrails in the tunnel and on all roads 11.
  • the lighting unit 20 is installed in the safety case 10 .
  • the lighting unit 20 may be installed in the safety case 10 to selectively irradiate the lighting forward in the vehicle driving direction.
  • The lighting unit 20 is installed to selectively irradiate light toward the front of the vehicle driving direction; in this embodiment, it is exemplarily described as LED lighting.
  • The lighting unit 20 is not necessarily limited to LED lighting, and any lighting by which the median and guard rail portions can easily be checked may be applied.
  • Since at least three safety cases 10 are installed in the median and guard rail portions of the road, at least three lighting units 20 may likewise be installed there. That is, at least three lighting units 20 may be installed at intervals of 50 m, spanning a section of approximately 150 m.
  • the light-emitting action of the lighting unit 20 may be selectively emitted by the sensing action of the sensor unit 40 to be described later.
  • the sensor unit 40 will be described in more detail.
  • The control unit 30 is connected to the lighting unit 20 and may control it so that light is emitted forward in the driving direction of the vehicle at night.
  • the control unit 30 may include a solar cell 31 installed on the side of the road 11 , and a connection unit 33 connecting the solar cell 31 and the lighting unit 20 .
  • the solar cell 31 may be installed on the side of the road to convert and store solar light energy into electrical energy during the daytime.
  • a plurality of solar cells 31 may be installed along the side of the road 11 to smoothly supply power to the lighting units 20 .
  • the solar cell 31 may be installed in a fixed state on the side of the road 11 or may be installed in a movable state so that the installed position can be changed.
  • the solar cell 31 and the lighting unit 20 may be connected through a connection unit 33 to supply power to the lighting unit 20 .
  • The connection part 33 may be installed to connect one solar cell 31 to a plurality of lighting units 20, or to connect a solar cell 31 to a single lighting unit 20. Thus, solar energy is stored in the form of electrical energy during the daytime through the solar cell 31, and power is supplied to the lighting unit 20 through the connection unit 33 at night, so that the lighting unit 20 can be made to emit light selectively.
  • the safety case 10 is provided with a sensor unit 40 for selectively operating the lighting unit 20 .
  • the sensor unit 40 may be installed in the safety case 10 to sense a movement of a vehicle or an animal from the side of the road 11 to the central portion of the road.
  • the sensor unit 40 is installed with an infrared sensor, etc., so that it is possible to check the movement of a vehicle or an animal in real time in front of the vehicle driving direction.
  • The sensor unit 40 may adjust its sensing direction so as to detect both lanes.
  • the sensor unit 40 may be installed as one in the safety case 10 , and may be installed in plurality while changing the installation direction. Accordingly, the sensor unit 40 can easily confirm that a predetermined object such as a vehicle or an animal moves from the front of the vehicle in the driving direction to the central portion of the road 11 through both sides of the road 11 .
  • the lighting unit 20 may be controlled to emit light.
  • The warning unit 210 may blink to notify the driver of a danger warning. Accordingly, the driver can more effectively sense danger through the warning unit 210 while checking the median section and the guard rail portion through the lighting unit 20.
  • the lighting unit 20 may be installed in the safety case 10 to irradiate light in the front of the vehicle traveling direction, and the warning unit 210 may be installed so that the light flickers in the rear of the vehicle traveling direction.
  • The sensor unit 40 may be installed separately on the road rather than in the safety case 10; for example, it may be installed on the road as an image sensor. It can detect the movement of driving vehicles, detect from that movement whether there is congestion in road driving, and, when congestion is detected, detect whether the congestion was caused by an accident.
  • When the sensor unit 40 detects a vehicle driving on the road, the control unit 30 controls the lighting unit 20 to emit light, and when the sensor unit 40 detects that there is congestion on the road, the warning unit 210 may be controlled so that lights of different colors flicker depending on the cause of the congestion.
  • When it is detected that simple congestion has occurred on the road, the controller 30 controls the warning unit 210 to blink the light of the first color, and when it is detected that an accident has occurred on the road, controls the warning unit 210 to blink the light of the second color.
  • Thus, when the light of the first color, for example blue, flickers in the warning unit 210, the vehicle driver can confirm in advance that simple congestion has occurred in the tunnel or on the road, and when the light of the second color, for example yellow, flickers in the warning unit 210, the driver can confirm in advance that an accident has occurred in the tunnel or on the road.
  • FIG. 4 is a view for explaining the occurrence of congestion in road driving according to an embodiment of the present invention.
  • The sensor unit 40 may detect, from the movement of vehicles traveling on the road, whether there is congestion in road driving, and when congestion is detected, may detect whether it is simple congestion caused by an increase in vehicles or congestion caused by a traffic accident.
  • Zone 401 may detect that a traffic jam has occurred due to an increase in vehicles.
  • Zone 402 may detect that congestion has occurred due to a traffic accident.
  • The controller 30 controls the light of the first color to flicker from the point where simple congestion occurs to a position a first distance to the rear, and controls the light of the second color to flicker from the point where an accident occurs to a position a second distance, set longer than the first distance, to the rear.
  • For example, the control unit 30 may control the blue light to flicker up to a position 50 m rearward of the simple congestion occurrence point, and the yellow light to flicker up to a position 100 m rearward of the accident occurrence point.
  • The sensor unit 40 may detect the driving speed of vehicles and detect the congestion section in which the driving speed is at or below the reference speed, and the control unit 30 may adjust the blinking of the warning unit 210 according to the length of the congestion section.
  • For example, when the congestion section is short, the controller 30 may control the light of the first color to blink at normal intensity, and when the congestion section reaches 20 m, control it to blink at a stronger intensity.
  • The sensor unit 40 may detect the number of vehicles involved in an accident based on the image information obtained through the image sensor, and the control unit 30 may adjust the blinking of the warning unit 210 according to that number.
  • For example, the control unit 30 may control the light of the second color to flicker at normal intensity when two vehicles are involved in the accident, and at a stronger intensity when four vehicles are involved.
  • The control unit 30 may set a third distance, an accident risk distance, according to the number of vehicles involved in the accident.
  • For example, the controller 30 may set the third distance to 10 m when two vehicles are involved, and to 20 m when four vehicles are involved.
  • While controlling the light of the second color to flicker from the accident occurrence point to a position a second distance to the rear, the control unit 30 may control the light flickering within the third distance of the accident point to change to a light of a third color.
  • For example, the control unit 30 controls the yellow light to blink up to a position 100 m rearward of the accident occurrence point, while within 30 m of the accident occurrence point a red light blinks instead of yellow; that is, as the accident point is approached, the light changes to a different color and blinks.
  • The controller 30 may also control the blinking speed of the third-color light to become faster as the accident occurrence point is approached.
  • For example, the control unit 30 controls the light of the third color to blink once per second at positions 20 m to 30 m rearward of the accident point, twice per second at positions 10 m to 20 m rearward, and three times per second at positions within 10 m of the accident point.
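The example blink rates above map directly to a step function of distance behind the accident point. In this sketch it is assumed, for illustration, that the third-color light does not blink beyond 30 m:

```python
def third_color_blink_hz(distance_m):
    """Blink rate (times per second) of the third-colour light versus distance
    behind the accident point, per the example figures above."""
    if distance_m <= 10:
        return 3  # within 10 m: three blinks per second
    if distance_m <= 20:
        return 2  # 10-20 m: two blinks per second
    if distance_m <= 30:
        return 1  # 20-30 m: one blink per second
    return 0      # beyond the third distance (assumed): no third-colour blink
```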
  • The sensor unit 40 may detect the lane in which an accident has occurred based on the image information obtained through the image sensor, and the control unit 30 may control the lights flickering on both sides of the lane in which the accident occurred to flicker more strongly than the lights flickering on both sides of the lanes in which no accident occurred.
  • For example, when an accident occurs in the second lane, the control unit 30 may control the yellow lights flickering on both sides of the second lane to flash at a strong intensity, and the yellow lights flickering on the left side of the first lane and the right side of the third lane to flicker at normal intensity.
  • The sensor unit 40 may detect and analyze the positions of vehicles and the distances between them through the image sensor, obtain image information of a plurality of vehicles located in a congested section of the road, check the target area image and vehicle state information for each of the plurality of vehicles from the image information, and check whether any vehicle has an appearance problem based on the vehicle state information, thereby generating a determination result as to whether an accident has occurred.
  • For example, the sensor unit 40 obtains image information of a first vehicle and a second vehicle located in a congested section of the road, checks the vehicle state information of each, and checks whether either vehicle has a problem in its exterior. When it is confirmed that no vehicle's exterior has a problem, a determination result that no accident has occurred may be generated, and when it is confirmed that at least one of the first and second vehicles has a problem in its appearance, a determination result that an accident has occurred may be generated.
  • When it is determined that an accident has occurred, the sensor unit 40 detects that congestion has occurred on the road due to the accident, and when it is determined that no accident has occurred, it detects that the congestion on the road is simple congestion.
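The accident/simple-congestion decision above reduces to an any-vehicle-damaged check. A minimal sketch, assuming one boolean appearance-check result per vehicle in the congested section:

```python
def congestion_cause(exterior_problems):
    """Return the detected congestion cause given per-vehicle appearance
    checks: any exterior problem -> accident, none -> simple congestion."""
    return "accident" if any(exterior_problems) else "simple congestion"
```

For instance, checks of `[False, True]` for a first and second vehicle yield "accident", while `[False, False]` yields "simple congestion".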
  • FIG. 5 is a view for explaining a method of processing an image of a vehicle located in a congested section of the road, in order to determine whether any vehicle has an appearance problem due to the occurrence of an accident, according to an embodiment of the present invention.
  • The sensor unit 40 may include an image sensor installed in the tunnel and above the road, acquire image information of a plurality of vehicles located in a congested section through the image sensor, classify the obtained image information by vehicle to extract the image information of the vehicle to be analyzed, check the target area image and vehicle state information from that image information, check whether any vehicle has an appearance problem based on the vehicle state information, and generate a determination result as to whether an accident has occurred.
  • the sensor unit 40 may analyze the target area image of the analysis target vehicle and extract vehicle state information included in the target area.
  • the sensor unit 40 may determine a target area of the vehicle to be analyzed to obtain an image 501 of the target area.
  • the sensor unit 40 may identify an effective vehicle boundary based on color information and texture information in the target area image 501 .
  • the sensor unit 40 may determine whether it is a vehicle for each area based on a color and a texture.
  • the sensor unit 40 may determine whether a vehicle is in each area by sliding a filter of a predefined unit, and the filter may be designed to output a result according to a color and a texture.
  • the sensor unit 40 may extract an effective vehicle area 502 separated by an effective vehicle boundary from the target area image 501 .
  • the sensor unit 40 may extract appearance characteristics of particle objects in the effective vehicle area 502 .
  • the sensor unit 40 may identify a foreign object 503 among the particle objects based on the extracted exterior features, and remove the foreign object 503 from the effective vehicle area 502.
  • the sensor unit 40 may identify any object outside a predefined range based on the exterior, color, and texture information of the vehicle body and glass distributed within the effective vehicle area 502, and may determine the identified object to be a foreign object 503.
  • the sensor unit 40 may extract size features 505 to 507 of particle objects in the effective vehicle area 504 from which the foreign object is removed.
  • the sensor unit 40 may identify particle objects within the effective vehicle area 504 and extract size features 505 to 507 from among information describing the identified particle objects for each size.
  • the sensor unit 40 may extract and classify the size features 505 to 507 by size according to a range serving as a reference for classifying the vehicle body and the glass.
  • the sensor unit 40 may classify the particle objects into one of a body object and a glass object, respectively, based on the extracted size features 505 to 507 .
  • the sensor unit 40 may generate a first ratio in the effective vehicle area 504 of at least one particle object classified as a vehicle body object.
  • the first ratio may correspond to a body ratio within the effective vehicle area 504 .
  • the sensor unit 40 may generate vehicle state information in which the characteristics of the vehicle body are reflected by using the first ratio.
  • the sensor unit 40 may generate a second ratio in the effective vehicle area 504 of at least one particle object classified as a glass object.
  • the second proportion may correspond to the proportion of glass in the effective vehicle area 504 .
  • the sensor unit 40 may generate vehicle state information in which the characteristics of glass are reflected by using the second ratio.
  • the sensor unit 40 may generate a third ratio of the foreign object 503 within the effective vehicle area 502 .
  • the third ratio may mean a ratio occupied by foreign substances in the effective vehicle area 502 .
  • the sensor unit 40 may extract a color feature within the effective vehicle area 504 .
  • the sensor unit 40 may generate car color information based on color characteristics.
  • the sensor unit 40 may generate basic vehicle information based on the first ratio, the second ratio, the third ratio, and the vehicle color information.
  • the sensor unit 40 may generate basic vehicle information based on the first ratio, the second ratio, the third ratio, and the car color information according to the image processing of the effective vehicle area 504 .
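The ratio computations above can be sketched as follows; the per-pixel label names, the returned dictionary shape, and the toy label grid are illustrative assumptions rather than the patented implementation:

```python
# Hypothetical sketch: compute the first (body), second (glass), and third
# (foreign matter) ratios from a per-pixel labeling of the effective
# vehicle area. Labels "body"/"glass"/"foreign" are assumed for illustration.

def basic_vehicle_info(label_grid, car_color):
    """Return the three area ratios plus the car color, mirroring the
    basic vehicle information described for the sensor unit 40."""
    flat = [label for row in label_grid for label in row]
    total = len(flat)
    return {
        "first_ratio": flat.count("body") / total,     # body share of area
        "second_ratio": flat.count("glass") / total,   # glass share of area
        "third_ratio": flat.count("foreign") / total,  # foreign-matter share
        "car_color": car_color,
    }

info = basic_vehicle_info(
    [["body", "body", "glass"],
     ["body", "foreign", "glass"]],
    car_color="white",
)
```

The three ratios and the color are exactly the inputs the text combines into the basic vehicle information.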
  • the sensor unit 40 may identify the target area image 501 based on its location, query the environment information of the tunnel in which the vehicle is located, and generate auxiliary vehicle information reflecting the current environmental state (illuminance, etc.) in the tunnel.
  • the sensor unit 40 may generate a feature vector 510 corresponding to the effective vehicle area 502 based on the basic vehicle information and the auxiliary vehicle information.
  • the sensor unit 40 may obtain the output information 512 by applying the feature vector 510 to the pre-trained neural network 511 .
  • the neural network 511 may be trained to estimate vehicle state information from an input composed of the basic vehicle information generated from features extracted from the vehicle image and the auxiliary vehicle information that varies with the environmental conditions of the tunnel in which the image was taken.
  • the sensor unit 40 may generate vehicle state information corresponding to the effective vehicle area 502 based on the output information 512 .
  • the output information 512 may include a matching degree for each type of vehicle scratch, or may be designed as variables describing the degree to which the vehicle is deformed.
  • the output information 512 may be designed discretely according to the classification of the vehicle. For example, the output nodes of the output layer of the neural network 511 may each correspond to one vehicle classification, and a probability value may be output for each classification.
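As a hedged sketch of this inference step, the fragment below builds a feature vector from basic and auxiliary vehicle information and feeds it through a single linear layer with a softmax output, standing in for the pre-trained neural network 511; the weights, the feature ordering, and the illuminance scaling are placeholder assumptions:

```python
import math

# Illustrative stand-in for neural network 511: one linear layer plus a
# softmax so that each output node yields a probability per classification.

def feature_vector(basic, illuminance):
    # basic vehicle info (three ratios) plus an auxiliary environment feature
    return [basic["first_ratio"], basic["second_ratio"],
            basic["third_ratio"], illuminance / 100.0]

def classify(vec, weights):
    # linear layer followed by softmax; weights are placeholders, not trained
    logits = [sum(w * x for w, x in zip(row, vec)) for row in weights]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

vec = feature_vector(
    {"first_ratio": 0.6, "second_ratio": 0.3, "third_ratio": 0.1}, 40)
probs = classify(vec, weights=[[1.0, 0.0, -1.0, 0.2],
                               [0.0, 1.0, 1.0, -0.2]])
```

Each entry of `probs` plays the role of one output node's probability value.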
  • the learning process of the neural network 511 will be described with reference to FIG. 6.
  • FIG. 6 is a view for explaining a learning method employed to process an image of a vehicle to be analyzed in order to determine whether there is a vehicle having an appearance problem due to an accident occurrence according to an embodiment of the present invention.
  • the learning apparatus may train the neural network 604 for estimating information required to obtain vehicle state information from the target area image.
  • the learning device may be a separate entity different from the sensor unit 40 , but is not limited thereto.
  • the learning apparatus may acquire labeled vehicle images 601 .
  • the learning apparatus may obtain pre-labeled information on each vehicle image for each vehicle type, and the vehicle image may be labeled according to a pre-classified vehicle type.
  • the learning apparatus may generate the basic vehicle information 602 based on the first ratio corresponding to the vehicle body object, the second ratio corresponding to the glass object, the third ratio corresponding to the foreign object, and the vehicle color information, derived from at least one of the color information, the texture information, and the exterior and size characteristics of the particle objects in the labeled vehicle images 601.
  • the learning apparatus may generate the feature vectors 603 of the vehicle to be analyzed based on the basic vehicle information 602 .
  • Auxiliary vehicle information may be employed in generating the feature vectors 603 of the vehicle to be analyzed.
  • the learning apparatus may obtain the output information 605 by applying the feature vectors 603 to the neural network 604 .
  • the learning apparatus may train the neural network 604 based on the output information 605 and the labels 606 .
  • the learning apparatus may train the neural network 604 by calculating errors corresponding to the output information 605 and optimizing the connection relationships of nodes in the neural network 604 to minimize the errors.
  • the sensor unit 40 may acquire vehicle state information from the target area image by using the neural network 604 that has been trained.
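The error-minimizing training loop described above can be sketched with a single sigmoid neuron standing in for neural network 604; the toy feature vectors, labels, learning rate, and epoch count are all illustrative assumptions:

```python
import math

# Hedged sketch of the described training: compute the error between the
# network output and the label, then adjust the connection weights to
# reduce that error (gradient descent on a single sigmoid neuron).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for vec, label in zip(samples, labels):
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, vec)))
            err = out - label                                  # error vs. label
            w = [wi - lr * err * xi for wi, xi in zip(w, vec)]  # minimize error
    return w

# toy labeled "vehicle images" already reduced to feature vectors
w = train([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]],
          [1, 1, 0, 0])
pred = sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 0.0])))
```

After training, the learned weights separate the two labeled groups, echoing how the optimized connection relationships let the completed network 604 estimate vehicle state information.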
  • FIG. 7 is a flowchart illustrating a process of controlling blinking according to a length of a congestion section according to an embodiment.
  • step S701, when it is detected that there is congestion on the road, the sensor unit 40 may check, based on the vehicle state information, whether any vehicle has an exterior problem in order to identify the cause of the congestion, and may generate a determination result as to whether an accident has occurred. When it is determined that no accident has occurred, the sensor unit 40 may detect that the congestion on the road is due to simple traffic congestion.
  • the sensor unit 40 may detect a congestion section in which the driving speed of the vehicle is equal to or less than the reference speed on the road.
  • step S703, the control unit 30 may identify the simple congestion occurrence point, which is the starting point of the congestion section, and the simple congestion end point, which is the end point of the congestion section, and may calculate the congestion section length from the distance between these two points.
  • the controller 30 may determine whether the length of the congestion section is shorter than the first reference distance.
  • the first reference distance may be set through a congestion pattern for each time period for a predetermined time.
  • the control unit 30 may check the congestion pattern for each time period over a month; for example, when the current time is 7 o'clock, the control unit 30 may set the first reference distance through the 7 o'clock congestion pattern.
  • the controller 30 may set the first reference distance to a longer value as the probability of occurrence of congestion increases.
  • for example, if checking the 7 o'clock congestion pattern shows that the probability of congestion is 80%, the control unit 30 may set the first reference distance to 20 m; if the current time is 8 o'clock and checking the 8 o'clock congestion pattern shows that the probability of congestion is 90%, the control unit 30 may set the first reference distance to 30 m.
  • if it is determined in step S704 that the length of the congestion section is shorter than the first reference distance, in step S705 the controller 30 may determine the congestion section to be a general congestion phenomenon.
  • step S706, the control unit 30 may control the light of the first color to blink at the first blinking speed with the first intensity from the simple congestion occurrence point, which is the starting point of the congestion section, to a position four times the congestion section length to the rear.
  • for example, the control unit 30 may control the light of the first color to blink at the first blinking speed with the first intensity from the simple congestion occurrence point to a position 120 m to the rear.
  • step S707 the controller 30 may determine the congestion section as a special congestion phenomenon.
  • step S708 the controller 30 may determine whether the length of the congestion section is shorter than the second reference distance.
  • the second reference distance may be set to a value longer than the first reference distance.
  • step S709 the controller 30 may determine the congestion section as a serious congestion phenomenon.
  • step S710, the control unit 30 may control the light of the first color to blink at the first blinking speed with the second intensity from the simple congestion occurrence point, which is the starting point of the congestion section, to a position three times the congestion section length to the rear.
  • the second intensity may be set to a light intensity stronger than the first intensity; for example, when the first intensity is 10 lx, the second intensity may be set to 20 lx.
  • for example, the control unit 30 may control the light of the first color to blink at the first blinking speed with the second intensity from the simple congestion occurrence point to a position 240 m to the rear.
  • step S711 the controller 30 may determine the congestion section as a very serious congestion phenomenon.
  • step S712, the control unit 30 may control the light of the first color to blink at the second blinking speed with the second intensity from the simple congestion occurrence point, which is the starting point of the congestion section, to a position twice the congestion section length to the rear.
  • the second flashing speed may be set to be faster than the first flashing speed; for example, when the first flashing speed is once per second, the second flashing speed may be set to twice per second.
  • for example, the control unit 30 may control the light of the first color to blink at the second blinking speed with the second intensity from the simple congestion occurrence point to a position 260 m to the rear.
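Steps S704 to S712 can be summarized in a small decision function. The second reference distance (60 m) and the returned tuple shape are assumptions made for illustration, while the 20 m first reference distance, the 10 lx/20 lx intensities, and the once/twice-per-second blinking speeds come from the examples in the text:

```python
# Sketch of the FIG. 7 decision flow over the congestion section length.

def congestion_blink_plan(section_len_m, d1=20, d2=60):
    """Return (severity, intensity_lx, blink_speed_hz, rear_extent_m)."""
    if section_len_m < d1:                        # S705/S706: general congestion
        return ("general", 10, 1, 4 * section_len_m)
    if section_len_m < d2:                        # S709/S710: serious congestion
        return ("serious", 20, 1, 3 * section_len_m)
    return ("very serious", 20, 2, 2 * section_len_m)  # S711/S712

plan = congestion_blink_plan(30)
```

Note how the multiplier on the rear extent shrinks (4x, 3x, 2x) as the section itself grows, while intensity and blinking speed escalate.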
  • FIG. 8 is a flowchart for explaining a process of controlling blinking according to the number of vehicles having an accident according to an exemplary embodiment.
  • step S801, when the sensor unit 40 detects that there is congestion on the road, it may check, based on the vehicle state information, whether any vehicle has an exterior problem in order to identify the cause of the congestion, and may generate a determination result as to whether an accident has occurred. When it is determined that an accident has occurred, the sensor unit 40 may detect that the congestion on the road is due to the occurrence of the accident.
  • step S802, the sensor unit 40 may detect the number of vehicles involved in the accident.
  • step S803 the controller 30 may determine whether the number of accident-causing vehicles is smaller than a preset first reference value.
  • the first reference value may be set differently depending on the embodiment.
  • if it is determined in step S803 that the number of vehicles involved in the accident is smaller than the first reference value, in step S804 the controller 30 may determine the accident occurring on the road to be a small accident.
  • step S805, the control unit 30 may control the light of the second color to blink at the first blinking speed with the first intensity from the accident occurrence point to the first point, which is twice the first reference distance to the rear.
  • for example, when the control unit 30 determines that the accident occurring on the road is a small accident, it may set a position 100 m to the rear of the accident occurrence point as the first point and control the light of the second color to blink at the first blinking speed with the first intensity from the accident occurrence point to the first point.
  • if it is confirmed in step S803 that the number of vehicles involved in the accident is greater than the first reference value, in step S806 the controller 30 may determine the accident occurring on the road to be a serious accident.
  • step S807, the control unit 30 may determine whether the number of vehicles involved in the accident is smaller than a preset second reference value.
  • the second reference value may be set to be higher than the first reference value.
  • step S808 the controller 30 may determine the accident occurring on the road as a medium-scale accident.
  • step S809, the controller 30 may control the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point, which is three times the first reference distance to the rear.
  • for example, when the control unit 30 determines that the accident occurring on the road is a medium-scale accident, it may set a position 150 m to the rear of the accident occurrence point as the second point and control the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point.
  • step S810 the controller 30 may determine that the accident occurring on the road is a large-scale accident.
  • step S811, the control unit 30 may control the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point, which is four times the first reference distance to the rear.
  • for example, when the control unit 30 determines that the accident occurring on the road is a large-scale accident, it may set a position 200 m to the rear of the accident occurrence point as the third point and control the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point.
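A hedged sketch of the FIG. 8 flow: grade the accident by the number of vehicles involved and select the blink parameters. The reference values (2 and 4 vehicles) and the 50 m first reference distance are assumptions, chosen so that the 100 m/150 m/200 m example points in the text fall out of the 2x/3x/4x multipliers:

```python
# Sketch of steps S803-S811: accident scale by number of involved vehicles.

def accident_blink_plan(n_vehicles, ref1=2, ref2=4, ref_dist_m=50):
    """Return (scale, intensity_lx, blink_speed_hz, rear_extent_m)."""
    if n_vehicles < ref1:                          # S804/S805: small accident
        return ("small", 10, 1, 2 * ref_dist_m)
    if n_vehicles < ref2:                          # S808/S809: medium-scale
        return ("medium", 20, 1, 3 * ref_dist_m)
    return ("large", 20, 2, 4 * ref_dist_m)        # S810/S811: large-scale

plan = accident_blink_plan(3)
```

Unlike the congestion flow, here both the warning extent and the blink parameters escalate together with severity.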
  • FIG. 9 is a flowchart for explaining a process of controlling the blinking step by step according to a distance during a medium-scale accident according to an embodiment.
  • step S901 when it is confirmed that the number of vehicles having an accident is greater than a first reference value and smaller than a second reference value, the controller 30 may determine the accident occurring on the road as a medium-scale accident.
  • step S902 the controller 30 may control the light of the second color to blink through the first blinking speed with the intensity of the second intensity from the accident occurrence point to the second point.
  • step S903 the control unit 30 may check whether the accident management is completed after a predetermined time has elapsed. At this time, when it is confirmed that the congestion caused by the accident has been resolved, the control unit 30 may confirm that the accident management has been completed.
  • when it is confirmed that the accident management has been completed, the control unit 30 may control the light of the second color blinking from the accident occurrence point to the second point to stop blinking.
  • step S904, the control unit 30 may check the blinking time during which the light of the second color blinking from the accident occurrence point to the second point has maintained its blinking state, and may check whether the blinking time is longer than the preset reference time.
  • the reference time may be set differently depending on the embodiment.
  • if it is confirmed in step S904 that the blinking time is shorter than the reference time, the process returns to step S902, and the control unit 30 may maintain the blinking state by controlling the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point.
  • step S905, when the blinking time is longer than the reference time, the control unit 30 may control the light of the second color from the accident occurrence point to the first point to blink at the first blinking speed with the second intensity, and control the light of the second color from the first point to the second point to blink at the first blinking speed with the first intensity.
  • step S906 the control unit 30 may check whether the accident management is completed after a predetermined time has elapsed.
  • when it is confirmed that the accident management has been completed, the controller 30 may control the light of the second color blinking from the accident occurrence point to the second point to stop blinking.
  • if it is confirmed in step S906 that the accident management is not completed, the process returns to step S905, and the control unit 30 may maintain the blinking state by controlling the light of the second color from the accident occurrence point to the first point to blink at the first blinking speed with the second intensity, and controlling the light of the second color from the first point to the second point to blink at the first blinking speed with the first intensity.
  • FIG. 10 is a flowchart for explaining a process of controlling the blinking step by step according to the distance during a large-scale accident according to an embodiment.
  • step S1001 when it is confirmed that the number of vehicles having an accident is greater than a second reference value, the controller 30 may determine an accident occurring on the road as a large-scale accident.
  • step S1002 the controller 30 may control the light of the second color to blink through the second blinking speed with the intensity of the second intensity from the accident occurrence point to the third point.
  • step S1003 the control unit 30 may check whether the accident management is completed after a predetermined time has elapsed. At this time, when it is confirmed that the congestion caused by the accident has been resolved, the control unit 30 may confirm that the accident management has been completed.
  • when it is confirmed that the accident management has been completed, the controller 30 may control the light of the second color blinking from the accident occurrence point to the third point to stop blinking.
  • step S1004, the control unit 30 may check the blinking time during which the light of the second color blinking from the accident occurrence point to the third point has maintained its blinking state, and may check whether the blinking time is longer than the preset reference time.
  • the reference time may be set differently depending on the embodiment.
  • if it is confirmed in step S1004 that the blinking time is shorter than the reference time, the process returns to step S1002, and the control unit 30 may maintain the blinking state by controlling the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point.
  • step S1005, when the blinking time is longer than the reference time, the control unit 30 may control the light of the second color from the accident occurrence point to the first point to blink at the second blinking speed with the second intensity, control the light of the second color from the first point to the second point to blink at the first blinking speed with the second intensity, and control the light of the second color from the second point to the third point to blink at the first blinking speed with the first intensity.
  • step S1006 the control unit 30 may check whether the accident management is completed after a predetermined time has elapsed.
  • when it is confirmed that the accident management has been completed, the controller 30 may control the light of the second color blinking from the accident occurrence point to the third point to stop blinking.
  • if it is confirmed in step S1006 that the accident management is not completed, the process returns to step S1005, and the control unit 30 may maintain the blinking state by controlling the light of the second color from the accident occurrence point to the first point to blink at the second blinking speed with the second intensity, controlling the light of the second color from the first point to the second point to blink at the first blinking speed with the second intensity, and controlling the light of the second color from the second point to the third point to blink at the first blinking speed with the first intensity.
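The staged de-escalation in FIGS. 9 and 10 can be sketched as a zone table keyed on the accident scale and the elapsed blinking time; the zone names, the dictionary shape, and the 600-second reference time are illustrative assumptions:

```python
# Sketch of the distance-staged blinking: once the blinking has lasted
# longer than the reference time, zones farther from the accident point
# drop to a weaker intensity and a slower speed.

def staged_zones(scale, blink_time_s, reference_time_s=600):
    strong, weak = 20, 10       # second / first intensity (lx, from the text)
    fast, slow = 2, 1           # second / first blinking speed (Hz)
    if scale == "medium":
        if blink_time_s <= reference_time_s:        # S902: one uniform zone
            return {"origin-to-second": (strong, slow)}
        return {"origin-to-first": (strong, slow),  # S905: staged zones
                "first-to-second": (weak, slow)}
    if scale == "large":
        if blink_time_s <= reference_time_s:        # S1002: one uniform zone
            return {"origin-to-third": (strong, fast)}
        return {"origin-to-first": (strong, fast),  # S1005: staged zones
                "first-to-second": (strong, slow),
                "second-to-third": (weak, slow)}
    raise ValueError("unknown accident scale")

zones = staged_zones("large", blink_time_s=900)
```

The effect is a warning gradient: drivers far behind see a gentle blink, and the signal intensifies as they approach the accident point.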
  • FIG. 11 is a flowchart illustrating a process of classifying a surface of a vehicle according to an exemplary embodiment.
  • the controller 30 may identify a first vehicle, which is any one of a plurality of vehicles located in a section with congestion on the road, as an analysis target vehicle.
  • step S1101, the controller 30 may acquire 3D data on the surface of the first vehicle through the lidar.
  • the 3D data is a 3D image of the surface of the first vehicle.
  • the control unit 30 may be connected to the device equipped with the lidar via a wired or wireless connection.
  • the controller 30 may acquire 2D data on the surface of the first vehicle through the camera.
  • the 2D data is a 2D image of the surface of the first vehicle.
  • the controller 30 may be connected to the device equipped with the camera via a wired or wireless connection.
  • step S1103, the controller 30 may separate the union region of the 2D data and the 3D data and extract the first data obtained by merging them.
  • the controller 30 may compare the 2D data and the 3D data to identify the overlapping union region, separate the union region from each of the 2D data and the 3D data, and merge the separated union regions to extract the first data.
  • the first data may consist of 4 channels: 3 channels may be 2D data representing RGB values, and 1 channel may be data representing 3D depth values.
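A minimal sketch of assembling the 4-channel first data from the camera's RGB pixels and the lidar's depth values over the shared region; the flat per-pixel list representation is an assumption made for brevity:

```python
# Illustrative sketch: merge three RGB channels (2D data) with one depth
# channel (3D data) into per-pixel 4-channel tuples over the union region.

def merge_rgb_depth(rgb_pixels, depth_pixels):
    """rgb_pixels: list of (r, g, b) tuples; depth_pixels: list of depths.
    Returns per-pixel 4-channel tuples (r, g, b, depth)."""
    if len(rgb_pixels) != len(depth_pixels):
        raise ValueError("2D and 3D data must cover the same region")
    return [(r, g, b, d)
            for (r, g, b), d in zip(rgb_pixels, depth_pixels)]

first_data = merge_rgb_depth([(255, 0, 0), (0, 0, 255)], [1.5, 2.0])
```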
  • the controller 30 may generate a first input signal by encoding the first data.
  • the controller 30 may generate the first input signal by encoding the pixels of the first data with color information.
  • the color information may include, but is not limited to, RGB color information, brightness information, saturation information, and depth information.
  • the controller 30 may convert the color information into a numerical value, and may encode the first data in the form of a data sheet including the value.
  • the controller 30 may input the first input signal to the first artificial neural network pre-trained in the road driving safety device 100.
  • the first artificial neural network is composed of a feature extraction neural network and a classification neural network, and the feature extraction neural network sequentially stacks a convolutional layer and a pooling layer on an input signal.
  • the convolution layer includes a convolution operation, a convolution filter, and an activation function. The size of the convolution filter is adjusted according to the matrix size of the target input, but a 9x9 matrix is generally used.
  • the activation function generally uses, but is not limited to, a ReLU function, a sigmoid function, and a tanh function.
  • the pooling layer is a layer that reduces the size of the input matrix, and uses a method of extracting representative values by tying pixels in a specific area.
  • the average value or the maximum value is often used for the calculation of the pooling layer, but is not limited thereto.
  • the operation is performed using a square matrix, usually a 9x9 matrix.
  • the convolutional layer and the pooling layer are repeated alternately until the corresponding input becomes small enough while maintaining the difference.
  • the classification neural network has a hidden layer and an output layer.
  • the classification neural network of the first artificial neural network, which classifies the roughness level of the surface of the first vehicle, may consist of five or fewer hidden layers and include 50 or fewer hidden layer nodes in total.
  • the activation function of the hidden layer uses a ReLU function, a sigmoid function, and a tanh function, but is not limited thereto.
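The feature-extraction blocks described above (convolution, activation, pooling) can be sketched in miniature; a 2x2 kernel and pooling window are used here purely so the example stays small, whereas the text describes 9x9 matrices on full-resolution images:

```python
# Minimal sketch of the described feature-extraction stack: one unpadded
# 2D convolution, a ReLU activation, and max pooling over fixed windows.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(mat):
    return [[max(0.0, v) for v in row] for row in mat]

def max_pool(mat, size=2):
    # tie pixels in each size x size window together and keep the maximum
    return [[max(mat[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(mat[0]) - size + 1, size)]
            for i in range(0, len(mat) - size + 1, size)]

image = [[1, 2, 0, 1],
         [0, 1, 3, 2],
         [2, 0, 1, 0],
         [1, 1, 0, 2]]
feat_map = relu(conv2d(image, [[1, 0], [0, -1]]))
pooled = max_pool(feat_map)
```

Alternating these two layers, as the text describes, progressively shrinks the input while preserving the distinguishing differences.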
  • a detailed description of the first artificial neural network will be described later with reference to FIG. 12 .
  • the controller 30 may obtain a first output signal based on a result of the input of the first artificial neural network.
  • the controller 30 may generate a first classification result for the surface of the first vehicle based on the first output signal.
  • the first classification result may include information on which stage the surface of the first vehicle is classified.
  • for example, as a result of checking the output value of the first output signal, the control unit 30 may generate a first classification result indicating that the surface of the first vehicle corresponds to stage 1 when the output value is 1, and a first classification result indicating that the surface corresponds to stage 2 when the output value is 2. The higher the stage, the rougher the surface of the first vehicle.
  • step S1105 the controller 30 may analyze the first data to detect a crack generated on the surface of the first vehicle.
  • when detecting cracks generated on the surface of the first vehicle, only portions confirmed through image analysis to be larger than a certain size may be detected as cracks occurring on the surface of the first vehicle.
  • control unit 30 may identify cracks generated on the surface of the first vehicle for each region, and distinguish a normal region from a damaged region.
  • the control unit 30 may divide the first data into a plurality of areas, such as a first area and a second area, check how many cracks are detected in each area, classify an area in which fewer cracks than the first set value are detected as a normal area, and classify an area in which cracks equal to or greater than the first set value are detected as a damaged area.
  • the first set value may be set differently depending on the embodiment.
  • step S1107 the controller 30 may extract second data from which the damaged area is deleted from the first data.
  • for example, suppose the image in the first data consists of a first region, a second region, and a third region, where the first region is classified as a damaged region and the second region and the third region are classified as normal regions.
  • the controller 30 may extract an image including only the second region and the third region as the second data.
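Steps S1105 to S1107 reduce to counting cracks per region and keeping only the regions below the first set value; the region naming, the dictionary input, and the set value of 3 are illustrative assumptions:

```python
# Sketch of the normal/damaged split used before extracting the second data.

def split_regions(crack_counts, first_set_value=3):
    """crack_counts maps region name -> number of detected cracks.
    Returns (normal_regions, damaged_regions); only normal regions are
    kept in the second data."""
    normal = [r for r, n in crack_counts.items() if n < first_set_value]
    damaged = [r for r, n in crack_counts.items() if n >= first_set_value]
    return normal, damaged

normal, damaged = split_regions({"first": 5, "second": 0, "third": 1})
```

With these toy counts, the first region is dropped as damaged and the second and third regions survive, matching the example in the text.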
  • the controller 30 may generate a second input signal by encoding the second data.
  • the controller 30 may generate a second input signal by encoding pixels of the second data with color information.
  • the color information may include, but is not limited to, RGB color information, brightness information, saturation information, and depth information.
  • the controller 30 may convert the color information into a numerical value, and may encode the second data in the form of a data sheet including the value.
  • the controller 30 may input the second input signal to the second artificial neural network pre-trained in the road driving safety device 100.
  • the second artificial neural network consists of a feature extraction neural network and a classification neural network, and the feature extraction neural network sequentially stacks a convolutional layer and a pooling layer on an input signal.
  • the convolution layer includes a convolution operation, a convolution filter, and an activation function. The size of the convolution filter is adjusted according to the matrix size of the target input, but a 9x9 matrix is generally used.
  • the activation function generally uses, but is not limited to, a ReLU function, a sigmoid function, and a tanh function.
  • the pooling layer is a layer that reduces the size of the input matrix, and uses a method of extracting representative values by tying pixels in a specific area.
  • the average value or the maximum value is often used for the calculation of the pooling layer, but is not limited thereto.
  • the operation is performed using a square matrix, usually a 9x9 matrix.
  • the convolutional layer and the pooling layer are repeated alternately until the corresponding input becomes small enough while maintaining the difference.
  • the classification neural network has a hidden layer and an output layer.
  • the classification neural network of the second artificial neural network, which classifies the roughness level of the surface of the first vehicle, may consist of five or fewer hidden layers and include 50 or fewer hidden layer nodes in total.
  • the activation function of the hidden layer uses a ReLU function, a sigmoid function, and a tanh function, but is not limited thereto.
  • a detailed description of the second artificial neural network will be described later with reference to FIG. 12 .
  • the controller 30 may obtain a second output signal based on a result of the input of the second artificial neural network.
  • the controller 30 may generate a second classification result for the surface of the first vehicle based on the second output signal.
  • the second classification result may include information on which stage the surface of the first vehicle is classified.
  • when the control unit 30 checks the output value of the second output signal, if the output value is 1, the control unit 30 generates a second classification result indicating that the surface of the first vehicle corresponds to stage 1, and if the output value is 2, it generates a second classification result indicating that the surface of the first vehicle corresponds to stage 2.
  • the controller 30 may set a final classification result for the surface of the first vehicle based on the first classification result and the second classification result.
  • the controller 30 may set any one of the first classification result and the second classification result as the final classification result for the surface of the first vehicle.
  • the controller 30 may determine whether there is a problem in the appearance of the first vehicle by using the final classification result, and through this, determine whether an accident has occurred in the first vehicle.
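A minimal sketch of how the controller 30 might combine the two classification results into a final result and flag a possible accident. The function names and the tie-breaking behavior (returning no result on disagreement) are illustrative assumptions; the description only states that one of the two results may be taken as the final result.

```python
def final_classification(first_result, second_result):
    """When the two classification results agree, either may serve as the
    final classification result; otherwise no final result is set here."""
    if first_result == second_result:
        return first_result
    return None  # disagreement: tie-breaking policy left open by the text

def accident_suspected(final_result, problem_stages):
    """An appearance problem (and hence a possible accident) is flagged when
    the final roughness stage falls in the set of problematic stages.
    `problem_stages` is a hypothetical parameter for illustration."""
    return final_result in problem_stages

final = final_classification(2, 2)
```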
  • FIG. 12 is a diagram for explaining an artificial neural network according to an embodiment.
  • the artificial neural network 1200 may be any one of a first artificial neural network and a second artificial neural network.
  • the first input signal generated by encoding the first data may be provided as an input, and information on which roughness level the surface of the first vehicle is classified into may be output.
  • the second input signal generated by encoding the second data may be provided as an input, and information on which roughness stage the surface of the first vehicle is classified into may be output.
  • encoding according to an embodiment may be performed by storing the color information of each pixel of an image in the form of a digitized data sheet, and the color information may include the RGB color, brightness, saturation, and depth information of each pixel, but is not limited thereto.
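The encoding described above (storing per-pixel color information as a digitized data sheet) might look like the following sketch. The exact layout of the data sheet, one row per pixel with coordinates, RGB values, and a derived brightness, is an illustrative assumption; the embodiment also permits saturation and depth information.

```python
def encode_image(pixels):
    """Flatten an image of (r, g, b) tuples into a data sheet with one row
    per pixel: [row, col, r, g, b, brightness]. The brightness formula
    (channel mean) is an assumption for illustration."""
    sheet = []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            brightness = (r + g + b) / 3.0
            sheet.append([y, x, r, g, b, brightness])
    return sheet

# 2x2 toy image: red, green / blue, white
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
sheet = encode_image(image)
```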
  • the artificial neural network 1200 is composed of a feature extraction neural network 1210 and a classification neural network 1220; the feature extraction neural network 1210 may perform an operation of separating the first vehicle region from the background region in the image, and the classification neural network 1220 may perform an operation of determining into which roughness stage the surface of the first vehicle in the image is classified.
  • a pixel whose color information in the data sheet of the encoded input signal changes by 30% or more relative to at least 6 of the 8 pixels surrounding it may be detected, and the bundle of such pixels may be used as the boundary between the first vehicle region and the background region, but the method is not limited thereto.
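A sketch of one possible reading of this boundary heuristic, assuming single-channel (grayscale) values and a relative-change criterion of 30% against at least 6 of the 8 neighbors; the function name and the handling of image borders are illustrative assumptions.

```python
def is_boundary_pixel(gray, y, x, rel_change=0.30, min_neighbors=6):
    """Return True if the value at (y, x) differs by rel_change (30%) or
    more from at least min_neighbors (6) of its 8 surrounding pixels."""
    center = gray[y][x]
    changed = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(gray) and 0 <= nx < len(gray[0]):
                base = max(abs(center), 1e-9)  # avoid division by zero
                if abs(gray[ny][nx] - center) / base >= rel_change:
                    changed += 1
    return changed >= min_neighbors

grid = [[0.5] * 5 for _ in range(5)]
grid[2][2] = 1.0  # isolated spot differing from all 8 of its neighbors
spot = is_boundary_pixel(grid, 2, 2)
flat = is_boundary_pixel(grid, 0, 0)
```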
  • the feature extraction neural network 1210 proceeds by sequentially stacking a convolutional layer and a pooling layer on the input signal.
  • the convolution layer includes a convolution operation, a convolution filter, and an activation function.
  • the size of the convolution filter is adjusted according to the matrix size of the target input, but a 9x9 matrix is generally used.
  • a ReLU function, a sigmoid function, or a tanh function is generally used as the activation function, but the activation function is not limited thereto.
  • the pooling layer is a layer that reduces the size of the input matrix, and uses a method of extracting a representative value from each group of pixels in a specific area. In general, the average value or the maximum value is used for the calculation of the pooling layer, but the calculation is not limited thereto.
  • the operation is performed using a square matrix, usually a 9x9 matrix.
  • the convolutional layer and the pooling layer are applied repeatedly and alternately until the input becomes sufficiently small while its distinguishing features are preserved.
  • the classification neural network 1220 checks the surface of the first vehicle region separated from the background by the feature extraction neural network 1210, checks whether it is similar to predefined surface states for each roughness stage, and may thereby determine into which roughness stage the surface is classified. For the comparison with the per-stage surface states, information stored in the database of the road driving safety device 100 may be utilized.
  • the classification neural network 1220 has hidden layers and an output layer; it consists of five or fewer hidden layers including a total of 50 or fewer hidden-layer nodes, and a ReLU function, a sigmoid function, or a tanh function is used as the activation function of the hidden layers, but the activation function is not limited thereto.
  • the classification neural network 1220 may include only one output layer node in total.
  • the output of the classification neural network 1220 is an output value indicating into which roughness stage the surface of the first vehicle is classified. For example, when the output value is 1, it may indicate that the surface of the first vehicle corresponds to stage 1, and when the output value is 2, it may indicate that the surface of the first vehicle corresponds to stage 2.
  • when a user discovers a problem in the output of the artificial neural network 1200, the artificial neural network 1200 may learn by receiving a first learning signal generated from the corrected answer input by the user.
  • a problem in the output of the artificial neural network 1200 may mean a case in which, for the surface of the first vehicle, an output value classified into a different roughness stage from the correct one is output.
  • the first learning signal is created based on the error between the correct answer and the output value; depending on the case, SGD using the delta rule, a batch method, or a method following the backpropagation algorithm may be used.
  • the artificial neural network 1200 performs learning by modifying the existing weights according to the first learning signal, and may use momentum in some cases.
  • a cost function can be used to calculate the error, and a cross entropy function can be used as the cost function.
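The cross-entropy cost mentioned above can be written as a short function over a predicted probability distribution and a target distribution; the `eps` clamp is an illustrative safeguard against taking log(0).

```python
import math

def cross_entropy(predicted, target, eps=1e-12):
    """Cross-entropy cost between a predicted probability distribution and
    a one-hot (or soft) target distribution."""
    return -sum(t * math.log(max(p, eps)) for p, t in zip(predicted, target))

# The error shrinks as the predicted distribution approaches the target.
good = cross_entropy([0.9, 0.05, 0.05], [1.0, 0.0, 0.0])
bad = cross_entropy([0.2, 0.4, 0.4], [1.0, 0.0, 0.0])
```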
  • the learning contents of the artificial neural network 1200 will be described with reference to FIG. 13 .
  • FIG. 13 is a diagram for explaining a method of learning an artificial neural network according to an embodiment.
  • the learning apparatus may train the artificial neural network 1200 .
  • the learning apparatus may be a separate entity different from the road driving safety apparatus 100 , but is not limited thereto.
  • the artificial neural network 1200 includes an input layer to which training samples are input and an output layer from which training outputs are output, and may be trained based on the difference between the training outputs and the first labels.
  • the first labels may be defined based on a representative image registered for each roughness level.
  • the artificial neural network 1200 is formed as a group of a plurality of connected nodes, and is defined by the weights between the connected nodes and the activation function that activates the nodes.
  • the learning apparatus may train the artificial neural network 1200 using a Gradient Descent (GD) technique or a Stochastic Gradient Descent (SGD) technique.
  • the learning apparatus may use a loss function designed using the outputs and labels of the artificial neural network 1200.
  • the learning apparatus may calculate a training error using a predefined loss function.
  • the loss function may be predefined with a label, an output, and parameters as input variables, where the parameters may be determined by the weights in the artificial neural network 1200.
  • the loss function may be designed in a Mean Square Error (MSE) form, an entropy form, or the like, and various techniques or methods may be employed in designing the loss function.
  • the learning apparatus may find weights affecting the training error by using a backpropagation technique.
  • the weights represent the connection relationships between nodes in the artificial neural network 1200.
  • the learning apparatus may use the SGD technique using labels and outputs to optimize the weights found through the backpropagation technique. For example, the learning apparatus may update the weights of the loss function defined based on the labels, outputs, and weights using the SGD technique.
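A minimal sketch of the weight update described above: an SGD step extended with the momentum mentioned earlier, applied here to a toy quadratic objective. The learning rate, momentum coefficient, and objective are illustrative assumptions, not values stated by the embodiment.

```python
def sgd_momentum_step(weights, grads, velocity, lr=0.1, momentum=0.9):
    """One SGD update with momentum: v <- m*v - lr*g ; w <- w + v."""
    for i in range(len(weights)):
        velocity[i] = momentum * velocity[i] - lr * grads[i]
        weights[i] += velocity[i]
    return weights, velocity

# Minimizing f(w) = w0^2 + w1^2 (gradient 2w): the weights shrink toward 0.
w, v = [1.0, -2.0], [0.0, 0.0]
for _ in range(50):
    g = [2.0 * wi for wi in w]
    w, v = sgd_momentum_step(w, g, v)
```

In practice the gradients would come from the backpropagation technique described above rather than from a closed-form objective.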
  • the learning apparatus may obtain the labeled training representative images 1301 for each roughness stage from the database of the road driving safety device 100.
  • the learning apparatus may obtain pre-labeled information on the representative images 1301 for each roughness stage, and the representative images 1301 for each roughness stage may be labeled according to the pre-classified roughness stage.
  • the learning apparatus may acquire 1000 labeled training representative images 1301 for each roughness stage, and may generate the first training roughness-stage vectors 1302 based on the labeled training representative images 1301 for each roughness stage.
  • Various methods may be employed to extract the first training roughness step vectors 1302 .
  • the learning apparatus may obtain the first training outputs 1303 by applying the first training roughness step vectors 1302 to the artificial neural network 1200 .
  • the learning apparatus may train the artificial neural network 1200 based on the first training outputs 1303 and the first labels 1304 .
  • the learning apparatus may train the artificial neural network 1200 by calculating the training errors corresponding to the first training outputs 1303 and optimizing the connection relationships of the nodes in the artificial neural network 1200 so as to minimize the training errors.
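The training procedure of FIG. 13 can be sketched end to end as follows. This is a toy stand-in, not the embodiment's network: two-dimensional feature vectors stand in for the first training roughness-stage vectors 1302, a one-layer softmax classifier stands in for the artificial neural network 1200, and the connection weights are optimized by SGD to minimize the cross-entropy training error.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train(samples, labels, classes, dim, lr=0.5, epochs=200):
    """SGD on cross-entropy: for each sample the gradient of the loss with
    respect to class c's weights is (p_c - 1{c == label}) * x."""
    random.seed(0)
    w = [[random.uniform(-0.1, 0.1) for _ in range(dim)]
         for _ in range(classes)]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = softmax([sum(wc[i] * x[i] for i in range(dim)) for wc in w])
            for c in range(classes):
                err = p[c] - (1.0 if c == y else 0.0)
                for i in range(dim):
                    w[c][i] -= lr * err * x[i]
    return w

def predict(w, x):
    scores = [sum(wc[i] * x[i] for i in range(len(x))) for wc in w]
    return scores.index(max(scores))

# Two toy "roughness stages": stage 0 vectors cluster near (1, 0),
# stage 1 vectors cluster near (0, 1).
xs = [[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]]
ys = [0, 0, 1, 1]
w = train(xs, ys, classes=2, dim=2)
```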
  • the embodiments described above may be implemented by a hardware component, a software component, and/or a combination of a hardware component and a software component.
  • the apparatus, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications running on the operating system.
  • the processing device may also access, store, manipulate, process, and generate data in response to execution of the software.
  • the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • a hardware device may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
  • the software may comprise a computer program, code, instructions, or a combination of one or more thereof, which configures the processing device to operate as desired or, independently or collectively, commands the processing device.
  • the software and/or data may be permanently or temporarily embodied in any kind of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device.
  • the software may be distributed over networked computer systems and stored or executed in a distributed manner. Software and data may be stored in one or more computer-readable recording media.

Abstract

Provided is a safety apparatus for travel in tunnels and on all roads, the apparatus being mounted on medians and guard rails of a road so as to prevent contact with vehicles. The safety apparatus comprises: safety cases installed on the road at regular intervals in a certain section; lighting units installed in the safety cases so as to emit light forward in the vehicle travel direction; warning units installed in the safety cases so that light blinks rearward in the vehicle travel direction; sensor units which detect the movement of vehicles traveling on the road and detect the presence of traffic congestion on the road through the movement of the vehicles, and which detect whether an accident has occurred and whether simple congestion due to an increase in vehicles has occurred when traffic congestion on the road is detected; and a control unit for controlling so that light is emitted from the lighting units when it is detected by the sensor unit that vehicles are traveling on the road, light blinks in a first color in the warning unit when it is detected by the sensor unit that simple congestion has occurred, and light blinks in a second color in the warning unit when it is detected by the sensor unit that an accident has occurred.

Description

Safety apparatus for travel in tunnels and on all roads
The embodiments below relate to a technology for providing road driving safety devices installed in tunnels and on all roads.
Tunnels are formed through mountains or underground for the rapid passage of vehicles traveling on roads. Such a tunnel has the advantage of reducing the mileage of a vehicle, thereby minimizing driving time and traffic inconvenience, but has the disadvantage that its interior is dark and the driving environment of the vehicle is poor due to inadequate ventilation.
In addition, when a vehicle enters a tunnel, various sensory functions of the driver may be degraded due to the sudden change in environment, and there is also the problem that the distortion of visual perception makes it difficult to judge speed and distance.
Therefore, the probability of a traffic accident occurring in a tunnel increases due to the aforementioned deterioration of the driving environment. In addition to such human accidents, since a tunnel is an artificially constructed interior, an environment in which passage is difficult may be created due to a defect, damage, or collapse of the tunnel facility itself. Such a sudden accident cannot be sufficiently notified to passing vehicles and could therefore become the cause of a major accident.
In particular, it is required to effectively prevent safety accidents while driving not only in tunnels but on all roads at night, when visibility is not secured.
In addition, when driving on a dark road without street lights, if a vehicle accident or an obstacle lies ahead in the driving direction, it is difficult for the driver to notice it in advance, and thus a secondary accident may occur in the process of avoiding the obstacle.
Meanwhile, rear-end collision accidents caused by following vehicles failing to recognize highway accidents and congestion recur every year, and due to the characteristics of highways, they lead to multiple collisions and fatalities.
Accordingly, there is an increasing demand for a technology that prevents rear-end collision accidents by allowing the driver to grasp the traffic situation ahead in advance.
An object of an embodiment of the present invention is to provide a safety device for driving in tunnels and on all roads that controls a light of a first color to blink rearward in the vehicle traveling direction when it is detected that simple congestion due to an increase in vehicles has occurred in a tunnel or on a road, and controls a light of a second color to blink rearward in the vehicle traveling direction when it is detected that an accident has occurred.
The objects of the present invention are not limited to the objects mentioned above, and other objects not mentioned will be clearly understood from the description below.
According to an embodiment of the present invention, there is provided a safety device for driving in tunnels and on all roads, comprising: a safety case mounted on the median strip and guardrail portions of a road so as not to contact vehicles, installed on the road at regular intervals over a certain section; a lighting unit installed in the safety case to irradiate light forward in the vehicle traveling direction; a warning unit installed in the safety case so that light blinks rearward in the vehicle traveling direction; a sensor unit that detects the movement of vehicles traveling on the road, detects through the movement of the vehicles whether there is congestion on the road, and, when congestion is detected, detects whether an accident has occurred and whether simple congestion due to an increase in vehicles has occurred; and a control unit that controls the lighting unit to irradiate light when the sensor unit detects that a vehicle is traveling on the road, controls the warning unit to blink light in a first color when the sensor unit detects that simple congestion has occurred, and controls the warning unit to blink light in a second color when the sensor unit detects that an accident has occurred, wherein the sensor unit analyzes and detects the positions of vehicles and the distances between vehicles through an image sensor, obtains image information of a plurality of vehicles located in a congested section of the road, classifies the obtained image information by vehicle to extract image information of a vehicle to be analyzed, checks a target-area image and vehicle state information from the image information of the vehicle to be analyzed, checks whether there is a vehicle with a problem in its appearance based on the vehicle state information to generate a determination result as to whether an accident has occurred, and, according to the determination result, detects that congestion has occurred on the road due to an accident when it is determined that an accident has occurred, and detects that congestion has occurred on the road due to simple congestion when it is determined that no accident has occurred.
According to the determination result, when it is detected that congestion has occurred on the road due to simple congestion, the sensor unit may detect a congestion section in which the driving speed of vehicles is at or below a reference speed, and the control unit may calculate the congestion-section length from the simple-congestion start point, which is the starting point of the congestion section, to the simple-congestion end point, which is the end point of the congestion section. When a first reference distance is set based on time-of-day congestion patterns over a predetermined period, the control unit checks whether the congestion-section length is shorter than the first reference distance. When the congestion-section length is confirmed to be shorter than the first reference distance, the control unit determines the congestion section to be an ordinary congestion phenomenon and controls the light of the first color to blink at a first intensity and at a first blinking speed from the simple-congestion start point to a position four times the congestion-section length to the rear. When the congestion-section length is confirmed to be longer than the first reference distance, the control unit determines the congestion section to be a special congestion phenomenon and checks whether the congestion-section length is shorter than a second reference distance set to a value longer than the first reference distance. When the congestion-section length is confirmed to be shorter than the second reference distance, the control unit determines the congestion section to be a serious congestion phenomenon and controls the light of the first color to blink at a second intensity stronger than the first intensity and at the first blinking speed from the simple-congestion start point to a position three times the congestion-section length to the rear. When the congestion-section length is confirmed to be longer than the second reference distance, the control unit determines the congestion section to be a very serious congestion phenomenon and controls the light of the first color to blink at the second intensity and at a second blinking speed faster than the first blinking speed from the simple-congestion start point to a position twice the congestion-section length to the rear.
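A sketch of the tiered congestion response described in this paragraph: the tier chosen from the congestion-section length determines how far behind the congestion start point the first-color light blinks, at which intensity, and at which blinking speed. The function name and return structure are illustrative; intensity and speed are encoded as 1 (first) and 2 (second).

```python
def congestion_warning(section_len, ref1, ref2):
    """ref1 < ref2 are the first and second reference distances.
    Returns the rearward blinking span plus intensity/speed levels."""
    if section_len < ref1:      # ordinary congestion
        return {"span": 4 * section_len, "intensity": 1, "speed": 1}
    if section_len < ref2:      # special, serious congestion
        return {"span": 3 * section_len, "intensity": 2, "speed": 1}
    # special, very serious congestion
    return {"span": 2 * section_len, "intensity": 2, "speed": 2}
```

For example, with hypothetical reference distances of 500 m and 2000 m, a 100 m congestion section would be treated as ordinary congestion and warned over 400 m to the rear.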
According to the determination result, when it is detected that congestion has occurred on the road due to an accident, the sensor unit may detect the number of vehicles involved in the accident, and the control unit may check whether the number of vehicles involved in the accident is smaller than a preset first reference value. When the number is confirmed to be smaller than the first reference value, the control unit determines the accident on the road to be a small-scale accident and controls the light of the second color to blink at the first intensity and at the first blinking speed from the accident point to a first point located twice the first reference distance to the rear. When the number is confirmed to be larger than the first reference value, the control unit determines the accident on the road to be a medium-to-large accident and checks whether the number of vehicles involved is smaller than a second reference value set to a value higher than the first reference value. When the number is confirmed to be smaller than the second reference value, the control unit determines the accident on the road to be a medium-scale accident and controls the light of the second color to blink at the second intensity and at the first blinking speed from the accident point to a second point located three times the first reference distance to the rear. When the number is confirmed to be larger than the second reference value, the control unit determines the accident on the road to be a large-scale accident and controls the light of the second color to blink at the second intensity and at the second blinking speed from the accident point to a third point located four times the first reference distance to the rear.
When the accident on the road is determined to be a medium-scale accident and the blinking time during which the light of the second color has blinked from the accident point to the second point is confirmed to be longer than a reference time, the control unit may control the light of the second color to blink at the second intensity and at the first blinking speed from the accident point to the first point, and to blink at the first intensity and at the first blinking speed from the first point to the second point. When the accident on the road is determined to be a large-scale accident and the blinking time during which the light of the second color has blinked from the accident point to the third point is confirmed to be longer than the reference time, the control unit may control the light of the second color to blink at the second intensity and at the second blinking speed from the accident point to the first point, to blink at the second intensity and at the first blinking speed from the first point to the second point, and to blink at the first intensity and at the first blinking speed from the second point to the third point.
When a first vehicle, which is any one of the plurality of vehicles, is identified as the vehicle to be analyzed, the control unit may obtain 3D data on the surface of the first vehicle through a lidar and 2D data on the surface of the first vehicle through a camera; separate the union region of the 2D data and the 3D data and extract first data obtained by merging the 2D data and the 3D data; encode the first data to generate a first input signal; input the first input signal to a first artificial neural network; obtain a first output signal based on the result of the input to the first artificial neural network; generate a first classification result for the surface of the first vehicle based on the first output signal; analyze the first data to detect cracks generated on the surface of the first vehicle; check the cracks generated on the surface of the first vehicle by region to distinguish a normal region in which cracks are detected below a preset first set value from a damaged region in which cracks are detected at or above the first set value; extract second data in which the damaged region is deleted from the first data; encode the second data to generate a second input signal; input the second input signal to a second artificial neural network; obtain a second output signal based on the result of the input to the second artificial neural network; generate a second classification result for the surface of the first vehicle based on the second output signal; and, when the first classification result and the second classification result are identical, set either the first classification result or the second classification result as the final classification result for the surface of the first vehicle.
According to an embodiment of the present invention, when simple congestion caused by an increase in vehicles is detected in a tunnel or on any road, light of a first color is controlled to blink toward the rear of the direction of vehicle travel, and when an accident is detected in a tunnel or on any road, light of a second color is controlled to blink toward the rear of the direction of vehicle travel. This allows drivers to grasp the traffic situation ahead in advance, preventing rear-end collisions.
Effects according to the embodiments are not limited to those mentioned above; other effects not mentioned will be clearly understood by those of ordinary skill in the art from the description below.
FIG. 1 is a perspective view of the principal parts of a road driving safety device according to an embodiment of the present invention.

FIG. 2 is a view showing the safety case of the road driving safety device installed on a road.

FIG. 3 is a diagram schematically showing the configuration of the road driving safety device.

FIG. 4 is a perspective view of the principal parts of a road driving safety device according to another embodiment of the present invention.

FIG. 5 is a view for explaining a method of processing an image of a vehicle located in a congested section of road, in order to determine whether any vehicle shows external damage from an accident, according to an embodiment of the present invention.

FIG. 6 is a view for explaining a learning method employed in processing the image of a vehicle to be analyzed in order to determine whether any vehicle shows external damage from an accident, according to an embodiment of the present invention.

FIG. 7 is a flowchart illustrating a process of controlling blinking according to the length of a congestion section, according to an embodiment.

FIG. 8 is a flowchart illustrating a process of controlling blinking according to the number of vehicles involved in an accident, according to an embodiment.

FIG. 9 is a flowchart illustrating a process of controlling blinking in stages according to distance in a medium-scale accident, according to an embodiment.

FIG. 10 is a flowchart illustrating a process of controlling blinking in stages according to distance in a large-scale accident, according to an embodiment.

FIG. 11 is a flowchart illustrating a process of classifying the surface of a vehicle, according to an embodiment.

FIG. 12 is a diagram for explaining an artificial neural network according to an embodiment.

FIG. 13 is a diagram for explaining a method of training an artificial neural network according to an embodiment.
Hereinafter, embodiments are described in detail with reference to the accompanying drawings. However, since various changes may be made to the embodiments, the scope of the patent application is not limited by these embodiments. All modifications, equivalents, and substitutes for the embodiments should be understood to fall within the scope of the rights.

Specific structural or functional descriptions of the embodiments are disclosed for illustrative purposes only and may be changed and implemented in various forms. Accordingly, the embodiments are not limited to any particular disclosed form, and the scope of the present specification includes changes, equivalents, or substitutes falling within the technical spirit.

Although terms such as "first" or "second" may be used to describe various components, these terms should be interpreted only for the purpose of distinguishing one component from another. For example, a first component may be termed a second component, and similarly, a second component may be termed a first component.

When a component is referred to as being "connected" to another component, it may be directly connected to the other component, but it should be understood that another component may exist between them.

The terms used in the embodiments are for description only and should not be construed as limiting. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "comprise" or "have" are intended to designate the presence of a feature, number, step, operation, component, part, or combination thereof described in the specification, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Spatially relative terms such as "below", "beneath", "lower", "above", and "upper" may be used to easily describe the relationship between one component and others as shown in the drawings. Spatially relative terms should be understood to encompass different orientations of components in use or operation in addition to the orientations shown in the drawings. For example, if a component shown in the drawings is turned over, a component described as "below" or "beneath" another component may be placed "above" the other component. Accordingly, the exemplary term "below" may encompass both downward and upward directions. Components may also be oriented in other directions, and spatially relative terms may be interpreted according to orientation.

Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meanings as commonly understood by those of ordinary skill in the art to which the embodiments belong. Terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the related art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined in the present application.

In the description with reference to the accompanying drawings, identical components are given the same reference numerals regardless of figure number, and redundant descriptions thereof are omitted. Where a detailed description of related known technology is judged to unnecessarily obscure the gist of an embodiment, that detailed description is omitted.
FIG. 1 is a perspective view of the principal parts of a road driving safety device according to an embodiment of the present invention, FIG. 2 is a view showing the safety case of the road driving safety device installed on a road, and FIG. 3 is a diagram schematically showing the configuration of the road driving safety device.
As shown in FIGS. 1 to 3, the road driving safety device 100 according to the first embodiment of the present invention includes: a safety case 10 installed on a road 11; a lighting unit 20 installed in the safety case 10 to irradiate light in the direction of vehicle travel; a control unit 30 that controls the supply of power to the lighting unit 20; and a sensor unit 40 installed in the safety case 10 to activate the lighting unit 20 when movement of a vehicle, animal, or the like is detected.
As shown in FIG. 2, the safety case 10 may be mounted on the median strip and guardrail so as to avoid contact with vehicles, and a plurality of safety cases may be installed at regular intervals over a given section.

An installation space is formed inside the safety case 10. In this embodiment, one case is installed on the median strip, one on the left guardrail, and one on the right guardrail, for a total of three spaced apart from one another; this arrangement is described by way of example. The safety cases 10 are not limited to three, however: two or fewer, or more than three, may be installed on the median strip and guardrails of the road.

The safety cases 10 may be installed at regular intervals along the median strip and each guardrail, preferably on the median strips and guardrails in tunnels and on all roads 11.
A lighting unit 20 is installed in the safety case 10.

The lighting unit 20 may be installed in the safety case 10 so as to selectively irradiate light forward in the direction of vehicle travel.

The lighting unit 20 is installed to selectively irradiate light forward in the direction of vehicle travel; in this embodiment it is described, by way of example, as an LED light. The lighting unit 20 is not limited to an LED light, however, and any light that makes the median strip and guardrail easy to see may be used.

Since at least three safety cases 10 are installed on the median strip and guardrails of the road, at least three lighting units 20 may likewise be installed on the median strip and guardrails. That is, at least three lighting units 20 may be installed at 50 m intervals, covering a span of approximately 150 m.

Accordingly, the light emitted by the lighting unit 20 enables the vehicle driver to easily identify the median strip and guardrail.

Thus, when driving where no street lights are installed, the driver can easily identify the median strip and guardrail and drive safely. The light emission of the lighting unit 20 may be triggered selectively by the sensing action of the sensor unit 40. This is explained in more detail below in the description of the sensor unit 40.
The control unit 30 is connected to the lighting unit 20 and may control it to emit light forward in the direction of vehicle travel at night.

The control unit 30 may include a solar cell 31 installed on the side of the road 11 and a connection unit 33 connecting the solar cell 31 and the lighting unit 20. The solar cell 31 may be installed on the side of the road to convert sunlight into electrical energy during the day and store it. A plurality of solar cells 31 may be installed along the side of the road 11 to supply power smoothly to the lighting units 20. The solar cell 31 may be installed in a fixed position on the side of the road 11, or in a movable state so that its position can be changed.

The solar cell 31 and the lighting unit 20 may be connected through the connection unit 33 so that power is supplied to the lighting unit 20.

The connection unit 33 may connect one solar cell 31 to a plurality of lighting units 20, or connect a solar cell 31 to a single lighting unit 20. Thus, sunlight is stored as electrical energy through the solar cell 31 during the day, and at night power is supplied to the lighting unit 20 through the connection unit 33 so that the lighting unit 20 can emit light selectively.
A sensor unit 40 for selectively operating the lighting unit 20 is installed in the safety case 10.

The sensor unit 40 may be installed in the safety case 10 to sense the movement of a vehicle, animal, or the like from the side of the road 11 toward its center. The sensor unit 40 may be implemented as an infrared sensor or the like, making it possible to monitor the movement of vehicles or animals ahead of the direction of travel in real time. On a two-lane road, the sensing direction of the sensor unit 40 may be adjusted to cover both lanes.

The sensor unit 40 may be installed singly in the safety case 10, or in plurality with differing orientations. The sensor unit 40 can thus easily detect an object such as a vehicle or animal moving from either side of the road 11 toward its center, ahead of the direction of vehicle travel.

When the sensor unit 40 detects an object such as a vehicle or animal moving toward the center of the road 11, the lighting unit 20 may be controlled to emit light.

When the sensor unit 40 detects the movement of a vehicle or animal, the warning unit 210 may blink to alert the driver to the danger. The driver can thus perceive the danger more effectively through the warning unit 210 while identifying the median strip and guardrail through the lighting unit 20.

That is, the lighting unit 20 may be installed in the safety case 10 to irradiate light forward in the direction of vehicle travel, and the warning unit 210 may be installed so that light blinks toward the rear of the direction of travel.
When the safety case 10 is installed on a road inside a tunnel, the sensor unit 40 may be installed separately on the road rather than in the safety case 10. For example, implemented as an image sensor or the like, it may detect the movement of vehicles traveling on the road, determine from that movement whether there is congestion, and, when congestion is detected, determine whether an accident has occurred or whether simple congestion has arisen from an increase in vehicles.

When the sensor unit 40 detects a vehicle traveling on the road, the control unit 30 controls the lighting unit 20 to irradiate light; when the sensor unit 40 detects congestion on the road, the control unit 30 may control the warning unit 210 to blink light of a different color depending on the cause of the congestion.

That is, when simple congestion is detected on the road, the control unit 30 controls the warning unit 210 to blink light of a first color; when an accident is detected on the road, it controls the warning unit 210 to blink light of a second color.

Thus, when light of the first color, for example blue, blinks in the warning unit 210, the driver can know in advance that simple congestion has occurred in the tunnel or on the road; when light of the second color, for example yellow, blinks in the warning unit 210, the driver can know in advance that an accident has occurred in the tunnel or on the road.
FIG. 4 is a view for explaining the occurrence of congestion in road driving according to an embodiment of the present invention.
The sensor unit 40 may detect, from the movement of vehicles traveling on the road, whether there is congestion; when congestion is detected, it may determine whether the congestion is simple congestion caused by an increase in vehicles or congestion caused by an accident.

As shown in FIG. 4(a), when the sensor unit 40 detects a traffic jam in a first zone 401, it may analyze image information captured of the first zone 401 and determine that the congestion in the first zone 401 arose from an increase in vehicles.

As shown in FIG. 4(b), when the sensor unit 40 detects a traffic jam in a second zone 402, it may analyze image information captured of the second zone 402 and determine that the congestion in the second zone 402 arose from a traffic accident.
According to one embodiment, when simple congestion is detected on the road, the control unit 30 controls light of the first color to blink from the point of congestion back to a position a first distance to the rear; when an accident is detected on the road, it controls light of the second color to blink from the accident point back to a position a second distance to the rear, the second distance being set longer than the first.

For example, the control unit 30 may control blue light to blink from the point of simple congestion back to a position 50 m to the rear, and yellow light to blink from the accident point back to a position 100 m to the rear.
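The color and rearward extent of the warning span described above can be sketched as follows. This is an illustrative sketch, not the claimed implementation; the default distances are the example values from the text.

```python
def warning_zone(cause, point_m, first_distance=50, second_distance=100):
    """Return (color, start, end) of the rearward span in which warning
    lights blink, given the cause of congestion.

    point_m: position of the congestion/accident point; the span extends
    rearward, here modeled as increasing meter values behind the point.
    """
    if cause == "congestion":
        # Simple congestion: first color (e.g. blue) over the shorter span.
        return ("blue", point_m, point_m + first_distance)
    elif cause == "accident":
        # Accident: second color (e.g. yellow) over the longer span.
        return ("yellow", point_m, point_m + second_distance)
    return (None, point_m, point_m)  # no warning needed
```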
The sensor unit 40 can detect vehicle speeds and identify the congestion section in which vehicle speeds are at or below a reference speed. The control unit 30 sets the intensity of the first-color light blinking in the warning unit 210 according to the length of the congestion section, controlling the first-color light to blink more strongly the longer the congestion section.

For example, when the congestion section is 10 m long, the control unit 30 may control the first-color light to blink at normal intensity; when the congestion section is 20 m long, it may control the first-color light to blink at a stronger intensity.
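A minimal sketch of the length-to-intensity mapping, assuming a tiered generalization of the two example points given in the text (10 m and 20 m); the tier boundaries are assumptions, not claimed values:

```python
def first_color_intensity(congestion_length_m):
    """Map congestion-section length to the blink intensity of the
    first-color light. Tiers are an assumed generalization of the
    10 m -> normal, 20 m -> stronger examples in the text."""
    if congestion_length_m >= 20:
        return "strong"
    elif congestion_length_m >= 10:
        return "normal"
    return "weak"
```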
Based on image information obtained through the image sensor, the sensor unit 40 can detect how many vehicles are involved in the accident. The control unit 30 sets the intensity of the second-color light blinking in the warning unit 210 according to the number of vehicles involved, controlling the second-color light to blink more strongly the more vehicles are involved.

For example, when two vehicles are involved in the accident, the control unit 30 may control the second-color light to blink at normal intensity; when four vehicles are involved, it may control the second-color light to blink at a stronger intensity.

The control unit 30 may set a third distance, the accident risk distance, according to the number of vehicles involved in the accident.

For example, the control unit 30 may set the third distance to 10 m when two vehicles are involved in the accident, and to 20 m when four vehicles are involved.
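The two rules above (vehicle count drives both blink intensity and the third, risk, distance) can be sketched together; the four-vehicle cutoff is an assumed generalization of the example values:

```python
def accident_response(vehicle_count):
    """Sketch: derive second-color blink intensity and the third (accident
    risk) distance from the number of vehicles involved, per the example
    values (2 vehicles -> normal/10 m, 4 vehicles -> strong/20 m)."""
    intensity = "strong" if vehicle_count >= 4 else "normal"
    third_distance_m = 20 if vehicle_count >= 4 else 10
    return intensity, third_distance_m
```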
While controlling the second-color light to blink from the accident point back to a position the second distance to the rear, the control unit 30 may control the light within the third distance of the accident point to change from the second color to a third color and blink.

For example, the control unit 30 may control yellow light to blink from the accident point back to a position 100 m to the rear, while controlling red light, rather than yellow, to blink within 30 m behind the accident point, so that the light changes to a different color as the accident point approaches.

The control unit 30 may control the third-color light to blink faster the closer it is to the accident point.

For example, the control unit 30 may control the third-color light to blink once per second at positions 20 m to 30 m behind the accident point, twice per second at positions 10 m to 20 m behind it, and three times per second at positions within 10 m behind it.
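The distance-banded blink rate above maps directly to a small lookup. A sketch, using the example bands from the text (beyond 30 m the third color is not shown, modeled here as a rate of 0):

```python
def third_color_blink_rate(distance_m):
    """Blinks per second of the third-color light, by distance behind the
    accident point. Bands follow the example values in the text."""
    if distance_m <= 10:
        return 3   # within 10 m: three blinks per second
    elif distance_m <= 20:
        return 2   # 10-20 m: two blinks per second
    elif distance_m <= 30:
        return 1   # 20-30 m: one blink per second
    return 0       # beyond 30 m: third color not shown
```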
Based on image information obtained through the image sensor, the sensor unit 40 can detect the lane in which the accident occurred, and the control unit 30 may control the lights blinking on both sides of the accident lane to blink more strongly than the lights blinking on both sides of lanes where no accident occurred.

For example, when an accident is detected in the second lane, the control unit 30 may control the yellow lights blinking on both sides of the second lane to blink at strong intensity, while the yellow lights on the left side of the first lane and the right side of the third lane blink at normal intensity.
Meanwhile, the sensor unit 40 can analyze and detect vehicle positions and the distances between vehicles through the image sensor. It may obtain image information of a plurality of vehicles located in a congested section of road, identify a target zone image and vehicle state information for each of the vehicles from the image information, and determine from the vehicle state information whether any vehicle shows external damage, thereby generating a determination result as to whether an accident has occurred.

For example, the sensor unit 40 obtains image information of a first vehicle and a second vehicle located in a congested section of road, checks the vehicle state information of each, and checks whether either vehicle shows external damage. When neither vehicle shows external damage, it generates a determination result that no accident has occurred; when at least one of the first and second vehicles shows external damage, it generates a determination result that an accident has occurred.

According to the determination result, when it is determined that an accident has occurred, the sensor unit 40 detects that the congestion on the road arose from the accident; when it is determined that no accident has occurred, it detects that the congestion is simple congestion.
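The congestion-cause decision above reduces to an any-vehicle-damaged check. A minimal sketch, with the damage flags assumed to come from the (not modeled here) appearance analysis:

```python
def congestion_cause(vehicles_damaged):
    """Attribute congestion to an accident if any vehicle in the congested
    section shows external damage, otherwise to simple congestion.

    vehicles_damaged: iterable of booleans, one per vehicle, True when that
    vehicle's appearance shows accident damage (assumed upstream result).
    """
    return "accident" if any(vehicles_damaged) else "simple congestion"
```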
FIG. 5 is a view for explaining a method of processing an image of a vehicle located in a congested section of road, in order to determine whether any vehicle shows external damage from an accident, according to an embodiment of the present invention.
일실시예에 따르면, 센서부(40)는 터널 내 및 모든 도로의 상부에 설치된 이미지 센서를 포함할 수 있으며, 이미지 센서를 통해 도로에 정체가 있는 구간에 위치하는 복수의 차량들의 이미지 정보를 획득하고, 획득된 이미지 정보를 차량 별로 구분하여 분석 대상 차량의 이미지 정보를 추출하고, 추출된 분석 대상 차량의 이미지 정보에서 대상 구역 이미지 및 차량 상태 정보를 확인하고, 차량 상태 정보를 기초로 외관에 문제가 있는 차량이 있는지 여부를 확인하여, 사고가 발생하였는지 여부에 대한 판단 결과를 생성할 수 있다.According to an embodiment, the sensor unit 40 may include an image sensor installed in the tunnel and on the top of all roads, and acquire image information of a plurality of vehicles located in a section where there is congestion on the road through the image sensor. and extract the image information of the vehicle to be analyzed by classifying the obtained image information by vehicle, check the target area image and vehicle condition information from the extracted image information of the vehicle to be analyzed, and check the appearance of the vehicle based on the vehicle condition information It is possible to check whether there is a vehicle with the , and generate a determination result as to whether an accident has occurred.
구체적으로, 센서부(40)는 분석 대상 차량의 대상 구역 이미지를 분석하여, 그 대상 구역에 포함된 차량 상태 정보를 추출할 수 있다. 센서부(40)는 분석 대상 차량의 대상 구역을 확정하여, 대상 구역 이미지(501)를 획득할 수 있다.Specifically, the sensor unit 40 may analyze the target area image of the analysis target vehicle and extract vehicle state information included in the target area. The sensor unit 40 may determine a target area of the vehicle to be analyzed to obtain an image 501 of the target area.
According to an embodiment, the sensor unit 40 may identify a valid vehicle boundary based on color information and texture information in the target area image 501. The sensor unit 40 may determine, for each region, whether the region belongs to a vehicle based on color and texture. The sensor unit 40 may slide a filter of a predefined size over the image to determine, region by region, whether a vehicle is present, and the filter may be designed to output a result according to color and texture.
According to an embodiment, the sensor unit 40 may extract, from the target area image 501, a valid vehicle area 502 delimited by the valid vehicle boundary. The sensor unit 40 may extract appearance features of the particle objects within the valid vehicle area 502.
According to an embodiment, the sensor unit 40 may identify a foreign-matter object 503 among the particle objects based on the extracted appearance features, and remove the foreign-matter object 503 from the valid vehicle area 502. The sensor unit 40 may identify any object that falls outside a predefined range with respect to the appearance, color, and texture of the vehicle body and glass distributed within the valid vehicle area 502, and determine the identified object to be a foreign-matter object 503.
According to an embodiment, the sensor unit 40 may extract size features 505 to 507 of the particle objects in the valid vehicle area 504 from which the foreign-matter object has been removed. The sensor unit 40 may identify the particle objects within the valid vehicle area 504 and extract, by size, the size features 505 to 507 from the information describing the identified particle objects. The sensor unit 40 may extract and classify the size features 505 to 507 by size according to the ranges used as criteria for classifying body and glass.
According to an embodiment, the sensor unit 40 may classify each particle object as either a body object or a glass object based on the extracted size features 505 to 507. The sensor unit 40 may compute a first ratio, within the valid vehicle area 504, of the at least one particle object classified as a body object. The first ratio may correspond to the proportion of the body within the valid vehicle area 504. The sensor unit 40 may generate vehicle state information reflecting the characteristics of the body by using the first ratio.
According to an embodiment, the sensor unit 40 may compute a second ratio, within the valid vehicle area 504, of the at least one particle object classified as a glass object. The second ratio may correspond to the proportion of glass within the valid vehicle area 504. The sensor unit 40 may generate vehicle state information reflecting the characteristics of the glass by using the second ratio.
According to an embodiment, the sensor unit 40 may compute a third ratio of the foreign-matter object 503 within the valid vehicle area 502. The third ratio may denote the proportion of the valid vehicle area 502 occupied by foreign matter.
According to an embodiment, the sensor unit 40 may extract a color feature within the valid vehicle area 504. The sensor unit 40 may generate vehicle color information based on the color feature.
According to an embodiment, the sensor unit 40 may generate basic vehicle information based on the first ratio, the second ratio, the third ratio, and the vehicle color information obtained from the image processing of the valid vehicle area 504.
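The three ratios described above can be illustrated with a minimal sketch. The representation of particle objects as `(kind, pixel_count)` pairs and the dictionary keys are assumptions for illustration; the patent does not prescribe a data format.

```python
def compute_basic_vehicle_info(objects, area_px, dominant_color):
    """Compute the body/glass/foreign-matter ratios of a valid vehicle area.

    objects: list of (kind, pixel_count) tuples, kind in
             {"body", "glass", "foreign"}.
    area_px: total pixel count of the valid vehicle area.
    """
    totals = {"body": 0, "glass": 0, "foreign": 0}
    for kind, px in objects:
        totals[kind] += px
    return {
        "first_ratio": totals["body"] / area_px,     # body proportion
        "second_ratio": totals["glass"] / area_px,   # glass proportion
        "third_ratio": totals["foreign"] / area_px,  # foreign-matter proportion
        "color": dominant_color,
    }
```

The returned dictionary corresponds to the "basic vehicle information" assembled from the first, second, and third ratios and the vehicle color information.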
According to an embodiment, the sensor unit 40 may identify the target area image 501 based on location, query the environment information of the tunnel in which the vehicle is located, and generate auxiliary vehicle information reflecting the current environmental conditions (illuminance, etc.) inside the tunnel.
The sensor unit 40 may generate a feature vector 510 corresponding to the valid vehicle area 502 based on the basic vehicle information and the auxiliary vehicle information. The sensor unit 40 may apply the feature vector 510 to a pre-trained neural network 511 to obtain output information 512.
The neural network 511 may be trained to estimate vehicle state information from an input composed of the basic vehicle information, generated from features extracted from the image of the vehicle, and the auxiliary vehicle information, which reflects the influence of the environmental conditions in the tunnel where the image was captured.
The sensor unit 40 may generate vehicle state information corresponding to the valid vehicle area 502 based on the output information 512.
The output information 512 may be information including a matching degree for each scratch on the vehicle, or may be designed as variables describing the dented state of the vehicle. In addition, the output information 512 may be designed discretely according to vehicle classification; for example, the output nodes of the output layer of the neural network 511 may each correspond to one vehicle type, and the output nodes may output probability values for the respective vehicle-type classes. The training of the neural network 511 is described below with reference to FIG. 6.
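The inference step, in which the feature vector 510 is applied to the network and the output nodes emit per-class probabilities, can be sketched as follows. A single linear layer with a softmax is used here as a stand-in for the neural network 511, whose actual architecture is not specified; all weights and function names are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def infer_vehicle_state(feature_vector, weight_rows, biases):
    """One linear layer + softmax standing in for neural network 511:
    each output node yields a probability for one vehicle-state class."""
    logits = [sum(w * x for w, x in zip(row, feature_vector)) + b
              for row, b in zip(weight_rows, biases)]
    return softmax(logits)
```

With a feature vector emphasizing the first class, the first output node dominates, mirroring the per-class probability outputs described for the output layer.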
FIG. 6 is a diagram for explaining a training method employed to process the image of a vehicle under analysis in order to determine whether any vehicle has exterior damage due to an accident, according to an embodiment of the present invention.
According to an embodiment, a learning apparatus may train a neural network 604 for estimating the information required to obtain vehicle state information from a target area image. The learning apparatus may be an entity separate from the sensor unit 40, but is not limited thereto.
According to an embodiment, the learning apparatus may acquire labeled vehicle images 601. The learning apparatus may obtain information pre-labeled on each vehicle image by vehicle type; the vehicle images may be labeled according to pre-classified vehicle types.
According to an embodiment, the learning apparatus may generate basic vehicle information 602 based on the first ratio corresponding to the body object, the second ratio corresponding to the glass object, the third ratio corresponding to the foreign-matter object, and the vehicle color information, which are obtained from at least one of the color information, the texture information, and the appearance and size features of the particle objects in the labeled vehicle images 601. The learning apparatus may generate feature vectors 603 of the vehicle under analysis based on the basic vehicle information 602. Auxiliary vehicle information may also be employed in generating the feature vectors 603.
According to an embodiment, the learning apparatus may apply the feature vectors 603 to the neural network 604 to obtain output information 605. The learning apparatus may train the neural network 604 based on the output information 605 and labels 606. The learning apparatus may compute errors corresponding to the output information 605 and train the neural network 604 by optimizing the connection weights of the nodes in the neural network 604 so as to minimize those errors. The sensor unit 40 may obtain vehicle state information from the target area image by using the trained neural network 604.
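The error-minimizing loop of FIG. 6 (apply feature vectors, compare output against labels, adjust connection weights) can be sketched with a one-layer model and gradient descent on squared error. The linear model, learning rate, and toy data are assumptions for illustration only; the disclosure does not fix an architecture or optimizer.

```python
def train_step(weights, x, label, lr=0.1):
    """One update of the FIG. 6 loop: compute the output, compare it with
    the label, and adjust the connection weights to reduce squared error."""
    out = sum(w * xi for w, xi in zip(weights, x))
    err = out - label
    new_weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return new_weights, err ** 2

# Toy labeled data standing in for feature vectors 603 and labels 606.
weights = [0.0, 0.0]
for _ in range(200):
    for x, y in [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]:
        weights, _ = train_step(weights, x, y)
```

After the loop, the first weight has converged near the label 1.0, i.e. the errors have been driven toward zero, which is the stated training objective.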
FIG. 7 is a flowchart for explaining a process of controlling blinking according to the length of a congestion section, according to an embodiment.
Referring to FIG. 7, first, in step S701, when the sensor unit 40 detects that congestion has occurred on the road, it may check whether any vehicle has exterior damage based on the vehicle state information in order to determine the cause of the congestion, and generate a determination result as to whether an accident has occurred. If, according to the determination result, it is determined that no accident has occurred, the sensor unit 40 may detect that the congestion is simple congestion.
In step S702, the sensor unit 40 may detect a congestion section of the road in which the vehicle travel speed is at or below a reference speed.
In step S703, the control unit 30 may identify the simple-congestion start point, which is the beginning of the congestion section, and the simple-congestion end point, which is its end, and may calculate the congestion section length as the distance from the start point to the end point.
In step S704, the control unit 30 may check whether the congestion section length is shorter than a first reference distance. Here, the first reference distance may be set from time-of-day congestion patterns observed over a predetermined period.
For example, the control unit 30 may consult the time-of-day congestion patterns for the past month and, if the current time is determined to be 7 o'clock, set the first reference distance from the 7 o'clock congestion pattern.
In setting the first reference distance from the congestion pattern, the control unit 30 may set the first reference distance to a longer value as the probability of congestion increases.
For example, if the current time is 7 o'clock and the 7 o'clock congestion pattern shows an 80% probability of congestion, the control unit 30 may set the first reference distance to 20 m; if the current time is 8 o'clock and the 8 o'clock congestion pattern shows a 90% probability of congestion, it may set the first reference distance to 30 m.
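A simple mapping consistent with these two example points can be sketched as follows. Only the pairs 80% → 20 m and 90% → 30 m come from the text; the linear interpolation between and beyond them is purely an assumption, since the disclosure states only that a higher congestion probability yields a longer first reference distance.

```python
def first_reference_distance(congestion_prob):
    """Map a congestion probability (0.0-1.0) to the first reference
    distance in meters. Anchored on the text's examples
    (0.80 -> 20 m, 0.90 -> 30 m); linearity is an assumption."""
    return 20.0 + (congestion_prob - 0.80) * 100.0
```

The function is monotonically increasing in the congestion probability, as the text requires.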
If it is determined in step S704 that the congestion section length is shorter than the first reference distance, then in step S705 the control unit 30 may classify the congestion section as ordinary congestion.
For example, if the congestion section length is 30 m and the first reference distance is 50 m, the control unit 30 may classify the congestion section as ordinary congestion.
In step S706, the control unit 30 may control the light of a first color to blink at the intensity of a first strength and a first blink rate, from the simple-congestion start point, the beginning of the congestion section, back to a position four times the congestion section length to the rear.
For example, if the congestion section length is 30 m, the control unit 30 may control the light of the first color to blink at the first intensity and the first blink rate from the simple-congestion start point back to a position 120 m to the rear.
Meanwhile, if it is determined in step S704 that the congestion section length is longer than the first reference distance, then in step S707 the control unit 30 may classify the congestion section as exceptional congestion.
In step S708, the control unit 30 may check whether the congestion section length is shorter than a second reference distance. Here, the second reference distance may be set to a value longer than the first reference distance.
If it is determined in step S708 that the congestion section length is shorter than the second reference distance, then in step S709 the control unit 30 may classify the congestion section as serious congestion.
For example, if the congestion section length is 80 m and the second reference distance is 100 m, the control unit 30 may classify the congestion section as serious congestion.
In step S710, the control unit 30 may control the light of the first color to blink at a second intensity and the first blink rate, from the simple-congestion start point back to a position three times the congestion section length to the rear. Here, the second intensity may be set to a stronger light intensity than the first intensity; for example, if the first intensity is 10 lx, the second intensity may be set to 20 lx.
For example, if the congestion section length is 80 m, the control unit 30 may control the light of the first color to blink at the second intensity and the first blink rate from the simple-congestion start point back to a position 240 m to the rear.
Meanwhile, if it is determined in step S708 that the congestion section length is longer than the second reference distance, then in step S711 the control unit 30 may classify the congestion section as very serious congestion.
For example, if the congestion section length is 130 m and the second reference distance is 100 m, the control unit 30 may classify the congestion section as very serious congestion.
In step S712, the control unit 30 may control the light of the first color to blink at the second intensity and a second blink rate, from the simple-congestion start point back to a position twice the congestion section length to the rear. Here, the second blink rate may be set faster than the first blink rate; for example, if the first blink rate is one blink per second, the second blink rate may be set to two blinks per second.
For example, if the congestion section length is 130 m, the control unit 30 may control the light of the first color to blink at the second intensity and the second blink rate from the simple-congestion start point back to a position 260 m to the rear.
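The FIG. 7 decision flow (steps S704 through S712) can be condensed into one function. The severity labels, multipliers (4x, 3x, 2x), intensities, and blink rates follow the text; the tuple return format is a hypothetical convenience.

```python
def congestion_blink_plan(length_m, first_ref_m, second_ref_m):
    """Return (severity, blink_span_m, intensity, rate) per FIG. 7.

    length_m: congestion section length; first_ref_m / second_ref_m:
    the first and second reference distances (second > first).
    """
    if length_m < first_ref_m:                     # S704 -> S705/S706
        return ("ordinary", 4 * length_m, "first_intensity", "first_rate")
    if length_m < second_ref_m:                    # S708 -> S709/S710
        return ("serious", 3 * length_m, "second_intensity", "first_rate")
    return ("very_serious", 2 * length_m,          # S711/S712
            "second_intensity", "second_rate")
```

Note the trade-off encoded by the flowchart: as congestion worsens, the rearward multiplier shrinks (the section itself is already long) while intensity and blink rate increase.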
FIG. 8 is a flowchart for explaining a process of controlling blinking according to the number of vehicles involved in an accident, according to an embodiment.
Referring to FIG. 8, first, in step S801, when the sensor unit 40 detects that congestion has occurred on the road, it may check whether any vehicle has exterior damage based on the vehicle state information in order to determine the cause of the congestion, and generate a determination result as to whether an accident has occurred. If, according to the determination result, it is determined that an accident has occurred, the sensor unit 40 may detect that the road congestion is due to the accident.
In step S802, the sensor unit 40 may detect the number of vehicles involved in the accident.
In step S803, the control unit 30 may check whether the number of vehicles involved in the accident is smaller than a preset first reference value. Here, the first reference value may be set differently depending on the embodiment.
If it is determined in step S803 that the number of vehicles involved is smaller than the first reference value, then in step S804 the control unit 30 may classify the accident on the road as a small-scale accident.
For example, if two vehicles are involved and the first reference value is 3, the control unit 30 may classify the accident as a small-scale accident.
In step S805, the control unit 30 may control the light of a second color to blink at the first intensity and the first blink rate, from the accident point back to a first point located twice the first reference distance to the rear.
For example, if the first reference distance is 50 m and the accident on the road is classified as a small-scale accident, the control unit 30 may set a position 100 m behind the accident point as the first point, and control the light of the second color to blink at the first intensity and the first blink rate from the accident point to the first point.
Meanwhile, if it is determined in step S803 that the number of vehicles involved is greater than the first reference value, then in step S806 the control unit 30 may classify the accident on the road as a medium-to-large-scale accident.
In step S807, the control unit 30 may check whether the number of vehicles involved is smaller than a preset second reference value. Here, the second reference value may be set higher than the first reference value.
If it is determined in step S807 that the number of vehicles involved is smaller than the second reference value, then in step S808 the control unit 30 may classify the accident on the road as a medium-scale accident.
For example, if four vehicles are involved and the second reference value is 5, the control unit 30 may classify the accident as a medium-scale accident.
In step S809, the control unit 30 may control the light of the second color to blink at the second intensity and the first blink rate, from the accident point back to a second point located three times the first reference distance to the rear.
For example, if the first reference distance is 50 m and the accident on the road is classified as a medium-scale accident, the control unit 30 may set a position 150 m behind the accident point as the second point, and control the light of the second color to blink at the second intensity and the first blink rate from the accident point to the second point.
Meanwhile, if it is determined in step S807 that the number of vehicles involved is greater than the second reference value, then in step S810 the control unit 30 may classify the accident on the road as a large-scale accident.
For example, if six vehicles are involved and the second reference value is 5, the control unit 30 may classify the accident as a large-scale accident.
In step S811, the control unit 30 may control the light of the second color to blink at the second intensity and the second blink rate, from the accident point back to a third point located four times the first reference distance to the rear.
For example, if the first reference distance is 50 m and the accident on the road is classified as a large-scale accident, the control unit 30 may set a position 200 m behind the accident point as the third point, and control the light of the second color to blink at the second intensity and the second blink rate from the accident point to the third point.
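The FIG. 8 decision flow (steps S803 through S811) can likewise be condensed into one function. Scale labels and the 2x/3x/4x distance multipliers follow the text; the tuple return format is a hypothetical convenience. Note that, unlike the congestion case, the blink span here scales with the fixed first reference distance rather than with a measured section length.

```python
def accident_blink_plan(n_vehicles, first_ref_val, second_ref_val,
                        first_ref_dist_m):
    """Return (scale, blink_span_m, intensity, rate) per FIG. 8.

    n_vehicles: number of vehicles involved in the accident;
    first_ref_val < second_ref_val are the preset vehicle-count
    thresholds; first_ref_dist_m is the first reference distance.
    """
    if n_vehicles < first_ref_val:                 # S803 -> S804/S805
        return ("small", 2 * first_ref_dist_m, "first_intensity", "first_rate")
    if n_vehicles < second_ref_val:                # S807 -> S808/S809
        return ("medium", 3 * first_ref_dist_m, "second_intensity", "first_rate")
    return ("large", 4 * first_ref_dist_m,         # S810/S811
            "second_intensity", "second_rate")
```

The worked examples in the text (2, 4, and 6 vehicles with thresholds 3 and 5 and a 50 m first reference distance) map to 100 m, 150 m, and 200 m spans respectively.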
도 9는 일실시예에 따른 중규모 사고 시 거리에 따라 단계 별로 점멸을 제어하는 과정을 설명하기 위한 순서도이다.9 is a flowchart for explaining a process of controlling the blinking step by step according to a distance during a medium-scale accident according to an embodiment.
도 9를 참조하면, 먼저, S901 단계에서, 제어부(30)는 사고 발생 차량의 수가 제1 기준치 보다 크고 제2 기준치 보다 작은 것으로 확인되면, 도로에서 발생한 사고를 중규모 사고로 판단할 수 있다.Referring to FIG. 9 , first, in step S901 , when it is confirmed that the number of vehicles having an accident is greater than a first reference value and smaller than a second reference value, the controller 30 may determine the accident occurring on the road as a medium-scale accident.
S902 단계에서, 제어부(30)는 사고 발생 지점으로부터 제2 지점까지 제2 색의 빛이 제2 강도의 세기로 제1 점멸 속도를 통해 점멸되도록 제어할 수 있다.In step S902 , the controller 30 may control the light of the second color to blink through the first blinking speed with the intensity of the second intensity from the accident occurrence point to the second point.
S903 단계에서, 제어부(30)는 일정 시간이 지난 후 사고 수습이 완료되었는지 여부를 확인할 수 있다. 이때, 제어부(30)는 사고 발생으로 인해 생긴 정체가 해소된 것으로 확인되면, 사고 수습을 완료한 것으로 확인할 수 있다.In step S903, the control unit 30 may check whether the accident management is completed after a predetermined time has elapsed. At this time, when it is confirmed that the congestion caused by the accident has been resolved, the control unit 30 may confirm that the accident management has been completed.
When it is confirmed in step S903 that the accident response is complete, the controller 30 may control the second-color light blinking from the accident point to the second point so that it stops blinking.
If it is confirmed in step S903 that the accident response is not complete, then in step S904 the controller 30 may check how long the second-color light between the accident point and the second point has been blinking, and determine whether this blinking time exceeds a preset reference time. The reference time may be set differently depending on the embodiment.
If the blinking time is found in step S904 to be shorter than the reference time, the process returns to step S902, and the controller 30 may maintain the blinking state by keeping the second-color light blinking from the accident point to the second point at the second intensity and the first blinking speed.
If the blinking time is found in step S904 to be longer than the reference time, then in step S905 the controller 30 may control the second-color light to blink at the second intensity and the first blinking speed from the accident point to the first point, and at the first intensity and the first blinking speed from the first point to the second point.
In step S906, the controller 30 may check, after a predetermined time has elapsed, whether the accident response is complete.
When it is confirmed in step S906 that the accident response is complete, the controller 30 may control the second-color light blinking from the accident point to the second point so that it stops blinking.
If it is confirmed in step S906 that the accident response is not complete, the process returns to step S905, and the controller 30 may maintain the blinking state by keeping the second-color light blinking at the second intensity and the first blinking speed from the accident point to the first point, and at the first intensity and the first blinking speed from the first point to the second point.
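The S902–S906 loop can be sketched as a small control routine. The helper names (`is_resolved`, `set_blink`), the segment labels, and the numeric intensity/speed levels are illustrative stand-ins, not part of the patent:

```python
def control_medium_accident(is_resolved, set_blink, check_interval=1, reference_time=10):
    """Drive the second-color blinking for a medium-scale accident (FIG. 9).

    is_resolved() -> bool: True once the accident response is complete.
    set_blink(segment, intensity, speed): hypothetical actuator call;
    intensity/speed of 0 means the segment stops blinking.
    """
    elapsed = 0
    # S902: blink from the accident point to the second point,
    # second intensity, first blinking speed
    set_blink(("accident", "p2"), 2, 1)
    while not is_resolved():  # S903 / S906
        elapsed += check_interval
        if elapsed > reference_time:  # S904: reference time exceeded
            # S905: keep full intensity near the accident,
            # lower intensity farther away
            set_blink(("accident", "p1"), 2, 1)
            set_blink(("p1", "p2"), 1, 1)
    set_blink(("accident", "p2"), 0, 0)  # accident resolved: stop blinking
```

In a real deployment the loop would be driven by a timer rather than busy-waiting; the sketch only mirrors the flowchart's branching.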
FIG. 10 is a flowchart illustrating a process of controlling blinking in stages according to distance in the event of a large-scale accident, according to an embodiment.
Referring to FIG. 10, first, in step S1001, when the number of vehicles involved in the accident is confirmed to be greater than a second reference value, the controller 30 may classify the accident on the road as a large-scale accident.
In step S1002, the controller 30 may control the second-color light to blink from the accident point to the third point at the second intensity and the second blinking speed.
In step S1003, the controller 30 may check, after a predetermined time has elapsed, whether the accident response is complete. Here, the controller 30 may regard the accident response as complete when the congestion caused by the accident is confirmed to have cleared.
When it is confirmed in step S1003 that the accident response is complete, the controller 30 may control the second-color light blinking from the accident point to the third point so that it stops blinking.
If it is confirmed in step S1003 that the accident response is not complete, then in step S1004 the controller 30 may check how long the second-color light between the accident point and the third point has been blinking, and determine whether this blinking time exceeds a preset reference time. The reference time may be set differently depending on the embodiment.
If the blinking time is found in step S1004 to be shorter than the reference time, the process returns to step S1002, and the controller 30 may maintain the blinking state by keeping the second-color light blinking from the accident point to the third point at the second intensity and the second blinking speed.
If the blinking time is found in step S1004 to be longer than the reference time, then in step S1005 the controller 30 may control the second-color light to blink at the second intensity and the second blinking speed from the accident point to the first point, at the second intensity and the first blinking speed from the first point to the second point, and at the first intensity and the first blinking speed from the second point to the third point.
In step S1006, the controller 30 may check, after a predetermined time has elapsed, whether the accident response is complete.
When it is confirmed in step S1006 that the accident response is complete, the controller 30 may control the second-color light blinking from the accident point to the third point so that it stops blinking.
If it is confirmed in step S1006 that the accident response is not complete, the process returns to step S1005, and the controller 30 may maintain the blinking state by keeping the second-color light blinking at the second intensity and the second blinking speed from the accident point to the first point, at the second intensity and the first blinking speed from the first point to the second point, and at the first intensity and the first blinking speed from the second point to the third point.
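The segment plans of FIGS. 9 and 10 reduce to a table of (segment, intensity, blinking speed) triples, which can be expressed as a pure function. The segment names and the numbers 1/2 (standing for the first/second intensity and blinking speed) are illustrative:

```python
def blink_plan(scale, past_reference_time):
    """Return [(segment, intensity, blink_speed), ...] per FIGS. 9 and 10.

    scale: "medium" (FIG. 9) or "large" (FIG. 10). 1 and 2 stand in for
    the first/second intensity and first/second blinking speed.
    """
    if scale == "large":
        if not past_reference_time:  # S1002: one uniform zone to the third point
            return [(("accident", "p3"), 2, 2)]
        return [  # S1005: step the urgency down with distance from the accident
            (("accident", "p1"), 2, 2),
            (("p1", "p2"), 2, 1),
            (("p2", "p3"), 1, 1),
        ]
    if not past_reference_time:  # S902: one uniform zone to the second point
        return [(("accident", "p2"), 2, 1)]
    return [  # S905
        (("accident", "p1"), 2, 1),
        (("p1", "p2"), 1, 1),
    ]
```

Separating the plan from the timing loop makes the distance-based grading easy to inspect: after the reference time, intensity and speed both decay with distance from the accident.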
FIG. 11 is a flowchart illustrating a process of classifying the surface of a vehicle according to an embodiment.
First, the controller 30 may select, as the vehicle to be analyzed, a first vehicle that is any one of a plurality of vehicles located in a congested section of the road.
In step S1301, the controller 30 may acquire 3D data on the surface of the first vehicle through a LiDAR sensor. Here, the 3D data is a 3D image of the surface of the first vehicle. To this end, the controller 30 may be connected, by wire or wirelessly, to a device equipped with the LiDAR sensor.
In step S1302, the controller 30 may acquire 2D data on the surface of the first vehicle through a camera. Here, the 2D data is a 2D image of the surface of the first vehicle. To this end, the controller 30 may be connected, by wire or wirelessly, to a device equipped with the camera.
In step S1303, the controller 30 may separate out the union region of the 2D data and the 3D data, and extract first data in which the 2D data and the 3D data are merged.
Specifically, the controller 30 may compare the 2D data with the 3D data to identify the union region where the two overlap, separate that region from the 2D data and from the 3D data, and merge the two separated regions to extract the first data. Here, the first data may consist of four channels: three channels of 2D data representing RGB values and one channel of data representing 3D depth values.
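The four-channel first data can be sketched with NumPy, assuming the RGB image and the LiDAR depth map have already been cropped to their common region and resampled to the same resolution (the patent does not specify that registration step):

```python
import numpy as np

def merge_rgb_depth(rgb, depth):
    """Stack an HxWx3 RGB image and an HxW depth map into HxWx4 first data:
    channels 0-2 carry the RGB values, channel 3 the 3D depth value."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("RGB and depth must cover the same region")
    return np.dstack([rgb, depth])

rgb = np.zeros((4, 4, 3), dtype=np.float32)    # toy 4x4 RGB patch
depth = np.ones((4, 4), dtype=np.float32)      # toy 4x4 depth patch
first_data = merge_rgb_depth(rgb, depth)       # shape (4, 4, 4)
```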
The controller 30 may encode the first data to generate a first input signal.
Specifically, the controller 30 may generate the first input signal by encoding the pixels of the first data as color information. The color information may include, but is not limited to, RGB color information, brightness information, saturation information, and depth information. The controller 30 may convert the color information into numerical values and encode the first data in the form of a data sheet containing those values.
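One minimal reading of the data-sheet encoding is a flattening of the per-pixel values into rows, one pixel per row; the actual sheet layout is not specified in the text, so this is only an assumption:

```python
import numpy as np

def encode_to_sheet(first_data):
    """Flatten HxWx4 first data into an (H*W)x4 numeric data sheet:
    one row per pixel, columns R, G, B, depth (layout is an assumption)."""
    h, w, c = first_data.shape
    return first_data.reshape(h * w, c)

sheet = encode_to_sheet(np.zeros((4, 4, 4)))  # 16 pixel rows, 4 columns each
```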
The controller 30 may input the first input signal to a first artificial neural network trained in advance within the road driving safety device 100.
The first artificial neural network according to an embodiment consists of a feature extraction neural network and a classification neural network. The feature extraction neural network processes the input signal by stacking convolution layers and pooling layers in turn. A convolution layer comprises a convolution operation, a convolution filter, and an activation function. The size of the convolution filter is adjusted to the matrix size of the target input, but a 9x9 matrix is typically used. The activation function is typically, but not limited to, a ReLU function, a sigmoid function, or a tanh function. A pooling layer reduces the matrix size of its input by grouping the pixels of a given region and extracting a representative value. The pooling operation typically, but not necessarily, uses the average or the maximum value, and is performed over a square matrix, typically 9x9. The convolution and pooling layers alternate repeatedly until the input becomes sufficiently small while preserving its distinguishing features.
According to an embodiment, the classification neural network has hidden layers and an output layer. The classification neural network of the first artificial neural network, which classifies the roughness grade of the surface of the first vehicle, consists of up to five hidden layers containing no more than 50 hidden-layer nodes in total. The activation function of the hidden layers is typically, but not limited to, a ReLU function, a sigmoid function, or a tanh function. The classification neural network has a single output-layer node, to which it outputs the classification of the surface of the first vehicle. The first artificial neural network is described in detail below with reference to FIG. 12.
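A toy forward pass matching the described shapes (9x9 convolution with ReLU, 9x9 average pooling, then a small fully connected classifier ending in a single output node) might look as follows. The layer count, sizes, and random weights are illustrative only; a real implementation would use a trained deep-learning framework model:

```python
import numpy as np

def conv9_relu(x, k):
    """Valid 9x9 convolution with one filter k, followed by ReLU."""
    h, w = x.shape[0] - 8, x.shape[1] - 8
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + 9, j:j + 9] * k)
    return np.maximum(out, 0.0)

def pool9(x):
    """Non-overlapping 9x9 pooling using the mean as the representative value."""
    h, w = x.shape[0] // 9, x.shape[1] // 9
    return x[:h * 9, :w * 9].reshape(h, 9, w, 9).mean(axis=(1, 3))

def classify(features, weights, biases):
    """Small fully connected classifier ending in one output node."""
    a = features.ravel()
    for w_, b in zip(weights, biases):
        a = np.maximum(a @ w_ + b, 0.0)
    return a  # single value, rounded to a roughness grade in practice

rng = np.random.default_rng(0)
x = rng.random((45, 45))                       # toy single-channel input
f = pool9(conv9_relu(x, rng.random((9, 9))))   # 45x45 -> 37x37 -> 4x4
w1 = rng.random((16, 10))                      # 10 hidden nodes (<= 50 total)
w2 = rng.random((10, 1))                       # single output node
out = classify(f, [w1, w2], [np.zeros(10), np.zeros(1)])
```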
The controller 30 may obtain a first output signal based on the result of the input to the first artificial neural network.
In step S1304, the controller 30 may generate a first classification result for the surface of the first vehicle based on the first output signal. Here, the first classification result may include information on the grade into which the surface of the first vehicle is classified.
For example, on checking the output value of the first output signal, the controller 30 may generate a first classification result indicating that the surface of the first vehicle corresponds to grade 1 when the output value is 1, and to grade 2 when the output value is 2. The higher the grade, the rougher the surface of the first vehicle.
In step S1105, the controller 30 may analyze the first data to detect cracks on the surface of the first vehicle. In crack detection, only portions confirmed through image analysis to exceed a certain size may be counted as cracks on the surface of the first vehicle.
In step S1106, the controller 30 may examine the cracks on the surface of the first vehicle region by region, distinguishing normal regions from damaged regions.
Specifically, the controller 30 may divide the first data into a plurality of regions, such as a first region and a second region, and count the cracks detected in each region; a region in which fewer cracks than a first set value are detected is classified as a normal region, and a region in which the number of detected cracks is equal to or greater than the first set value is classified as a damaged region. The first set value may be set differently depending on the embodiment.
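Step S1106 amounts to thresholding per-region crack counts against the first set value. The region names and the threshold below are illustrative:

```python
def split_regions(crack_counts, first_set_value=3):
    """crack_counts: {region: number of cracks detected in that region}.
    Regions below the first set value are normal; the rest are damaged."""
    normal = [r for r, n in crack_counts.items() if n < first_set_value]
    damaged = [r for r, n in crack_counts.items() if n >= first_set_value]
    return normal, damaged

normal, damaged = split_regions({"region1": 5, "region2": 0, "region3": 1})
```

The damaged regions would then be deleted from the first data in step S1107 to form the second data.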
In step S1107, the controller 30 may extract second data from the first data by deleting the damaged regions.
For example, when the image in the first data consists of a first region, a second region, and a third region, and the first region is classified as a damaged region while the second and third regions are classified as normal regions, the controller 30 may extract an image containing only the second and third regions as the second data.
The controller 30 may encode the second data to generate a second input signal.
Specifically, the controller 30 may generate the second input signal by encoding the pixels of the second data as color information. The color information may include, but is not limited to, RGB color information, brightness information, saturation information, and depth information. The controller 30 may convert the color information into numerical values and encode the second data in the form of a data sheet containing those values.
The controller 30 may input the second input signal to a second artificial neural network trained in advance within the road driving safety device 100.
The second artificial neural network according to an embodiment consists of a feature extraction neural network and a classification neural network. The feature extraction neural network processes the input signal by stacking convolution layers and pooling layers in turn. A convolution layer comprises a convolution operation, a convolution filter, and an activation function. The size of the convolution filter is adjusted to the matrix size of the target input, but a 9x9 matrix is typically used. The activation function is typically, but not limited to, a ReLU function, a sigmoid function, or a tanh function. A pooling layer reduces the matrix size of its input by grouping the pixels of a given region and extracting a representative value. The pooling operation typically, but not necessarily, uses the average or the maximum value, and is performed over a square matrix, typically 9x9. The convolution and pooling layers alternate repeatedly until the input becomes sufficiently small while preserving its distinguishing features.
According to an embodiment, the classification neural network has hidden layers and an output layer. The classification neural network of the second artificial neural network, which classifies the roughness grade of the surface of the first vehicle, consists of up to five hidden layers containing no more than 50 hidden-layer nodes in total. The activation function of the hidden layers is typically, but not limited to, a ReLU function, a sigmoid function, or a tanh function. The classification neural network has a single output-layer node, to which it outputs the classification of the surface of the first vehicle. The second artificial neural network is described in detail below with reference to FIG. 12.
The controller 30 may obtain a second output signal based on the result of the input to the second artificial neural network.
In step S1108, the controller 30 may generate a second classification result for the surface of the first vehicle based on the second output signal. Here, the second classification result may include information on the grade into which the surface of the first vehicle is classified.
For example, on checking the output value of the second output signal, the controller 30 may generate a second classification result indicating that the surface of the first vehicle corresponds to grade 1 when the output value is 1, and to grade 2 when the output value is 2.
In step S1109, the controller 30 may set a final classification result for the surface of the first vehicle based on the first classification result and the second classification result.
Specifically, when the first classification result and the second classification result are the same, the controller 30 may set either one of them as the final classification result for the surface of the first vehicle.
Using the final classification result, the controller 30 may determine whether there is a problem with the exterior of the first vehicle and, through this, whether the first vehicle has been involved in an accident.
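Step S1109 can be sketched as follows. The text only specifies the case where the two classification results agree, so leaving disagreement undecided here is an assumption:

```python
def final_classification(first_result, second_result):
    """Set the final roughness grade when the two classification results
    agree; on disagreement, return None (assumption, not in the text)."""
    if first_result == second_result:
        return first_result
    return None

grade = final_classification(2, 2)  # both networks agree on grade 2
```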
FIG. 12 is a diagram illustrating an artificial neural network according to an embodiment.
The artificial neural network 1200 according to an embodiment may be either the first artificial neural network or the second artificial neural network. As the first artificial neural network, it takes as input the first input signal generated by encoding the first data, and outputs information on the roughness grade into which the surface of the first vehicle is classified. As the second artificial neural network, it takes as input the second input signal generated by encoding the second data, and outputs information on the roughness grade into which the surface of the first vehicle is classified.
Encoding according to an embodiment may be performed by storing the color information of each pixel of the image in the form of a numerical data sheet; the color information may include, but is not limited to, the RGB color, brightness, saturation, and depth information of each pixel.
According to an embodiment, the artificial neural network 1200 consists of a feature extraction neural network 1210 and a classification neural network 1220. The feature extraction neural network 1210 separates the region of the first vehicle from the background region in the image, and the classification neural network 1220 determines the roughness grade into which the surface of the first vehicle in the image is classified.
The feature extraction neural network 1210 may distinguish the region of the first vehicle from the background region by treating, as the boundary between the two, clusters of pixels for which, in the data sheet of the encoded input signal, at least six of the eight pixels surrounding a given pixel show a change of 30% or more in their color-information values; however, the method is not limited thereto.
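The 6-of-8-neighbours rule can be sketched directly. Single-channel values stand in for the full color information, and the relative-change definition below is an assumption, since the text does not spell out how the 30% change is measured:

```python
import numpy as np

def boundary_mask(img, ratio=0.3, min_neighbours=6):
    """Mark a pixel as boundary when at least `min_neighbours` of its 8
    neighbours differ from it by `ratio` (30%) or more, relative to the
    pixel's own value (assumed definition of 'change')."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            changed = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    if abs(img[i + di, j + dj] - c) / max(abs(c), 1e-9) >= ratio:
                        changed += 1
            mask[i, j] = changed >= min_neighbours
    return mask

img = np.ones((5, 5))
img[2, 2] = 10.0  # one pixel differing sharply from all 8 neighbours
mask = boundary_mask(img)
```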
The feature extraction neural network 1210 processes the input signal by stacking convolution layers and pooling layers in turn. A convolution layer comprises a convolution operation, a convolution filter, and an activation function. The size of the convolution filter is adjusted to the matrix size of the target input, but a 9x9 matrix is typically used. The activation function is typically, but not limited to, a ReLU function, a sigmoid function, or a tanh function. A pooling layer reduces the matrix size of its input by grouping the pixels of a given region and extracting a representative value. The pooling operation typically, but not necessarily, uses the average or the maximum value, and is performed over a square matrix, typically 9x9. The convolution and pooling layers alternate repeatedly until the input becomes sufficiently small while preserving its distinguishing features.
The classification neural network 1220 examines the surface of the first-vehicle region separated from the background by the feature extraction neural network 1210, checks its similarity to predefined surface states for each roughness grade, and thereby determines the roughness grade into which the surface is classified. For comparison with the per-grade surface states, information stored in the database of the road driving safety device 100 may be used.
The classification neural network 1220 has hidden layers and an output layer; it consists of up to five hidden layers containing no more than 50 hidden-layer nodes in total, and the activation function of the hidden layers is typically, but not limited to, a ReLU function, a sigmoid function, or a tanh function.
The classification neural network 1220 may include only a single output-layer node.
The output of the classification neural network 1220 is a value indicating the roughness grade into which the surface of the first vehicle is classified. For example, an output value of 1 indicates that the surface of the first vehicle corresponds to grade 1, and an output value of 2 indicates that it corresponds to grade 2.
According to an embodiment, the artificial neural network 1200 may learn from a first learning signal generated from the corrected answer entered by a user when the user finds a problem with the output of the artificial neural network 1200. A problem with the output of the artificial neural network 1200 may mean a case in which the network outputs a value classifying the surface of the first vehicle into the wrong roughness grade.
The first learning signal according to an embodiment is created from the error between the correct answer and the output value; depending on the case, delta-rule SGD, batch updating, or the backpropagation algorithm may be used. The artificial neural network 1200 learns by adjusting its existing weights according to the first learning signal, and momentum may be used where appropriate. A cost function may be used to compute the error; a cross-entropy function may be used as the cost function. The training of the artificial neural network 1200 is described below with reference to FIG. 13.
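The update described here (cross-entropy error between the corrected answer and the network output, driving SGD with momentum) can be sketched with a single softmax layer standing in for the full network; the dimensions and hyperparameters are illustrative:

```python
import numpy as np

def sgd_step(w, velocity, x, label, lr=0.05, momentum=0.9):
    """One SGD-with-momentum step on the cross-entropy between the network
    output and the user-corrected answer (the first learning signal)."""
    logits = x @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()                 # softmax output of the stand-in network
    loss = -np.log(p[label])     # cross-entropy cost against the corrected answer
    grad_logits = p.copy()
    grad_logits[label] -= 1.0    # d(loss)/d(logits)
    velocity = momentum * velocity - lr * np.outer(x, grad_logits)
    return w + velocity, velocity, loss

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))      # toy weights: 4 features, 3 roughness grades
v = np.zeros_like(w)
x = rng.normal(size=4)           # one encoded sample
losses = []
for _ in range(30):              # repeated corrections shrink the error
    w, v, loss = sgd_step(w, v, x, label=1)
    losses.append(loss)
```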
도 13은 일실시예에 따른 인공 신경망을 학습하는 방법을 설명하기 위한 도면이다.13 is a diagram for explaining a method of learning an artificial neural network according to an embodiment.
일실시예에 따르면, 학습 장치는 인공 신경망(1200)을 학습시킬 수 있다. 학습 장치는 도로 주행 안전 장치(100)와 다른 별개의 주체일 수 있지만, 이에 제한되는 것은 아니다.According to an embodiment, the learning apparatus may train the artificial neural network 1200 . The learning apparatus may be a separate entity different from the road driving safety apparatus 100 , but is not limited thereto.
일실시예에 따르면, 인공 신경망(1200)은 트레이닝 샘플들이 입력되는 입력 레이어와 트레이닝 출력들을 출력하는 출력 레이어를 포함하고, 트레이닝 출력들과 제1 레이블들 사이의 차이에 기초하여 학습될 수 있다. 여기서, 제1 레이블들은 거칠기 단계별로 등록되어 있는 대표 이미지에 기초하여 정의될 수 있다. 인공 신경망(1200)은 복수의 노드들의 그룹으로 연결되어 있고, 연결된 노드들 사이의 가중치들과 노드들을 활성화시키는 활성화 함수에 의해 정의된다. According to an embodiment, the artificial neural network 1200 includes an input layer to which training samples are input and an output layer to output training outputs, and may be learned based on a difference between the training outputs and the first labels. Here, the first labels may be defined based on a representative image registered for each roughness level. The artificial neural network 1200 is connected as a group of a plurality of nodes, and is defined by weights between the connected nodes and an activation function that activates the nodes.
학습 장치는 GD(Gradient Decent) 기법 또는 SGD(Stochastic Gradient Descent) 기법을 이용하여 인공 신경망(1200)을 학습시킬 수 있다. 학습 장치는 인공 신경망(1200)의 출력들 및 레이블들 의해 설계된 손실 함수를 이용할 수 있다.The learning apparatus may train the artificial neural network 1200 using a Gradient Decent (GD) technique or a Stochastic Gradient Descent (SGD) technique. The learning apparatus may use a loss function designed by the outputs and labels of the artificial neural network 1200 .
학습 장치는 미리 정의된 손실 함수(loss function)을 이용하여 트레이닝 에러를 계산할 수 있다. 손실 함수는 레이블, 출력 및 파라미터를 입력 변수로 미리 정의될 수 있고, 여기서 파라미터는 인공 신경망(1200) 내 가중치들에 의해 설정될 수 있다. 예를 들어, 손실 함수는 MSE(Mean Square Error) 형태, 엔트로피(entropy) 형태 등으로 설계될 수 있는데, 손실 함수가 설계되는 실시예에는 다양한 기법 또는 방식이 채용될 수 있다.The learning apparatus may calculate a training error using a predefined loss function. The loss function may be predefined with a label, an output, and a parameter as input variables, where the parameter may be set by weights in the artificial neural network 1200 . For example, the loss function may be designed in a Mean Square Error (MSE) form, an entropy form, or the like, and various techniques or methods may be employed in an embodiment in which the loss function is designed.
학습 장치는 역전파(backpropagation) 기법을 이용하여 트레이닝 에러에 영향을 주는 가중치들을 찾아낼 수 있다. 여기서, 가중치들은 인공 신경망(1200) 내 노드들 사이의 관계들이다. 학습 장치는 역전파 기법을 통해 찾아낸 가중치들을 최적화시키기 위해 레이블들 및 출력들을 이용한 SGD 기법을 이용할 수 있다. 예를 들어, 학습 장치는 레이블들, 출력들 및 가중치들에 기초하여 정의된 손실 함수의 가중치들을 SGD 기법을 이용하여 갱신할 수 있다.The learning apparatus may find weights affecting the training error by using a backpropagation technique. Here, the weights are relationships between nodes in the artificial neural network 1200 . The learning apparatus may use the SGD technique using labels and outputs to optimize the weights found through the backpropagation technique. For example, the learning apparatus may update the weights of the loss function defined based on the labels, outputs, and weights using the SGD technique.
일실시예에 따르면, 학습 장치는 도로 주행 안전 장치(100)의 데이터베이스로부터 레이블드 트레이닝 거칠기 단계별 대표 이미지들(1301)을 획득할 수 있다. 학습 장치는 거칠기 단계별 대표 이미지들(1301)에 각각 미리 레이블링된 정보를 획득할 수 있는데, 거칠기 단계별 대표 이미지들(1301)은 미리 분류된 거칠기 단계에 따라 레이블링될 수 있다.According to an embodiment, the learning apparatus may obtain the labeled training representative images 1301 for each roughness level from the database of the road driving safety apparatus 100. The learning apparatus may obtain the information pre-labeled on each of the representative images 1301 for the roughness levels, which may be labeled according to their pre-classified roughness levels.
일실시예에 따르면, 학습 장치는 1000개의 레이블드 트레이닝 거칠기 단계별 대표 이미지들(1301)을 획득할 수 있으며, 레이블드 트레이닝 거칠기 단계별 대표 이미지들(1301)에 기초하여 제1 트레이닝 거칠기 단계별 벡터들(1302)을 생성할 수 있다. 제1 트레이닝 거칠기 단계별 벡터들(1302)을 추출하는데는 다양한 방식이 채용될 수 있다.According to an embodiment, the learning apparatus may acquire 1,000 labeled training representative images 1301 for the roughness levels, and may generate first training roughness-level vectors 1302 based on them. Various methods may be employed to extract the first training roughness-level vectors 1302.
일실시예에 따르면, 학습 장치는 제1 트레이닝 거칠기 단계별 벡터들(1302)을 인공 신경망(1200)에 적용하여 제1 트레이닝 출력들(1303)을 획득할 수 있다. 학습 장치는 제1 트레이닝 출력들(1303)과 제1 레이블들(1304)에 기초하여 인공 신경망(1200)을 학습시킬 수 있다. 학습 장치는 제1 트레이닝 출력들(1303)에 대응하는 트레이닝 에러들을 계산하고, 그 트레이닝 에러들을 최소화하기 위해 인공 신경망(1200) 내 노드들의 연결 관계를 최적화하여 인공 신경망(1200)을 학습시킬 수 있다.According to an embodiment, the learning apparatus may obtain the first training outputs 1303 by applying the first training roughness-level vectors 1302 to the artificial neural network 1200. The learning apparatus may train the artificial neural network 1200 based on the first training outputs 1303 and the first labels 1304, by calculating the training errors corresponding to the first training outputs 1303 and optimizing the connection relationships of the nodes in the artificial neural network 1200 to minimize those errors.
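The loop described in the passages above — a forward pass over the training vectors, an MSE error against the labels, backpropagation, and SGD weight updates — can be sketched as below. The network size, synthetic data, and learning rate are illustrative assumptions, not the application's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 training vectors (cf. the 1000 labeled representative images),
# reduced here to 4 synthetic features; the label is learnable from feature 0.
X = rng.normal(size=(1000, 4))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]                     # one-hot labels

W1 = rng.normal(scale=0.1, size=(4, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 2))   # hidden -> output weights
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1)                   # hidden activations
    return h, h @ W2                      # output scores

initial_error = np.mean((forward(X)[1] - Y) ** 2)   # MSE training error

for epoch in range(50):
    for i in rng.permutation(len(X)):     # stochastic: one sample per update
        x, y = X[i:i+1], Y[i:i+1]
        h, out = forward(x)
        err = out - y                     # dMSE/dout (up to a constant)
        grad_W2 = h.T @ err               # backpropagate the error to each weight
        grad_W1 = x.T @ ((err @ W2.T) * (1.0 - h ** 2))
        W2 -= lr * grad_W2                # SGD updates
        W1 -= lr * grad_W1

final_error = np.mean((forward(X)[1] - Y) ** 2)
```

Because the synthetic label is actually predictable from the input, the training error decreases over the epochs, which is the behavior the passage describes for the roughness classifier.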
이상에서 설명된 실시예들은 하드웨어 구성요소, 소프트웨어 구성요소, 및/또는 하드웨어 구성요소 및 소프트웨어 구성요소의 조합으로 구현될 수 있다. 예를 들어, 실시예들에서 설명된 장치, 방법 및 구성요소는, 예를 들어, 프로세서, 콘트롤러, ALU(arithmetic logic unit), 디지털 신호 프로세서(digital signal processor), 마이크로컴퓨터, FPGA(field programmable gate array), PLU(programmable logic unit), 마이크로프로세서, 또는 명령(instruction)을 실행하고 응답할 수 있는 다른 어떠한 장치와 같이, 하나 이상의 범용 컴퓨터 또는 특수 목적 컴퓨터를 이용하여 구현될 수 있다. 처리 장치는 운영 체제(OS) 및 운영 체제 상에서 수행되는 하나 이상의 소프트웨어 애플리케이션을 수행할 수 있다. 또한, 처리 장치는 소프트웨어의 실행에 응답하여, 데이터를 접근, 저장, 조작, 처리 및 생성할 수도 있다. 이해의 편의를 위하여, 처리 장치는 하나가 사용되는 것으로 설명된 경우도 있지만, 해당 기술분야에서 통상의 지식을 가진 자는, 처리 장치가 복수 개의 처리 요소(processing element) 및/또는 복수 유형의 처리 요소를 포함할 수 있음을 알 수 있다. 예를 들어, 처리 장치는 복수 개의 프로세서 또는 하나의 프로세서 및 하나의 콘트롤러를 포함할 수 있다. 또한, 병렬 프로세서(parallel processor)와 같은, 다른 처리 구성(processing configuration)도 가능하다.The embodiments described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatuses, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may run an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of software. For convenience of understanding, the processing device is sometimes described as a single device, but one of ordinary skill in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
실시예에 따른 방법은 다양한 컴퓨터 수단을 통하여 수행될 수 있는 프로그램 명령 형태로 구현되어 컴퓨터 판독 가능 매체에 기록될 수 있다. 컴퓨터 판독 가능 매체는 프로그램 명령, 데이터 파일, 데이터 구조 등을 단독으로 또는 조합하여 포함할 수 있다. 매체에 기록되는 프로그램 명령은 실시예를 위하여 특별히 설계되고 구성된 것들이거나 컴퓨터 소프트웨어 당업자에게 공지되어 사용 가능한 것일 수도 있다. 컴퓨터 판독 가능 기록 매체의 예에는 하드 디스크, 플로피 디스크 및 자기 테이프와 같은 자기 매체(magnetic media), CD-ROM, DVD와 같은 광기록 매체(optical media), 플롭티컬 디스크(floptical disk)와 같은 자기-광 매체(magneto-optical media), 및 롬(ROM), 램(RAM), 플래시 메모리 등과 같은 프로그램 명령을 저장하고 수행하도록 특별히 구성된 하드웨어 장치가 포함된다. 프로그램 명령의 예에는 컴파일러에 의해 만들어지는 것과 같은 기계어 코드뿐만 아니라 인터프리터 등을 사용해서 컴퓨터에 의해서 실행될 수 있는 고급 언어 코드를 포함한다. 상술한 하드웨어 장치는 실시예의 동작을 수행하기 위해 하나 이상의 소프트웨어 모듈로서 작동하도록 구성될 수 있으며, 그 역도 마찬가지이다.The method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software. Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
소프트웨어는 컴퓨터 프로그램(computer program), 코드(code), 명령(instruction), 또는 이들 중 하나 이상의 조합을 포함할 수 있으며, 원하는 대로 동작하도록 처리 장치를 구성하거나 독립적으로 또는 결합적으로(collectively) 처리 장치를 명령할 수 있다. 소프트웨어 및/또는 데이터는, 처리 장치에 의하여 해석되거나 처리 장치에 명령 또는 데이터를 제공하기 위하여, 어떤 유형의 기계, 구성요소(component), 물리적 장치, 가상 장치(virtual equipment), 컴퓨터 저장 매체 또는 장치, 또는 전송되는 신호 파(signal wave)에 영구적으로, 또는 일시적으로 구체화(embody)될 수 있다. 소프트웨어는 네트워크로 연결된 컴퓨터 시스템 상에 분산되어서, 분산된 방법으로 저장되거나 실행될 수도 있다. 소프트웨어 및 데이터는 하나 이상의 컴퓨터 판독 가능 기록 매체에 저장될 수 있다.The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively. In order to be interpreted by the processing device or to provide instructions or data to the processing device, the software and/or data may be permanently or temporarily embodied in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored in one or more computer-readable recording media.
이상과 같이 실시예들이 비록 한정된 도면에 의해 설명되었으나, 해당 기술분야에서 통상의 지식을 가진 자라면 이를 기초로 다양한 기술적 수정 및 변형을 적용할 수 있다. 예를 들어, 설명된 기술들이 설명된 방법과 다른 순서로 수행되거나, 및/또는 설명된 시스템, 구조, 장치, 회로 등의 구성요소들이 설명된 방법과 다른 형태로 결합 또는 조합되거나, 다른 구성요소 또는 균등물에 의하여 대치되거나 치환되더라도 적절한 결과가 달성될 수 있다.Although the embodiments have been described above with reference to limited drawings, those of ordinary skill in the art may apply various technical modifications and variations based on the above description. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or components of the described system, structure, apparatus, circuit, and the like are coupled or combined in a form different from the described method, or are replaced or substituted by other components or equivalents.
그러므로, 다른 구현들, 다른 실시예들 및 특허청구범위와 균등한 것들도 후술하는 청구범위의 범위에 속한다.Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.

Claims (5)

  1. 도로의 중앙분리대와 가드레일 부분에 장착되어서 차량과 접촉이 없도록 하여 일정 구간에 일정 간격으로 도로에 설치되는 안전 케이스;a safety case installed on the road at regular intervals in a certain section so that it does not come into contact with the vehicle by being mounted on the median section and the guard rail portion of the road;
    차량 진행 방향의 전방으로 빛을 조사하도록 상기 안전 케이스에 설치되는 조명부;a lighting unit installed in the safety case to irradiate light in a forward direction of the vehicle;
    차량 진행 방향의 후방으로 빛이 점멸되도록 상기 안전 케이스에 설치되는 경고부;a warning unit installed in the safety case so that the light flickers in the rear of the vehicle traveling direction;
    도로에서 주행중인 차량의 움직임을 감지하고, 차량의 움직임을 통해 도로 주행에 정체가 있는지 여부를 감지하여, 도로 주행에 정체가 있는 것으로 감지되면, 사고가 발생하였는지 여부 및 차량 증가로 인해 단순 정체가 발생하였는지 여부를 감지하는 센서부; 및a sensor unit that detects the movement of vehicles traveling on the road, detects from that movement whether the road is congested and, when congestion is detected, determines whether an accident has occurred or whether simple congestion has occurred due to an increase in vehicles; and
    상기 센서부에 의해 도로에서 차량이 주행중인 것으로 감지되면, 상기 조명부에서 빛이 조사되도록 제어하고, 상기 센서부에 의해 단순 정체가 발생한 것으로 감지되면, 상기 경고부에서 제1 색으로 빛이 점멸되도록 제어하고, 상기 센서부에 의해 사고가 발생한 것으로 감지되면, 상기 경고부에서 제2 색으로 빛이 점멸되도록 제어하는 제어부를 포함하며,and a control unit that controls the lighting unit to emit light when the sensor unit detects a vehicle traveling on the road, controls the warning unit to blink light in a first color when the sensor unit detects that simple congestion has occurred, and controls the warning unit to blink light in a second color when the sensor unit detects that an accident has occurred,
    상기 센서부는,The sensor unit,
    이미지 센서를 통해 차량의 위치 및 차량 간의 거리를 분석하여 감지하고,It detects by analyzing the location of the vehicle and the distance between the vehicles through the image sensor,
    도로에 정체가 있는 구간에 위치하는 복수의 차량들의 이미지 정보를 획득하고, 상기 획득된 이미지 정보를 차량 별로 구분하여 분석 대상 차량의 이미지 정보를 추출하고, 상기 분석 대상 차량의 이미지 정보에서 대상 구역 이미지 및 차량 상태 정보를 확인하고, 상기 차량 상태 정보를 기초로 외관에 문제가 있는 차량이 있는지 여부를 확인하여, 사고가 발생하였는지 여부에 대한 판단 결과를 생성하고,obtains image information of a plurality of vehicles located in the congested section of the road, classifies the obtained image information by vehicle to extract image information of a vehicle to be analyzed, identifies a target-area image and vehicle state information from the image information of the vehicle to be analyzed, checks, based on the vehicle state information, whether any vehicle has an exterior problem, and generates a determination result as to whether an accident has occurred,
    상기 판단 결과에 따라, 사고가 발생한 것으로 판단되면, 사고 발생으로 도로에 정체가 발생한 것으로 감지하고, 사고가 발생하지 않은 것으로 판단되면, 단순 정체로 도로에 정체가 발생한 것으로 감지하는,and, according to the determination result, detects that the congestion on the road is caused by an accident when it is determined that an accident has occurred, and detects that the congestion on the road is simple congestion when it is determined that no accident has occurred,
    터널 내 및 모든 도로 주행 안전 장치.Safety devices for driving in tunnels and on all roads.
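The control rules of claim 1 — steady forward lighting while vehicles are present, first-color blinking for simple congestion, second-color blinking for an accident — can be sketched as a small decision function. The function and action names are illustrative assumptions, not language from the claim.

```python
def warning_state(vehicle_detected, congestion, accident):
    """Sketch of claim 1's control rules: the lighting unit illuminates
    while vehicles are detected; the warning unit blinks the second color
    for an accident and the first color for simple congestion."""
    actions = []
    if vehicle_detected:
        actions.append("lighting_unit: illuminate forward")
    if accident:
        actions.append("warning_unit: blink second color")
    elif congestion:
        actions.append("warning_unit: blink first color")
    return actions
```

Note that the accident branch takes precedence over simple congestion, matching the claim's distinction between the two causes of congestion.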
  2. 제1항에 있어서,According to claim 1,
    상기 센서부는,The sensor unit,
    상기 판단 결과에 따라, 단순 정체로 도로에 정체가 발생한 것으로 감지되면, 차량의 주행 속도가 기준 속도 이하인 정체 구간을 감지하고,According to the determination result, when it is detected that congestion has occurred on the road due to simple congestion, a congestion section in which the driving speed of the vehicle is less than or equal to the reference speed is detected,
    상기 제어부는,wherein the control unit
    상기 정체 구간의 시작점인 단순 정체 발생 지점으로부터 상기 정체 구간의 끝점인 단순 정체 종료 지점까지의 정체 구간 길이를 산출하고,calculates the congestion-section length from the simple-congestion occurrence point, which is the start point of the congestion section, to the simple-congestion end point, which is the end point of the congestion section,
    미리 정해진 기간 동안의 시간대별 정체 패턴을 통해 제1 기준 거리가 설정되면, 상기 정체 구간 길이가 상기 제1 기준 거리 보다 짧은지 여부를 확인하고,checks, when a first reference distance is set from the congestion patterns by time slot over a predetermined period, whether the congestion-section length is shorter than the first reference distance,
    상기 정체 구간 길이가 상기 제1 기준 거리 보다 짧은 것으로 확인되면, 상기 정체 구간을 일반적인 정체 현상으로 판단하여, 상기 단순 정체 발생 지점으로부터 후방으로 상기 정체 구간 길이의 4배 떨어진 위치까지, 상기 제1 색의 빛이 제1 강도의 세기로 제1 점멸 속도를 통해 점멸되도록 제어하고,when the congestion-section length is confirmed to be shorter than the first reference distance, determines the congestion section to be an ordinary congestion phenomenon and controls the light of the first color to blink at a first blinking speed with a first intensity, up to a position four times the congestion-section length behind the simple-congestion occurrence point,
    상기 정체 구간 길이가 상기 제1 기준 거리 보다 긴 것으로 확인되면, 상기 정체 구간을 특수적인 정체 현상으로 판단하여, 상기 정체 구간 길이가 상기 제1 기준 거리 보다 긴 값으로 설정된 제2 기준 거리 보다 짧은지 여부를 확인하고,when the congestion-section length is confirmed to be longer than the first reference distance, determines the congestion section to be a special congestion phenomenon and checks whether the congestion-section length is shorter than a second reference distance set to a value longer than the first reference distance,
    상기 정체 구간 길이가 상기 제2 기준 거리 보다 짧은 것으로 확인되면, 상기 정체 구간을 심각한 정체 현상으로 판단하여, 상기 단순 정체 발생 지점으로부터 후방으로 상기 정체 구간 길이의 3배 떨어진 위치까지, 상기 제1 색의 빛이 상기 제1 강도 보다 강한 제2 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하고,when the congestion-section length is confirmed to be shorter than the second reference distance, determines the congestion section to be a serious congestion phenomenon and controls the light of the first color to blink at the first blinking speed with a second intensity stronger than the first intensity, up to a position three times the congestion-section length behind the simple-congestion occurrence point, and
    상기 정체 구간 길이가 상기 제2 기준 거리 보다 긴 것으로 확인되면, 상기 정체 구간을 매우 심각한 정체 현상으로 판단하여, 상기 단순 정체 발생 지점으로부터 후방으로 상기 정체 구간 길이의 2배 떨어진 위치까지, 상기 제1 색의 빛이 상기 제2 강도의 세기로 상기 제1 점멸 속도 보다 빠른 제2 점멸 속도를 통해 점멸되도록 제어하는,when the congestion-section length is confirmed to be longer than the second reference distance, determines the congestion section to be a very serious congestion phenomenon and controls the light of the first color to blink at a second blinking speed faster than the first blinking speed with the second intensity, up to a position twice the congestion-section length behind the simple-congestion occurrence point,
    터널 내 및 모든 도로 주행 안전 장치.Safety devices for driving in tunnels and on all roads.
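Read as a decision procedure, claim 2 maps the measured congestion-section length against the two reference distances to pick a rear coverage multiplier, a blink intensity, and a blink rate. A hedged sketch of that mapping follows; the function and label names are my own, not terms from the application.

```python
def grade_congestion(section_len, ref_dist1, ref_dist2):
    """Claim-2 grading: returns (rear coverage as a multiple of the
    congestion-section length, intensity label, blink-rate label) for the
    first color. The claim sets the second reference distance to a value
    longer than the first, so ref_dist1 < ref_dist2 is required."""
    if ref_dist1 >= ref_dist2:
        raise ValueError("second reference distance must exceed the first")
    if section_len < ref_dist1:
        return 4, "first_intensity", "first_rate"    # ordinary congestion
    if section_len < ref_dist2:
        return 3, "second_intensity", "first_rate"   # serious congestion
    return 2, "second_intensity", "second_rate"      # very serious congestion
```

Note the inverse relation the claim encodes: the worse the congestion, the shorter the rear coverage (2x versus 4x the section length) but the stronger or faster the blinking.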
  3. 제2항에 있어서,3. The method of claim 2,
    상기 센서부는,The sensor unit,
    상기 판단 결과에 따라, 사고 발생으로 도로에 정체가 발생한 것으로 감지되면, 사고 발생 차량의 수를 감지하고,According to the determination result, when it is detected that congestion has occurred on the road due to the occurrence of an accident, the number of vehicles in the accident is detected,
    상기 제어부는,wherein the control unit
    상기 사고 발생 차량의 수가 미리 설정된 제1 기준치 보다 작은지 여부를 확인하고,checks whether the number of vehicles involved in the accident is smaller than a preset first reference value,
    상기 사고 발생 차량의 수가 상기 제1 기준치 보다 작은 것으로 확인되면, 도로에서 발생한 사고를 소규모 사고로 판단하여, 사고 발생 지점으로부터 후방으로 상기 제1 기준 거리의 2배 떨어진 위치인 제1 지점까지, 상기 제2 색의 빛이 상기 제1 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하고,when the number of vehicles involved in the accident is confirmed to be smaller than the first reference value, determines the accident on the road to be a small accident and controls the light of the second color to blink at the first blinking speed with the first intensity, up to a first point located twice the first reference distance behind the accident occurrence point,
    상기 사고 발생 차량의 수가 상기 제1 기준치 보다 큰 것으로 확인되면, 도로에서 발생한 사고를 중대형 사고로 판단하여, 상기 사고 발생 차량의 수가 상기 제1 기준치 보다 높은 값으로 설정된 제2 기준치 보다 작은지 여부를 확인하고,when the number of vehicles involved in the accident is confirmed to be greater than the first reference value, determines the accident on the road to be a medium-or-large accident and checks whether the number of vehicles involved in the accident is smaller than a second reference value set to a value higher than the first reference value,
    상기 사고 발생 차량의 수가 상기 제2 기준치 보다 작은 것으로 확인되면, 도로에서 발생한 사고를 중규모 사고로 판단하여, 상기 사고 발생 지점으로부터 후방으로 상기 제1 기준 거리의 3배 떨어진 위치인 제2 지점까지, 상기 제2 색의 빛이 상기 제2 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하고,when the number of vehicles involved in the accident is confirmed to be smaller than the second reference value, determines the accident on the road to be a medium accident and controls the light of the second color to blink at the first blinking speed with the second intensity, up to a second point located three times the first reference distance behind the accident occurrence point, and
    상기 사고 발생 차량의 수가 상기 제2 기준치 보다 큰 것으로 확인되면, 도로에서 발생한 사고를 대규모 사고로 판단하여, 상기 사고 발생 지점으로부터 후방으로 상기 제1 기준 거리의 4배 떨어진 위치인 제3 지점까지, 상기 제2 색의 빛이 상기 제2 강도의 세기로 상기 제2 점멸 속도를 통해 점멸되도록 제어하는,when the number of vehicles involved in the accident is confirmed to be greater than the second reference value, determines the accident on the road to be a large accident and controls the light of the second color to blink at the second blinking speed with the second intensity, up to a third point located four times the first reference distance behind the accident occurrence point,
    터널 내 및 모든 도로 주행 안전 장치.Safety devices for driving in tunnels and on all roads.
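Claim 3's branching on the number of vehicles involved follows the same shape, except that here the rear coverage grows with severity (2x, 3x, then 4x the first reference distance behind the accident point). The function name and the example thresholds below are assumptions for illustration:

```python
def grade_accident(vehicle_count, thresh1, thresh2, ref_dist1):
    """Claim-3 grading: returns (rear warning-zone length, intensity label,
    blink-rate label) for the second color. The claim sets the second
    reference value higher than the first, so thresh1 < thresh2 is required."""
    if thresh1 >= thresh2:
        raise ValueError("second reference value must exceed the first")
    if vehicle_count < thresh1:
        return 2 * ref_dist1, "first_intensity", "first_rate"    # small accident
    if vehicle_count < thresh2:
        return 3 * ref_dist1, "second_intensity", "first_rate"   # medium accident
    return 4 * ref_dist1, "second_intensity", "second_rate"      # large accident
```

Unlike the congestion case in claim 2, the more severe the accident, the farther back the second-color warning reaches.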
  4. 제3항에 있어서,4. The method of claim 3,
    상기 제어부는,wherein the control unit
    도로에서 발생한 사고가 중규모 사고로 판단되어, 상기 사고 발생 지점으로부터 상기 제2 지점까지 상기 제2 색의 빛이 점멸된 점멸 시간이 기준 시간 보다 긴 것으로 확인되면, 상기 사고 발생 지점으로부터 상기 제1 지점까지 상기 제2 색의 빛이 상기 제2 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하고, 상기 제1 지점으로부터 상기 제2 지점까지 상기 제2 색의 빛이 상기 제1 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하고,when the accident on the road is determined to be a medium accident and the time during which the light of the second color has blinked from the accident occurrence point to the second point is confirmed to be longer than a reference time, controls the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the first point, and controls the light of the second color to blink at the first blinking speed with the first intensity from the first point to the second point, and
    도로에서 발생한 사고가 대규모 사고로 판단되어, 상기 사고 발생 지점으로부터 상기 제3 지점까지 상기 제2 색의 빛이 점멸된 점멸 시간이 상기 기준 시간 보다 긴 것으로 확인되면, 상기 사고 발생 지점으로부터 상기 제1 지점까지 상기 제2 색의 빛이 상기 제2 강도의 세기로 상기 제2 점멸 속도를 통해 점멸되도록 제어하고, 상기 제1 지점으로부터 상기 제2 지점까지 상기 제2 색의 빛이 상기 제2 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하고, 상기 제2 지점으로부터 상기 제3 지점까지 상기 제2 색의 빛이 상기 제1 강도의 세기로 상기 제1 점멸 속도를 통해 점멸되도록 제어하는,when the accident on the road is determined to be a large accident and the time during which the light of the second color has blinked from the accident occurrence point to the third point is confirmed to be longer than the reference time, controls the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the first point, controls the light of the second color to blink at the first blinking speed with the second intensity from the first point to the second point, and controls the light of the second color to blink at the first blinking speed with the first intensity from the second point to the third point,
    터널 내 및 모든 도로 주행 안전 장치.Safety devices for driving in tunnels and on all roads.
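Claim 4 adds a time dimension: once the blinking has run longer than the reference time, the single uniform pattern is replaced by zones that step down with distance from the accident point. A sketch of that zoning follows; the zone and label names are illustrative assumptions, and returning `None` for "keep the original claim-3 pattern" is my own convention.

```python
def deescalated_zones(scale, blink_time, ref_time):
    """Claim-4 step-down: after the blink time exceeds the reference time,
    return the blink pattern per zone, ordered from the accident point
    outward; None means the original claim-3 pattern still applies."""
    if blink_time <= ref_time:
        return None
    if scale == "medium":
        return [("accident->p1", "second_intensity", "first_rate"),
                ("p1->p2", "first_intensity", "first_rate")]
    if scale == "large":
        return [("accident->p1", "second_intensity", "second_rate"),
                ("p1->p2", "second_intensity", "first_rate"),
                ("p2->p3", "first_intensity", "first_rate")]
    return None
```

In both branches the pattern nearest the accident stays the strongest and each farther zone is one step milder, which is the gradient the claim spells out.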
  5. 제1항에 있어서,The method of claim 1,
    상기 제어부는,wherein the control unit
    상기 복수의 차량들 중 어느 하나인 제1 차량이 분석 대상 차량으로 확인되면, 라이다를 통해 상기 제1 차량의 표면에 대한 3D 데이터를 획득하고, 카메라를 통해 상기 제1 차량의 표면에 대한 2D 데이터를 획득하고,when a first vehicle, which is any one of the plurality of vehicles, is identified as the vehicle to be analyzed, obtains 3D data on the surface of the first vehicle through a lidar and obtains 2D data on the surface of the first vehicle through a camera,
    상기 2D 데이터와 상기 3D 데이터의 합집합 영역을 분리하여, 상기 2D 데이터 및 상기 3D 데이터를 병합한 제1 데이터를 추출하고,separating the union region of the 2D data and the 3D data to extract the first data obtained by merging the 2D data and the 3D data;
    상기 제1 데이터를 인코딩 하여 제1 입력 신호를 생성하고,Encoding the first data to generate a first input signal,
    상기 제1 입력 신호를 제1 인공 신경망에 입력하고, 상기 제1 인공 신경망의 입력의 결과에 기초하여, 제1 출력 신호를 획득하고,inputting the first input signal to a first artificial neural network, and obtaining a first output signal based on a result of the input of the first artificial neural network;
    상기 제1 출력 신호에 기초하여, 상기 제1 차량의 표면에 대한 제1 분류 결과를 생성하고,generate a first classification result for the surface of the first vehicle based on the first output signal;
    상기 제1 데이터를 분석하여 상기 제1 차량의 표면에 발생한 균열을 검출하고,Analyze the first data to detect cracks generated on the surface of the first vehicle,
    상기 제1 차량의 표면에 발생한 균열을 영역별로 확인하여, 미리 설정된 제1 설정값 미만으로 균열이 검출된 정상 영역과 상기 제1 설정값 이상으로 균열이 검출된 손상 영역을 구분하고,checks the cracks generated on the surface of the first vehicle region by region, distinguishing normal regions in which cracks are detected below a preset first set value from damaged regions in which cracks are detected at or above the first set value,
    상기 제1 데이터에서 상기 손상 영역을 삭제한 제2 데이터를 추출하고,extracting second data from which the damaged area is deleted from the first data;
    상기 제2 데이터를 인코딩 하여 제2 입력 신호를 생성하고,generating a second input signal by encoding the second data;
    상기 제2 입력 신호를 제2 인공 신경망에 입력하고, 상기 제2 인공 신경망의 입력의 결과에 기초하여, 제2 출력 신호를 획득하고,inputting the second input signal to a second artificial neural network, and obtaining a second output signal based on a result of the input of the second artificial neural network;
    상기 제2 출력 신호에 기초하여, 상기 제1 차량의 표면에 대한 제2 분류 결과를 생성하고,generating a second classification result for the surface of the first vehicle based on the second output signal;
    상기 제1 분류 결과 및 상기 제2 분류 결과가 동일한 경우, 상기 제1 분류 결과 및 상기 제2 분류 결과 중 어느 하나를 상기 제1 차량의 표면에 대한 최종 분류 결과로 설정하는,When the first classification result and the second classification result are the same, setting any one of the first classification result and the second classification result as a final classification result for the surface of the first vehicle,
    터널 내 및 모든 도로 주행 안전 장치.Safety devices for driving in tunnels and on all roads.
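The core of claim 5 is an agreement check between two independent classifications of the same vehicle surface: one network sees the merged 2D+3D data, the other sees the same data with the damaged regions removed, and a final result is set only when the two classification results are identical. A minimal sketch with the networks stubbed out as plain callables; the claim does not specify what happens on disagreement, so returning `None` here is my assumption.

```python
def final_surface_classification(net1, net2, merged_data, cleaned_data):
    """Run both classifiers per claim 5 and accept the result only when
    the first and second classification results are identical."""
    result1 = net1(merged_data)    # first network: merged 2D+3D data
    result2 = net2(cleaned_data)   # second network: data minus damaged regions
    return result1 if result1 == result2 else None
```

With two stub classifiers, agreement yields the shared label as the final classification result and disagreement yields no final result.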
PCT/KR2021/008418 2021-03-22 2021-07-02 Safety apparatus for travel in tunnels and on all roads WO2022203125A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210036811A KR102269227B1 (en) 2021-03-22 2021-03-22 Driving safety apparatus in tunnel and on all roads
KR10-2021-0036811 2021-03-22

Publications (1)

Publication Number Publication Date
WO2022203125A1 true WO2022203125A1 (en) 2022-09-29

Family

ID=76628997

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/008418 WO2022203125A1 (en) 2021-03-22 2021-07-02 Safety apparatus for travel in tunnels and on all roads

Country Status (2)

Country Link
KR (1) KR102269227B1 (en)
WO (1) WO2022203125A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824862A (en) * 2023-08-28 2023-09-29 济南瑞源智能城市开发有限公司 Intelligent tunnel traffic operation control method, device and medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102269227B1 (en) * 2021-03-22 2021-06-25 주식회사 에스투에이치원 Driving safety apparatus in tunnel and on all roads

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100797394B1 (en) * 2005-12-08 2008-01-28 한국전자통신연구원 Apparatus and Method for Providing Traffic Jam Information for Installing on the Road
KR20090053013A (en) * 2007-11-22 2009-05-27 한국전자통신연구원 Method and system for accident detection using sensor attached to the median strip
KR101306759B1 (en) * 2013-03-08 2013-09-10 주식회사 아이엑스 System for prevent traffic accident
KR101666003B1 (en) * 2016-04-27 2016-10-13 주식회사 엠지브이보안시스템 System for confirming vehicle accident in parking
KR20200012618A (en) * 2018-07-27 2020-02-05 한국자동차연구원 Apparatus for displaying guide information for vehicle
KR102269227B1 (en) * 2021-03-22 2021-06-25 주식회사 에스투에이치원 Driving safety apparatus in tunnel and on all roads


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824862A (en) * 2023-08-28 2023-09-29 济南瑞源智能城市开发有限公司 Intelligent tunnel traffic operation control method, device and medium
CN116824862B (en) * 2023-08-28 2023-12-01 济南瑞源智能城市开发有限公司 Intelligent tunnel traffic operation control method, device and medium

Also Published As

Publication number Publication date
KR102269227B1 (en) 2021-06-25

Similar Documents

Publication Publication Date Title
WO2022203125A1 (en) Safety apparatus for travel in tunnels and on all roads
WO2020071683A1 (en) Object recognition method of autonomous driving device, and autonomous driving device
WO2017026642A1 (en) Traffic signal safety system for crosswalk
WO2020050498A1 (en) Method and device for sensing surrounding environment using image segmentation
KR100862561B1 (en) A system for sensing a traffic accident
WO2020085881A1 (en) Method and apparatus for image segmentation using an event sensor
WO2020138908A1 (en) Electronic device and control method therefor
WO2018186583A1 (en) Method for identifying obstacle on driving ground and robot for implementing same
WO2017119557A1 (en) Driving assistance device and control method therefor
WO2019139310A1 (en) Autonomous driving apparatus and method for autonomous driving of a vehicle
RU2667338C1 (en) Method of detecting objects and device for detecting objects
WO2016088960A1 (en) Method and system for detecting, in night environment, danger due to presence of pedestrian, for advanced driver assistance system
WO2020139063A1 (en) Electronic apparatus for detecting risk factors around vehicle and method for controlling same
WO2018105842A1 (en) Radar-based high-precision incident detection system
WO2020241930A1 (en) Method for estimating location using multi-sensor and robot for implementing same
WO2016112557A1 (en) Electronic tag-based indoor guidance system for the blind, and method therefor
KR102294286B1 (en) Driving safety apparatus in tunnel and on all roads
WO2023120831A1 (en) De-identification method and computer program recorded in recording medium for executing same
WO2019045293A1 (en) Method for generating target-oriented local path and robot for implementing same
WO2018230864A2 (en) Method for sensing depth of object by considering external light and device implementing same
WO2020230931A1 (en) Robot generating map on basis of multi-sensor and artificial intelligence, configuring correlation between nodes and running by means of map, and method for generating map
WO2020085653A1 (en) Multiple-pedestrian tracking method and system using teacher-student random fern
WO2015093853A1 (en) Vehicle driving auxiliary device and vehicle having same
WO2022255677A1 (en) Method for determining location of fixed object by using multi-observation information
WO2016163590A1 (en) Vehicle auxiliary device and method based on infrared image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933353

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE