WO2022203125A1 - Safety device for traveling in tunnels and on all roads


Info

Publication number
WO2022203125A1
Authority
WO
WIPO (PCT)
Prior art keywords
accident
vehicle
congestion
intensity
point
Prior art date
Application number
PCT/KR2021/008418
Other languages
English (en)
Korean (ko)
Inventor
이학승
Original Assignee
주식회사 에스투에이치원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에스투에이치원
Publication of WO2022203125A1


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • E FIXED CONSTRUCTIONS
    • E01 CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01F ADDITIONAL WORK, SUCH AS EQUIPPING ROADS OR THE CONSTRUCTION OF PLATFORMS, HELICOPTER LANDING STAGES, SIGNS, SNOW FENCES, OR THE LIKE
    • E01F9/00 Arrangement of road signs or traffic signals; Arrangements for enforcing caution
    • E01F9/50 Road surface markings; Kerbs or road edgings, specially adapted for alerting road users
    • E01F9/576 Traffic lines
    • E01F9/582 Traffic lines illuminated
    • E FIXED CONSTRUCTIONS
    • E01 CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01F ADDITIONAL WORK, SUCH AS EQUIPPING ROADS OR THE CONSTRUCTION OF PLATFORMS, HELICOPTER LANDING STAGES, SIGNS, SNOW FENCES, OR THE LIKE
    • E01F9/00 Arrangement of road signs or traffic signals; Arrangements for enforcing caution
    • E01F9/60 Upright bodies, e.g. marker posts or bollards; Supports for road signs
    • E01F9/604 Upright bodies, e.g. marker posts or bollards; Supports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings
    • E01F9/615 Upright bodies, e.g. marker posts or bollards; Supports for road signs specially adapted for particular signalling purposes, e.g. for indicating curves, road works or pedestrian crossings illuminated
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B5/00 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
    • G08B5/22 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission
    • G08B5/36 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission using visible light sources
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/16 Controlling the light source by timing means
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Definitions

  • The embodiments below relate to a technology for providing road safety devices installed in tunnels and on all roads.
  • Tunnels are bored through mountains or underground for the rapid passage of vehicles traveling on roads and the like.
  • Such a tunnel shortens the driving distance, minimizing travel time and traffic inconvenience, but its interior is dark and poorly ventilated, making the driving environment poor.
  • An object of the present invention is to provide a safety device for driving in tunnels and on all roads that, when it is detected that an accident has occurred, controls a light of a second color to blink rearward of the vehicle traveling direction.
  • The device includes: a safety case mounted on the road median and guard rail portions, installed on the road at regular intervals in a certain section so as not to be in contact with vehicles; a lighting unit installed in the safety case to irradiate light forward in the vehicle traveling direction; a warning unit installed in the safety case so that light flickers rearward of the vehicle traveling direction; a sensor unit that detects the movement of vehicles traveling on the road, detects from that movement whether there is congestion in road driving, and, when congestion is detected, detects whether an accident has occurred; and a control unit that, when the sensor unit detects a vehicle driving on the road, controls the lighting unit to irradiate light.
  • The sensor unit detects and analyzes the positions of vehicles and the distances between them through an image sensor, obtains image information of a plurality of vehicles located in a congested section of the road, classifies the obtained image information by vehicle to extract the image information of the vehicle to be analyzed, and checks a target area image and vehicle state information from that image information. Based on the vehicle state information it checks whether any vehicle has an appearance problem and generates a determination result as to whether an accident has occurred. When the determination result is that an accident has occurred, the sensor unit detects that congestion has occurred on the road due to the accident; when no accident has occurred, it detects that congestion has occurred on the road due to simple congestion. Such a device is provided.
  • The sensor unit detects a congestion section in which the driving speed of vehicles is at or below a reference speed. The control unit calculates the length of the congestion section from the simple congestion occurrence point, which is its starting point, to the simple congestion end point, which is its end point. With a first reference distance set through the congestion pattern for each time period over a predetermined period, the control unit checks whether the length of the congestion section is shorter than the first reference distance. If it is shorter, the congestion section is determined to be a general congestion phenomenon, and the light of the first color is controlled to blink at a first blinking speed with a first intensity from the simple congestion occurrence point to a position four times the length of the congestion section rearward.
  • If the length of the congestion section is not shorter than the first reference distance, the congestion section is determined to be a special congestion phenomenon, and it is checked whether its length is shorter than a second reference distance set to a value longer than the first reference distance. When the length is shorter than the second reference distance, the congestion section is determined to be a serious congestion phenomenon, and the light of the first color is controlled to blink at the first blinking speed with a second intensity, stronger than the first intensity, from the simple congestion occurrence point to a position three times the length of the congestion section rearward.
  • When the length of the congestion section is longer than the second reference distance, the congestion section is determined to be a very serious congestion phenomenon, and the light of the first color is controlled to blink at a second blinking speed, faster than the first blinking speed, with the second intensity, from the simple congestion occurrence point to a position twice the length of the congestion section rearward.
  • When the sensor unit detects that congestion has occurred on the road due to an accident, it detects the number of accident vehicles. The control unit checks whether the number of accident vehicles is smaller than a preset first reference value. If it is smaller, the accident occurring on the road is determined to be a small accident, and the light of the second color is controlled to blink at the first blinking speed with the first intensity from the accident occurrence point to a first point twice the first reference distance rearward.
  • If the number of accident vehicles is greater than the first reference value, the accident is determined to be a medium-to-large accident, and it is checked whether the number is smaller than a second reference value set to a value higher than the first reference value. If it is smaller than the second reference value, the accident is determined to be a medium-scale accident, and the light of the second color is controlled to blink at the first blinking speed with the second intensity from the accident occurrence point to a second point three times the first reference distance rearward; otherwise, the accident is determined to be a large-scale accident, and the light of the second color is controlled to blink at the second blinking speed with the second intensity from the accident occurrence point to a third point four times the first reference distance rearward.
  • When the accident is determined to be a medium-scale accident and the blinking time of the second-color light from the accident occurrence point to the second point is confirmed to be longer than a reference time, the control unit controls the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the first point, and at the first blinking speed with the first intensity from the first point to the second point. When the accident is determined to be a large-scale accident and the blinking time of the second-color light from the accident occurrence point to the third point is confirmed to be longer than the reference time, the control unit controls the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the first point, at the first blinking speed with the second intensity from the first point to the second point, and at the first blinking speed with the first intensity from the second point to the third point.
  • The controller acquires 3D data on the surface of a first vehicle through a lidar and 2D data on the surface of the first vehicle through a camera, separates the union region of the 2D data and the 3D data, and extracts first data obtained by merging the 2D data and the 3D data. It encodes the first data to generate a first input signal, inputs the first input signal to a first artificial neural network, obtains a first output signal based on the result of that input, and generates a first classification result for the surface of the first vehicle based on the first output signal. It also analyzes the first data to detect cracks generated on the surface of the first vehicle, identifies those cracks by area, and separates normal regions, in which cracks are detected below a preset first set value, from damaged regions, in which cracks are detected at or above the first set value.
  • According to embodiments, the light of the first color is controlled to flicker rearward of the vehicle traveling direction in the tunnel and on all roads, so that the driver can grasp the traffic situation ahead and rear-end collision accidents can be prevented.
  • FIG. 1 is a perspective view schematically showing a road driving safety device according to an embodiment of the present invention.
  • FIG. 2 is a view showing a state in which the safety case of the road driving safety device is installed on the road.
  • FIG. 3 is a diagram schematically showing the configuration of a road driving safety device.
  • FIG. 4 is a perspective view schematically showing a road driving safety device according to another embodiment of the present invention.
  • FIG. 5 is a view for explaining a method of processing an image of a vehicle located in a congested section of the road, in order to determine whether a vehicle has an appearance problem due to the occurrence of an accident, according to an embodiment of the present invention.
  • FIG. 6 is a view for explaining a learning method employed to process an image of a vehicle to be analyzed in order to determine whether there is a vehicle having an appearance problem due to an accident occurrence according to an embodiment of the present invention.
  • FIG. 7 is a flowchart illustrating a process of controlling blinking according to a length of a congestion section according to an embodiment.
  • FIG. 8 is a flowchart for explaining a process of controlling blinking according to the number of vehicles having an accident according to an exemplary embodiment.
  • FIG. 9 is a flowchart for explaining a process of controlling the blinking step by step according to a distance during a medium-scale accident according to an embodiment.
  • FIG. 10 is a flowchart for explaining a process of controlling the blinking step by step according to the distance during a large-scale accident according to an embodiment.
  • FIG. 11 is a flowchart illustrating a process of classifying a surface of a vehicle according to an exemplary embodiment.
  • FIG. 12 is a diagram for explaining an artificial neural network according to an embodiment.
  • FIG. 13 is a diagram for explaining a method of learning an artificial neural network according to an embodiment.
  • Although the terms first or second may be used to describe various elements, these terms should be interpreted only for the purpose of distinguishing one element from another.
  • For example, a first component may be termed a second component, and similarly, a second component may also be termed a first component.
  • Spatially relative terms such as "below", "beneath", "lower", "above", and "upper" can be used to easily describe the correlation between one component and other components.
  • Spatially relative terms should be understood to include different orientations of components during use or operation, in addition to the orientations shown in the drawings. For example, when a component shown in a drawing is turned over, a component described as "below" or "beneath" another component may be placed "above" the other component. Accordingly, the exemplary term "below" may include both the downward and upward directions. Components may also be oriented in other directions, and spatially relative terms may be interpreted according to orientation.
  • FIG. 1 is a perspective view schematically showing a road driving safety device according to an embodiment of the present invention
  • FIG. 2 is a view showing a state in which a safety case of the road driving safety device is installed on the road
  • FIG. 3 is a diagram schematically showing the configuration of the road driving safety device.
  • The road driving safety device 100 includes a safety case 10 installed on a road 11; a lighting unit 20 installed in the safety case 10 to irradiate light in the traveling direction of the vehicle; a control unit 30 that controls the supply of power to the lighting unit 20; and a sensor unit 40 installed in the safety case 10 to detect the movement of a vehicle or animal and operate the lighting unit 20.
  • The safety case 10 may be mounted on the median and guard rail portions so as not to be in contact with vehicles, and a plurality may be installed at regular intervals in a predetermined section.
  • An installation space is formed inside the safety case 10. In this embodiment, one safety case is installed in the median, one in the left guardrail portion, and one in the right guardrail portion, so that a total of three are installed spaced apart from each other; this arrangement is described as an example.
  • The number of safety cases 10 is not limited to three; two or fewer, or more than three, may be installed in the road median and guard rail portions.
  • The safety case 10 may be installed at regular intervals along each of the median and guardrails, and is preferably installed in the median and guardrails in the tunnel and on all roads 11.
  • The lighting unit 20 is installed in the safety case 10 so as to selectively irradiate light forward in the vehicle driving direction; in this embodiment, it is described, by way of example, as LED lighting.
  • The lighting unit 20 need not be LED lighting; any lighting that makes the median and guard rail portions easily visible may be applied.
  • At least three safety cases 10 are installed in the median and guard rail portions of the road, and at least three lighting units 20 may likewise be installed there. That is, at least three lighting units 20 may be installed at intervals of 50 m, spanning approximately 150 m.
  • The lighting unit 20 may emit light selectively in response to the sensing action of the sensor unit 40, which is described later.
  • The sensor unit 40 will be described in more detail below.
  • The control unit 30 is connected to the lighting unit 20 and may control it to emit light forward in the driving direction of the vehicle at night.
  • The control unit 30 may include a solar cell 31 installed on the side of the road 11 and a connection unit 33 connecting the solar cell 31 and the lighting unit 20.
  • The solar cell 31 may be installed on the side of the road to convert solar energy into electrical energy during the daytime and store it.
  • A plurality of solar cells 31 may be installed along the side of the road 11 to smoothly supply power to the lighting units 20.
  • The solar cell 31 may be installed in a fixed position on the side of the road 11, or may be installed movably so that its position can be changed.
  • The solar cell 31 and the lighting unit 20 may be connected through a connection unit 33 to supply power to the lighting unit 20.
  • The connection unit 33 may connect one solar cell 31 to a plurality of lighting units 20, or connect a solar cell 31 to a single lighting unit 20. Thus, by storing solar energy as electrical energy during the daytime through the solar cell 31 and supplying power to the lighting unit 20 through the connection unit 33 at night, the lighting unit 20 can be made to emit light selectively.
  • The safety case 10 is provided with a sensor unit 40 for selectively operating the lighting unit 20.
  • The sensor unit 40 may be installed in the safety case 10 to sense the movement of a vehicle or animal from the side of the road 11 toward the central portion of the road.
  • The sensor unit 40 is equipped with an infrared sensor or the like, so that the movement of a vehicle or animal ahead in the vehicle driving direction can be checked in real time.
  • The sensor unit 40 may adjust its sensing direction so as to detect both lanes.
  • A single sensor unit 40 may be installed in the safety case 10, or a plurality may be installed with differing orientations. Accordingly, the sensor unit 40 can easily detect a predetermined object, such as a vehicle or animal, moving from ahead in the vehicle driving direction toward the central portion of the road 11 across either side of the road 11.
  • When such movement is detected, the lighting unit 20 may be controlled to emit light.
  • The warning unit 210 may blink to give the driver a danger warning. Accordingly, the driver can sense danger more effectively through the warning unit 210 while checking the median section and guard rail portions through the lighting unit 20.
  • The lighting unit 20 may be installed in the safety case 10 to irradiate light forward in the vehicle traveling direction, and the warning unit 210 may be installed so that its light flickers rearward of the vehicle traveling direction.
  • The sensor unit 40 may also be installed separately on the road rather than in the safety case 10, for example as an image sensor. It can then detect the movement of vehicles driving on the road, detect from that movement whether there is congestion in road driving, and, when congestion is detected, detect whether an accident has occurred.
  • When the sensor unit 40 detects a vehicle driving on the road, the control unit 30 controls the lighting unit 20 to emit light, and when the sensor unit 40 detects congestion on the road, the control unit 30 may control the warning unit 210 to flicker lights of different colors depending on the cause of the congestion.
  • When simple congestion is detected, the controller 30 controls the warning unit 210 to blink the light of the first color; when it is detected that an accident has occurred on the road, it controls the warning unit 210 to blink the light of the second color.
  • When the light of the first color, for example blue, flickers in the warning unit 210, the vehicle driver can confirm in advance that simple congestion has occurred in the tunnel or on the road; when the light of the second color, for example yellow, flickers, the driver can confirm in advance that an accident has occurred in the tunnel or on the road.
  • FIG. 4 is a view for explaining the occurrence of congestion in road driving according to an embodiment of the present invention.
  • The sensor unit 40 may detect, through the movement of vehicles traveling on the road, whether there is congestion in road driving, and when congestion is detected, may detect whether simple congestion has occurred due to an increase in vehicles or whether congestion has occurred due to an accident.
  • In zone 401, it may be detected that simple congestion has occurred due to an increase in vehicles.
  • In zone 402, it may be detected that congestion has occurred due to a traffic accident.
  • The controller 30 controls the light of the first color to flicker from the simple congestion occurrence point to a position a first distance rearward, and controls the light of the second color to flicker from the accident occurrence point to a position a second distance, set longer than the first distance, rearward.
  • For example, the control unit 30 may control the blue light to flicker up to a position 50 m rearward of the simple congestion occurrence point, and the yellow light to flicker up to a position 100 m rearward of the accident occurrence point.
  • The sensor unit 40 may detect the driving speed of vehicles and detect a congestion section in which the driving speed is at or below the reference speed, and the control unit 30 may adjust the blinking of the warning unit 210 according to the length of the congestion section.
  • For example, for a short congestion section the controller 30 may control the light of the first color to blink with normal intensity, and when the congestion section is 20 m, it may control the light of the first color to blink with stronger intensity.
  • The sensor unit 40 may detect the number of accident vehicles based on the image information obtained through the image sensor, and the control unit 30 may adjust the blinking of the warning unit 210 according to the number of accident vehicles.
  • For example, the control unit 30 may control the light of the second color to flicker with normal intensity when there are two accident vehicles, and to blink with stronger intensity when there are four accident vehicles.
  • The control unit 30 may set a third distance, an accident risk distance, according to the number of accident vehicles.
  • For example, the controller 30 may set the third distance to 10 m when there are two accident vehicles, and to 20 m when there are four.
  • The control unit 30 controls the light of the second color to flicker from the accident occurrence point to a position a second distance rearward, while controlling the light flickering within the third distance of the accident occurrence point to change to a light of a third color and flicker.
  • For example, the control unit 30 may control the yellow light to blink up to a position 100 m rearward of the accident occurrence point, while controlling red light, instead of yellow, to blink within 30 m rearward of the accident occurrence point, so that the light changes to a different color as the accident point is approached.
  • The controller 30 may control the flashing speed of the light of the third color to increase as the accident occurrence point is approached.
  • For example, the control unit 30 may control the light of the third color to blink once per second from 20 m to 30 m rearward of the accident point, twice per second from 10 m to 20 m rearward, and three times per second within 10 m rearward of the accident point.
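  • As an illustration of this rule, the following sketch maps a light's distance behind the accident point to a blink rate, using the example bands above; the helper function, the band edges, and the zero rate outside the third-color zone are assumptions, not the patent's specification.

```python
def third_color_blink_rate(distance_m: float) -> int:
    """Return blinks per second for the third-color light at a given
    distance (in meters) behind the accident point, following the
    example bands above (a sketch; band edges are assumptions)."""
    if distance_m <= 10:
        return 3   # closest band: three blinks per second
    elif distance_m <= 20:
        return 2   # middle band: two blinks per second
    elif distance_m <= 30:
        return 1   # outer band: one blink per second
    return 0       # beyond the third-color zone: no third-color blinking

# Example: lights 5 m, 15 m, and 25 m behind the accident point
print([third_color_blink_rate(d) for d in (5, 15, 25)])  # [3, 2, 1]
```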
  • The sensor unit 40 may detect the lane in which an accident has occurred based on the image information obtained through the image sensor, and the control unit 30 may control the lights flickering on both sides of the lane in which the accident occurred to flicker more strongly than the lights flickering on both sides of lanes in which no accident occurred.
  • For example, when an accident occurs in the second lane, the control unit 30 may control the yellow lights flickering on both sides of the second lane to flash with strong intensity, while the yellow lights flickering on the left side of the first lane and the right side of the third lane flicker with normal intensity.
  • The sensor unit 40 may detect and analyze the positions of vehicles and the distances between them through the image sensor, obtain image information of a plurality of vehicles located in a congested section of the road, check a target area image and vehicle state information for each of the plurality of vehicles from the image information, and check, based on the vehicle state information, whether any vehicle has an appearance problem, thereby generating a determination result as to whether an accident has occurred.
  • For example, the sensor unit 40 obtains image information of a first vehicle and a second vehicle located in a congested section of the road and checks the vehicle state information of each. It checks whether either vehicle has a problem with its exterior; when neither vehicle has an exterior problem, it may generate a determination result that no accident has occurred, and when at least one of the first and second vehicles has an exterior problem, it may generate a determination result that an accident has occurred.
  • When it is determined that an accident has occurred, the sensor unit 40 detects that congestion has occurred on the road due to the accident; when it is determined that no accident has occurred, it detects that the congestion on the road is simple congestion.
  • FIG. 5 is a view for explaining a method of processing an image of a vehicle located in a congested section of the road, in order to determine whether a vehicle has an appearance problem due to the occurrence of an accident, according to an embodiment of the present invention.
  • The sensor unit 40 may include an image sensor installed in the tunnel and above all roads. Through the image sensor it may acquire image information of a plurality of vehicles located in a congested section of the road, classify the obtained image information by vehicle to extract the image information of the vehicle to be analyzed, check the target area image and vehicle state information from that extracted image information, check, based on the vehicle state information, whether any vehicle has an appearance problem, and generate a determination result as to whether an accident has occurred.
  • the sensor unit 40 may analyze the target area image of the analysis target vehicle and extract vehicle state information included in the target area.
  • the sensor unit 40 may determine a target area of the vehicle to be analyzed to obtain an image 501 of the target area.
  • the sensor unit 40 may identify an effective vehicle boundary based on color information and texture information in the target area image 501 .
  • the sensor unit 40 may determine whether it is a vehicle for each area based on a color and a texture.
  • the sensor unit 40 may determine whether a vehicle is in each area by sliding a filter of a predefined unit, and the filter may be designed to output a result according to a color and a texture.
  • the sensor unit 40 may extract an effective vehicle area 502 separated by an effective vehicle boundary from the target area image 501 .
  • the sensor unit 40 may extract appearance characteristics of particle objects in the effective vehicle area 502 .
  • The sensor unit 40 may identify a foreign object 503 among the particle objects based on the extracted appearance features, and remove the foreign object 503 from the effective vehicle area 502.
  • The sensor unit 40 may identify objects outside a predefined range based on the appearance, color, and texture information of the vehicle body and glass distributed within the effective vehicle area 502, and determine the identified objects to be foreign objects 503.
  • the sensor unit 40 may extract size features 505 to 507 of particle objects in the effective vehicle area 504 from which the foreign object is removed.
  • the sensor unit 40 may identify particle objects within the effective vehicle area 504 and extract size features 505 to 507 from among information describing the identified particle objects for each size.
  • the sensor unit 40 may extract and classify the size features 505 to 507 by size according to a range serving as a reference for classifying the vehicle body and the glass.
  • the sensor unit 40 may classify the particle objects into one of a body object and a glass object, respectively, based on the extracted size features 505 to 507 .
  • the sensor unit 40 may generate a first ratio in the effective vehicle area 504 of at least one particle object classified as a vehicle body object.
  • the first ratio may correspond to a body ratio within the effective vehicle area 504 .
  • the sensor unit 40 may generate vehicle state information in which the characteristics of the vehicle body are reflected by using the first ratio.
  • the sensor unit 40 may generate a second ratio in the effective vehicle area 504 of at least one particle object classified as a glass object.
  • the second proportion may correspond to the proportion of glass in the effective vehicle area 504 .
  • the sensor unit 40 may generate vehicle state information in which the characteristics of glass are reflected by using the second ratio.
  • the sensor unit 40 may generate a third ratio of the foreign object 503 within the effective vehicle area 502 .
  • the third ratio may mean a ratio occupied by foreign substances in the effective vehicle area 502 .
  • the sensor unit 40 may extract a color feature within the effective vehicle area 504 .
  • the sensor unit 40 may generate car color information based on color characteristics.
  • The sensor unit 40 may generate basic vehicle information based on the first ratio, the second ratio, the third ratio, and the vehicle color information obtained from the image processing of the effective vehicle area 504.
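  • The ratio computation can be sketched as follows, assuming the effective vehicle area 504 has already been segmented into a hypothetical per-pixel label mask; the label values, the mean-RGB color summary, and the toy arrays are illustrative assumptions, not the patent's representation.

```python
import numpy as np

# A minimal sketch of the "basic vehicle information" described above.
# Assumes the effective vehicle area is given as a per-pixel label mask
# (0 = body, 1 = glass, 2 = foreign object).
BODY, GLASS, FOREIGN = 0, 1, 2

def basic_vehicle_info(label_mask: np.ndarray, rgb: np.ndarray) -> dict:
    total = label_mask.size
    first_ratio = float((label_mask == BODY).sum()) / total     # body share
    second_ratio = float((label_mask == GLASS).sum()) / total   # glass share
    third_ratio = float((label_mask == FOREIGN).sum()) / total  # foreign share
    car_color = rgb[label_mask == BODY].mean(axis=0)            # body color
    return {"first_ratio": first_ratio, "second_ratio": second_ratio,
            "third_ratio": third_ratio, "car_color": car_color.tolist()}

# Toy 2x2 effective vehicle area: three body pixels, one glass pixel
mask = np.array([[BODY, BODY], [BODY, GLASS]])
rgb = np.full((2, 2, 3), 128.0)
print(basic_vehicle_info(mask, rgb))
```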
  • The sensor unit 40 may identify the location of the target area image 501, query the environment information of the tunnel in which the vehicle is located, and generate auxiliary vehicle information reflecting the current environmental state in the tunnel, such as illuminance.
  • the sensor unit 40 may generate a feature vector 510 corresponding to the effective vehicle area 502 based on the basic vehicle information and the auxiliary vehicle information.
  • the sensor unit 40 may obtain the output information 512 by applying the feature vector 510 to the pre-trained neural network 511 .
  • The neural network 511 may be trained to estimate vehicle state information from an input composed of the basic vehicle information, generated from features extracted from the vehicle image, and the auxiliary vehicle information, which captures the influence of the environmental conditions in the tunnel where the image was taken.
  • the sensor unit 40 may generate vehicle state information corresponding to the effective vehicle area 502 based on the output information 512 .
  • The output information 512 may include a degree of matching for each scratch on the vehicle, or may be designed as variables describing the extent to which the vehicle is deformed.
  • The output information 512 may be designed discretely according to vehicle classification. For example, the output nodes of the output layer of the neural network 511 may each correspond to a vehicle classification, and a probability value may be output for each classification.
  • The training of the neural network 511 is described with reference to FIG. 6.
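  • A minimal sketch of this inference step follows; the 6-dimensional feature layout, the tiny two-layer network standing in for the neural network 511, and the softmax output over three assumed classes are all illustrative assumptions.

```python
import numpy as np

# Sketch of applying the feature vector 510 to the pre-trained network 511.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)   # hidden layer (assumed width)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # one output node per class

def estimate_state(feature_vector: np.ndarray) -> np.ndarray:
    h = np.maximum(feature_vector @ W1 + b1, 0.0)  # ReLU hidden layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                          # per-class probability

# Basic info (three ratios + color summary) plus auxiliary info (illuminance)
feature_510 = np.array([0.6, 0.25, 0.05, 0.5, 0.4, 0.8])
print(estimate_state(feature_510))  # probabilities over assumed classes
```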
  • FIG. 6 is a view for explaining a learning method employed to process an image of a vehicle to be analyzed in order to determine whether there is a vehicle having an appearance problem due to an accident occurrence according to an embodiment of the present invention.
  • the learning apparatus may train the neural network 604 for estimating information required to obtain vehicle state information from the target area image.
  • the learning device may be a separate entity different from the sensor unit 40 , but is not limited thereto.
  • the learning apparatus may acquire labeled vehicle images 601 .
  • the learning apparatus may obtain pre-labeled information on each vehicle image for each vehicle type, and the vehicle image may be labeled according to a pre-classified vehicle type.
  • The learning apparatus may generate the basic vehicle information 602 based on a first ratio corresponding to the vehicle body object, a second ratio corresponding to the glass object, a third ratio corresponding to foreign objects, and the vehicle color information, derived from at least one of the color information, texture information, and the appearance and size characteristics of the particle objects in the labeled vehicle images 601.
  • the learning apparatus may generate the feature vectors 603 of the vehicle to be analyzed based on the basic vehicle information 602 .
  • Auxiliary vehicle information may be employed in generating the feature vectors 603 of the vehicle to be analyzed.
  • the learning apparatus may obtain the output information 605 by applying the feature vectors 603 to the neural network 604 .
  • the learning apparatus may train the neural network 604 based on the output information 605 and the labels 606 .
  • the learning apparatus may train the neural network 604 by calculating errors corresponding to the output information 605 and optimizing the connection relationships of nodes in the neural network 604 to minimize the errors.
  • the sensor unit 40 may acquire vehicle state information from the target area image by using the neural network 604 that has been trained.
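  • The loop of FIG. 6 (apply the feature vectors 603 to the network 604, compare the output information 605 with the labels 606, and adjust the connection relationships to reduce the error) can be sketched as follows; a single softmax layer, random toy data, and plain gradient descent stand in for the unspecified architecture, dataset, and optimizer.

```python
import numpy as np

# Toy training sketch: feature vectors 603, labels 606, output 605.
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 6))            # feature vectors 603 (assumed dim 6)
y = rng.integers(0, 3, size=32)         # labels 606 (assumed 3 vehicle classes)
W = np.zeros((6, 3))                    # connection weights of network 604

for step in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)   # output information 605
    p[np.arange(len(y)), y] -= 1.0      # gradient of the cross-entropy error
    W -= 0.1 * (X.T @ p) / len(y)       # optimize connections to reduce error

print("training accuracy:", ((X @ W).argmax(axis=1) == y).mean())
```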
  • FIG. 7 is a flowchart illustrating a process of controlling blinking according to a length of a congestion section according to an embodiment.
  • In step S701, when congestion is detected on the road, the sensor unit 40 checks, based on the vehicle state information, whether any vehicle has an appearance problem in order to identify the cause of the congestion, and generates a determination result as to whether an accident has occurred. When it is determined that no accident has occurred, the sensor unit 40 may detect that the congestion on the road is simple congestion.
  • The sensor unit 40 may then detect, on the road, a congestion section in which the driving speed of vehicles is at or below the reference speed.
  • In step S703, the control unit 30 may identify the simple congestion occurrence point, which is the starting point of the congestion section, and the simple congestion end point, which is its end point, and calculate the length of the congestion section as the distance between them.
  • In step S704, the controller 30 may determine whether the length of the congestion section is shorter than the first reference distance.
  • The first reference distance may be set through the congestion pattern for each time period over a predetermined period.
  • For example, the control unit 30 may check the congestion pattern for each time period over a month; when the current time is 7 o'clock, it may set the first reference distance through the 7 o'clock congestion pattern.
  • The controller 30 may set the first reference distance to a longer value as the probability of congestion increases.
  • For example, if checking the 7 o'clock congestion pattern shows an 80% probability of congestion, the control unit 30 may set the first reference distance to 20 m; if the current time is 8 o'clock and the 8 o'clock congestion pattern shows a 90% probability of congestion, it may set the first reference distance to 30 m.
  • If it is determined in step S704 that the length of the congestion section is shorter than the first reference distance, in step S705 the controller 30 may determine the congestion section to be a general congestion phenomenon.
  • In step S706, the control unit 30 may control the light of the first color to blink at the first blinking speed with the first intensity from the simple congestion occurrence point, the starting point of the congestion section, to a position four times the length of the congestion section rearward.
  • For example, the control unit 30 may control the light of the first color to blink at the first blinking speed with the first intensity from the simple congestion occurrence point to a position 120 m rearward (four times a 30 m congestion section).
  • Otherwise, in step S707, the controller 30 may determine the congestion section to be a special congestion phenomenon.
  • In step S708, the controller 30 may determine whether the length of the congestion section is shorter than the second reference distance.
  • The second reference distance may be set to a value longer than the first reference distance.
  • If the length of the congestion section is shorter than the second reference distance, in step S709 the controller 30 may determine the congestion section to be a serious congestion phenomenon.
  • In step S710, the control unit 30 may control the light of the first color to blink at the first blinking speed with the second intensity from the simple congestion occurrence point, the starting point of the congestion section, to a position three times the length of the congestion section rearward.
  • The second intensity may be set stronger than the first intensity; for example, when the first intensity is 10 lx, the second intensity may be set to 20 lx.
  • For example, the control unit 30 may control the light of the first color to blink at the first blinking speed with the second intensity from the simple congestion occurrence point to a position 240 m rearward (three times an 80 m congestion section).
  • If the length of the congestion section is longer than the second reference distance, in step S711 the controller 30 may determine the congestion section to be a very serious congestion phenomenon.
  • In step S712, the control unit 30 may control the light of the first color to blink at the second blinking speed with the second intensity from the simple congestion occurrence point, the starting point of the congestion section, to a position twice the length of the congestion section rearward.
  • The second blinking speed may be set faster than the first blinking speed; for example, when the first blinking speed is once per second, the second blinking speed may be set to twice per second.
  • For example, the control unit 30 may control the light of the first color to blink at the second blinking speed with the second intensity from the simple congestion occurrence point to a position 260 m rearward (twice a 130 m congestion section).
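  • The decision structure of FIG. 7 can be summarized in the following sketch; the intensity/speed strings stand in for the first/second levels, and the concrete reference distances in the example calls are assumptions chosen to reproduce the 120 m, 240 m, and 260 m examples above.

```python
def congestion_blink_plan(section_len_m, first_ref_m, second_ref_m):
    """Sketch of the FIG. 7 decision: classify the congestion section and
    return (label, rearward coverage in m, intensity, blink speed)."""
    if section_len_m < first_ref_m:          # S704 -> S705/S706
        return ("general", 4 * section_len_m, "first intensity", "first speed")
    # otherwise the section is a special congestion phenomenon (S707)
    if section_len_m < second_ref_m:         # S708 -> S709/S710
        return ("serious", 3 * section_len_m, "second intensity", "first speed")
    return ("very serious", 2 * section_len_m, "second intensity", "second speed")

# Assumed reference distances of 30 m and 100 m reproduce the examples:
print(congestion_blink_plan(20, 30, 100))   # general: 4 x 20 = 80 m
print(congestion_blink_plan(80, 30, 100))   # serious: 3 x 80 = 240 m
print(congestion_blink_plan(130, 30, 100))  # very serious: 2 x 130 = 260 m
```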
  • FIG. 8 is a flowchart for explaining a process of controlling blinking according to the number of vehicles having an accident according to an exemplary embodiment.
  • In step S801, when congestion is detected on the road, the sensor unit 40 checks, based on the vehicle state information, whether any vehicle has an appearance problem in order to identify the cause of the congestion, and generates a determination result as to whether an accident has occurred. When it is determined that an accident has occurred, the sensor unit 40 may detect that the congestion on the road is due to the accident.
  • In step S802, the sensor unit 40 may detect the number of accident vehicles.
  • In step S803, the controller 30 may determine whether the number of accident vehicles is smaller than a preset first reference value.
  • The first reference value may be set differently depending on the embodiment.
  • If it is determined in step S803 that the number of accident vehicles is smaller than the first reference value, in step S804 the controller 30 may determine the accident occurring on the road to be a small accident.
  • In step S805, the control unit 30 may control the light of the second color to blink at the first blinking speed with the first intensity from the accident occurrence point to the first point, which is twice the first reference distance rearward.
  • For example, the control unit 30 determines the accident to be a small accident, sets the position 100 m rearward of the accident occurrence point (twice a 50 m first reference distance) as the first point, and controls the light of the second color to blink at the first blinking speed with the first intensity from the accident occurrence point to the first point.
  • If it is confirmed in step S803 that the number of accident vehicles is greater than the first reference value, in step S806 the controller 30 may determine the accident occurring on the road to be a medium-to-large accident.
  • In step S807, the control unit 30 may determine whether the number of accident vehicles is smaller than a preset second reference value.
  • The second reference value may be set higher than the first reference value.
  • If the number of accident vehicles is smaller than the second reference value, in step S808 the controller 30 may determine the accident occurring on the road to be a medium-scale accident.
  • In step S809, the controller 30 may control the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point, which is three times the first reference distance rearward.
  • For example, the control unit 30 determines the accident to be a medium-scale accident, sets the position 150 m rearward of the accident occurrence point (three times a 50 m first reference distance) as the second point, and controls the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point.
  • Otherwise, in step S810, the controller 30 may determine the accident occurring on the road to be a large-scale accident.
  • In step S811, the control unit 30 may control the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point, which is four times the first reference distance rearward.
  • For example, the control unit 30 determines the accident to be a large-scale accident, sets the position 200 m rearward of the accident occurrence point (four times a 50 m first reference distance) as the third point, and controls the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point.
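  • FIG. 8's severity decision can be sketched in the same style; the reference values of 3 and 5 vehicles and the 50 m first reference distance are illustrative assumptions consistent with the 100/150/200 m examples above.

```python
def accident_blink_plan(num_vehicles, first_ref_val, second_ref_val, ref_dist_m):
    """Sketch of the FIG. 8 decision: classify the accident scale and
    return (label, rearward point in m, intensity, blink speed)."""
    if num_vehicles < first_ref_val:         # S803 -> S804/S805
        return ("small", 2 * ref_dist_m, "first intensity", "first speed")
    if num_vehicles < second_ref_val:        # S807 -> S808/S809
        return ("medium", 3 * ref_dist_m, "second intensity", "first speed")
    return ("large", 4 * ref_dist_m, "second intensity", "second speed")

# Assumed reference values (3 and 5 vehicles) and a 50 m first reference
# distance give first/second/third points 100/150/200 m rearward:
for n in (2, 4, 6):
    print(n, accident_blink_plan(n, 3, 5, 50))
```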
  • FIG. 9 is a flowchart for explaining a process of controlling the blinking step by step according to a distance during a medium-scale accident according to an embodiment.
  • In step S901, when it is confirmed that the number of accident vehicles is greater than the first reference value and smaller than the second reference value, the controller 30 may determine the accident occurring on the road to be a medium-scale accident.
  • In step S902, the controller 30 may control the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point.
  • In step S903, the control unit 30 may check, after a predetermined time has elapsed, whether accident management is complete. When it is confirmed that the congestion caused by the accident has been resolved, the control unit 30 may regard accident management as complete.
  • If accident management is complete, the control unit 30 may control the light of the second color blinking from the accident occurrence point to the second point to stop blinking.
  • If accident management is not complete, in step S904 the control unit 30 checks the blinking time during which the light of the second color has maintained its blinking state from the accident occurrence point to the second point, and checks whether that blinking time is longer than a preset reference time.
  • The reference time may be set differently depending on the embodiment.
  • If it is confirmed in step S904 that the blinking time is shorter than the reference time, the process returns to step S902, and the control unit 30 maintains the blinking state by controlling the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the second point.
  • If the blinking time is longer than the reference time, in step S905 the control unit 30 controls the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the first point, and at the first blinking speed with the first intensity from the first point to the second point.
  • In step S906, the control unit 30 may again check, after a predetermined time has elapsed, whether accident management is complete.
  • If accident management is complete, the controller 30 may control the light of the second color blinking from the accident occurrence point to the second point to stop blinking.
  • If it is confirmed in step S906 that accident management is not complete, the process returns to step S905, and the control unit 30 maintains the blinking state by controlling the light of the second color to blink at the first blinking speed with the second intensity from the accident occurrence point to the first point, and at the first blinking speed with the first intensity from the first point to the second point.
  • FIG. 10 is a flowchart for explaining a process of controlling the blinking step by step according to the distance during a large-scale accident according to an embodiment.
  • In step S1001, when it is confirmed that the number of accident vehicles is greater than the second reference value, the controller 30 may determine the accident occurring on the road to be a large-scale accident.
  • In step S1002, the controller 30 may control the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point.
  • In step S1003, the control unit 30 may check, after a predetermined time has elapsed, whether accident management is complete. When it is confirmed that the congestion caused by the accident has been resolved, the control unit 30 may regard accident management as complete.
  • If accident management is complete, the controller 30 may control the light of the second color blinking from the accident occurrence point to the third point to stop blinking.
  • If accident management is not complete, in step S1004 the control unit 30 checks the blinking time during which the light of the second color has maintained its blinking state from the accident occurrence point to the third point, and checks whether that blinking time is longer than the preset reference time.
  • The reference time may be set differently depending on the embodiment.
  • If it is confirmed in step S1004 that the blinking time is shorter than the reference time, the process returns to step S1002, and the control unit 30 maintains the blinking state by controlling the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the third point.
  • If the blinking time is longer than the reference time, in step S1005 the control unit 30 controls the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the first point, at the first blinking speed with the second intensity from the first point to the second point, and at the first blinking speed with the first intensity from the second point to the third point.
  • In step S1006, the control unit 30 may again check, after a predetermined time has elapsed, whether accident management is complete.
  • If accident management is complete, the controller 30 may control the light of the second color blinking from the accident occurrence point to the third point to stop blinking.
  • If it is confirmed in step S1006 that accident management is not complete, the process returns to step S1005, and the control unit 30 maintains the blinking state by controlling the light of the second color to blink at the second blinking speed with the second intensity from the accident occurrence point to the first point, at the first blinking speed with the second intensity from the first point to the second point, and at the first blinking speed with the first intensity from the second point to the third point.
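  • The staged de-escalation of FIGS. 9 and 10 can be sketched as a function from accident scale and elapsed blinking time to a list of blinking zones; the tuple representation and the numeric points in the example call are assumptions.

```python
def staged_zones(kind, blink_time_s, ref_time_s, p1, p2, p3):
    """Sketch of FIGS. 9 and 10: once the blinking time exceeds the
    reference time, the warning is stepped down zone by zone toward the
    accident point. Each zone is (from_m, to_m, intensity, speed), with
    distances measured rearward from the accident occurrence point."""
    if kind == "medium":                     # FIG. 9
        if blink_time_s <= ref_time_s:       # S902: uniform blinking
            return [(0, p2, "second intensity", "first speed")]
        return [(0, p1, "second intensity", "first speed"),   # S905
                (p1, p2, "first intensity", "first speed")]
    if kind == "large":                      # FIG. 10
        if blink_time_s <= ref_time_s:       # S1002: uniform blinking
            return [(0, p3, "second intensity", "second speed")]
        return [(0, p1, "second intensity", "second speed"),  # S1005
                (p1, p2, "second intensity", "first speed"),
                (p2, p3, "first intensity", "first speed")]
    raise ValueError("kind must be 'medium' or 'large'")

# Large-scale accident, blinking for 20 min against a 10 min reference:
for zone in staged_zones("large", 1200, 600, 100, 150, 200):
    print(zone)
```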
  • FIG. 11 is a flowchart illustrating a process of classifying a surface of a vehicle according to an exemplary embodiment.
  • The controller 30 may identify a first vehicle, any one of the plurality of vehicles located in the congested section of the road, as the vehicle to be analyzed.
  • In step S1301, the controller 30 may acquire 3D data on the surface of the first vehicle through a lidar.
  • Here, the 3D data is a 3D image of the surface of the first vehicle.
  • The controller 30 may be connected to a device equipped with the lidar via a wired or wireless connection.
  • The controller 30 may also acquire 2D data on the surface of the first vehicle through a camera.
  • Here, the 2D data is a 2D image of the surface of the first vehicle.
  • The controller 30 may be connected to a device equipped with the camera via a wired or wireless connection.
  • In step S1303, the controller 30 may separate the union region of the 2D data and the 3D data and extract first data obtained by merging the 2D data and the 3D data.
  • Specifically, the controller 30 may compare the 2D data and the 3D data to identify the overlapping union region, separate the union region from the 2D data and from the 3D data, and merge the separated regions to extract the first data.
  • The first data may consist of four channels: three channels of 2D data representing RGB values, and one channel representing the 3D depth value.
  • the controller 30 may generate a first input signal by encoding the first data.
  • the controller 30 may generate the first input signal by encoding the pixels of the first data with color information.
  • the color information may include, but is not limited to, RGB color information, brightness information, saturation information, and depth information.
  • the controller 30 may convert the color information into a numerical value, and may encode the first data in the form of a data sheet including the value.
  • the controller 30 may input the first input signal to the first artificial neural network pre-trained in the road driving safety device 100.
  • the first artificial neural network is composed of a feature extraction neural network and a classification neural network, and the feature extraction neural network sequentially stacks a convolutional layer and a pooling layer on an input signal.
  • the convolution layer includes a convolution operation, a convolution filter, and an activation function. The convolution filter is adjusted according to the matrix size of the target input, but a 9×9 matrix is generally used.
  • the activation function generally uses, but is not limited to, a ReLU function, a sigmoid function, and a tanh function.
  • the pooling layer is a layer that reduces the size of the input matrix, and uses a method of extracting representative values by grouping pixels in a specific area.
  • the average value or the maximum value is often used for the calculation of the pooling layer, but is not limited thereto.
  • the operation is performed using a square matrix, usually a 9x9 matrix.
  • the convolutional layer and the pooling layer are repeated alternately until the input becomes sufficiently small while its distinguishing features are preserved.
  • the classification neural network has a hidden layer and an output layer.
  • the classification neural network of the first artificial neural network, which classifies the roughness level of the surface of the first vehicle, consists of five or fewer hidden layers and may include a total of 50 or fewer hidden-layer nodes.
  • the activation function of the hidden layer uses a ReLU function, a sigmoid function, and a tanh function, but is not limited thereto.
  • the first artificial neural network is described in detail below with reference to FIG. 12.
  • the controller 30 may obtain a first output signal based on a result of the input of the first artificial neural network.
  • the controller 30 may generate a first classification result for the surface of the first vehicle based on the first output signal.
  • the first classification result may include information on which stage the surface of the first vehicle is classified.
  • as a result of checking the output value of the first output signal, when the output value is 1 the control unit 30 may generate a first classification result indicating that the surface of the first vehicle corresponds to stage 1, and when the output value is 2, a first classification result indicating that the surface corresponds to stage 2. The higher the stage, the rougher the surface of the first vehicle.
  • In step S1105, the controller 30 may analyze the first data to detect cracks generated on the surface of the first vehicle.
  • when detecting cracks on the surface of the first vehicle, only portions confirmed through image analysis to be larger than a certain size may be detected as cracks.
  • the control unit 30 may identify cracks generated on the surface of the first vehicle for each region, and distinguish normal regions from damaged regions.
  • the control unit 30 may divide the first data into a plurality of areas, such as a first area and a second area, and check how many cracks are detected in each area; an area in which fewer cracks than the first set value are detected may be classified as a normal area, and an area in which cracks equal to or greater than the first set value are detected may be classified as a damaged area.
  • the first set value may be set differently depending on the embodiment.
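  • A short Python sketch of this region-wise division follows; detect_cracks() is a hypothetical stand-in for the image analysis, and the threshold value is illustrative, since the first set value is embodiment-dependent.

    from typing import Dict, List

    FIRST_SET_VALUE = 3  # illustrative; set differently depending on the embodiment

    def detect_cracks(region_pixels) -> List[tuple]:
        # Hypothetical detector returning only cracks larger than a certain size.
        return []

    def classify_areas(areas: Dict[str, object]) -> Dict[str, str]:
        labels = {}
        for name, pixels in areas.items():
            n_cracks = len(detect_cracks(pixels))
            labels[name] = "damaged" if n_cracks >= FIRST_SET_VALUE else "normal"
        return labels

    # e.g. classify_areas({"first_area": area1_pixels, "second_area": area2_pixels})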
  • In step S1107, the controller 30 may extract second data by deleting the damaged area from the first data.
  • for example, when the image in the first data consists of a first region, a second region, and a third region, and the first region is classified as a damaged region while the second and third regions are classified as normal regions, the controller 30 may extract an image including only the second region and the third region as the second data.
  • the controller 30 may generate a second input signal by encoding the second data.
  • the controller 30 may generate a second input signal by encoding pixels of the second data with color information.
  • the color information may include, but is not limited to, RGB color information, brightness information, saturation information, and depth information.
  • the controller 30 may convert the color information into a numerical value, and may encode the second data in the form of a data sheet including the value.
  • the controller 30 may input the second input signal to the second artificial neural network pre-trained in the road driving safety device 100.
  • the second artificial neural network consists of a feature extraction neural network and a classification neural network, and the feature extraction neural network sequentially stacks a convolutional layer and a pooling layer on an input signal.
  • the convolution layer includes a convolution operation, a convolution filter, and an activation function. The convolution filter is adjusted according to the matrix size of the target input, but a 9×9 matrix is generally used.
  • the activation function generally uses, but is not limited to, a ReLU function, a sigmoid function, and a tanh function.
  • the pooling layer is a layer that reduces the size of the input matrix, and uses a method of extracting representative values by grouping pixels in a specific area.
  • the average value or the maximum value is often used for the calculation of the pooling layer, but is not limited thereto.
  • the operation is performed using a square matrix, usually a 9x9 matrix.
  • the convolutional layer and the pooling layer are repeated alternately until the input becomes sufficiently small while its distinguishing features are preserved.
  • the classification neural network has a hidden layer and an output layer.
  • the classification neural network of the second artificial neural network, which classifies the roughness level of the surface of the first vehicle, consists of five or fewer hidden layers and may include a total of 50 or fewer hidden-layer nodes.
  • the activation function of the hidden layer uses a ReLU function, a sigmoid function, and a tanh function, but is not limited thereto.
  • the second artificial neural network is described in detail below with reference to FIG. 12.
  • the controller 30 may obtain a second output signal based on a result of the input of the second artificial neural network.
  • the controller 30 may generate a second classification result for the surface of the first vehicle based on the second output signal.
  • the second classification result may include information on which stage the surface of the first vehicle is classified.
  • as a result of checking the output value of the second output signal, when the output value is 1 the control unit 30 may generate a second classification result indicating that the surface of the first vehicle corresponds to stage 1, and when the output value is 2, a second classification result indicating that the surface corresponds to stage 2.
  • the controller 30 may set a final classification result for the surface of the first vehicle based on the first classification result and the second classification result.
  • the controller 30 may set any one of the first classification result and the second classification result as the final classification result for the surface of the first vehicle.
  • the controller 30 may determine whether there is a problem in the appearance of the first vehicle by using the final classification result, and through this, determine whether an accident has occurred in the first vehicle.
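  • As a sketch of this final step, the rule below combines the two classification results and flags a possible accident; taking the rougher (higher) stage and the stage threshold are assumptions, since the disclosure only says that either classification result may be selected as the final one.

    def final_classification(first_result: int, second_result: int) -> int:
        # Assumption: conservatively keep the rougher (higher) stage.
        return max(first_result, second_result)

    def accident_suspected(final_stage: int, threshold: int = 2) -> bool:
        # Higher stages mean a rougher surface; beyond the threshold the
        # appearance is treated as problematic, suggesting an accident.
        return final_stage >= threshold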
  • FIG. 12 is a diagram for explaining an artificial neural network according to an embodiment.
  • the artificial neural network 1200 may be any one of a first artificial neural network and a second artificial neural network.
  • when the first input signal generated by encoding the first data is provided as an input, the network may output information on which roughness stage the surface of the first vehicle is classified into.
  • likewise, when the second input signal generated by encoding the second data is provided as an input, it may output information on which roughness stage the surface of the first vehicle is classified into.
  • Encoding according to an embodiment may be performed by storing color information for each pixel of an image in the form of a digitized data sheet; the color information may include the RGB color, brightness information, saturation information, and depth information of each pixel, but is not limited thereto.
  • the artificial neural network 1200 is composed of a feature extraction neural network 1210 and a classification neural network 1220; the feature extraction neural network 1210 may perform an operation of separating the first-vehicle region from the background region in the image, and the classification neural network 1220 may perform an operation of determining which roughness stage the surface of the first vehicle in the image is classified into.
  • in the data sheet of the input signal encoding the image, a bundle of pixels in which at least 6 of the 8 pixels surrounding a given pixel show a change of 30% or more in each color-information value may be used as the boundary between the region of the first vehicle and the background region, but is not limited thereto.
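  • A numpy sketch of this boundary rule follows: a pixel is treated as part of the boundary bundle when at least 6 of its 8 surrounding pixels show a change of 30% or more. The 6-of-8 and 30% figures come from the text; the single-channel input, the edge padding, and interpreting the 30% as change relative to the center pixel's value are assumptions.

    import numpy as np

    def boundary_mask(channel: np.ndarray, rel_change: float = 0.30,
                      min_neighbors: int = 6) -> np.ndarray:
        h, w = channel.shape
        padded = np.pad(channel.astype(np.float32), 1, mode="edge")
        center = padded[1:h+1, 1:w+1]
        count = np.zeros((h, w), dtype=np.int32)
        offsets = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
        for dy, dx in offsets:
            neighbor = padded[1+dy:h+1+dy, 1+dx:w+1+dx]
            diff = np.abs(neighbor - center) / np.maximum(center, 1e-6)
            count += (diff >= rel_change).astype(np.int32)
        return count >= min_neighbors  # True where the pixel belongs to the boundary bundle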
  • the feature extraction neural network 1210 proceeds by sequentially stacking a convolutional layer and a pooling layer on the input signal.
  • the convolution layer includes a convolution operation, a convolution filter, and an activation function.
  • the convolution filter is adjusted according to the matrix size of the target input, but a 9×9 matrix is generally used.
  • the activation function generally uses, but is not limited to, a ReLU function, a sigmoid function, and a tanh function.
  • the pooling layer is a layer that reduces the size of the input matrix, and uses a method of extracting representative values by grouping pixels in a specific area. In general, the average value or the maximum value is often used for the calculation of the pooling layer, but is not limited thereto.
  • the operation is performed using a square matrix, usually a 9x9 matrix.
  • the convolutional layer and the pooling layer are repeated alternately until the input becomes sufficiently small while its distinguishing features are preserved.
  • the classification neural network 1220 examines the surface of the first-vehicle region separated from the background by the feature extraction neural network 1210, compares it with the predefined surface states for each roughness stage, and determines which roughness stage the surface is classified into. For this comparison, information stored in the database of the road driving safety device 100 may be utilized.
  • the classification neural network 1220 has hidden layers and an output layer; it is composed of five or fewer hidden layers including a total of 50 or fewer hidden-layer nodes, and the activation function of the hidden layers may be a ReLU function, a sigmoid function, or a tanh function, but is not limited thereto.
  • the classification neural network 1220 may include only one output layer node in total.
  • the output of the classification neural network 1220 is a value indicating which roughness stage the surface of the first vehicle is classified into. For example, when the output value is 1, it may indicate that the surface of the first vehicle corresponds to the first stage, and when the output value is 2, that it corresponds to the second stage.
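  • A minimal PyTorch sketch of the architecture described above follows: alternating 9x9 convolution and pooling for feature extraction on the 4-channel input, then a classification network with at most five hidden layers and at most 50 hidden nodes in total, ending in a single output node. The channel widths, layer counts, and input size are illustrative assumptions; the patent fixes only the figures just named.

    import torch
    import torch.nn as nn

    class RoughnessNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(            # feature extraction neural network 1210
                nn.Conv2d(4, 8, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool2d(2),                      # pooling layer reduces the matrix size
                nn.Conv2d(8, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),              # shrink the input until small enough
            )
            self.classifier = nn.Sequential(          # classification neural network 1220
                nn.Flatten(),
                nn.Linear(16, 30), nn.ReLU(),         # hidden layer 1 (30 nodes)
                nn.Linear(30, 20), nn.ReLU(),         # hidden layer 2 (20 nodes): 50 nodes total
                nn.Linear(20, 1),                     # single output node: roughness stage
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # e.g. RoughnessNet()(torch.randn(1, 4, 128, 128)) -> tensor of shape (1, 1)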
  • when a user discovers a problem in the output of the artificial neural network 1200, the network may learn by receiving a first learning signal generated from the corrected answer entered by the user.
  • a problem in the output of the artificial neural network 1200 means a case in which the surface of the first vehicle is classified into the wrong roughness stage.
  • the first learning signal is created based on the error between the correct answer and the output value; in some cases, SGD with the delta rule, a batch method, or a method following a backpropagation algorithm may be used.
  • the artificial neural network 1200 performs learning by modifying the existing weights according to the first learning signal, and may use momentum in some cases.
  • a cost function can be used to calculate the error, and a cross entropy function can be used as the cost function.
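  • The error measure and weight update can be sketched in numpy as follows; the cross-entropy cost and the SGD update with optional momentum follow the text, while the learning rate and data shapes are assumptions.

    import numpy as np

    def cross_entropy(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-12) -> float:
        # Cost function measuring the error between the correct answer and the output.
        y_pred = np.clip(y_pred, eps, 1.0 - eps)
        return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

    def sgd_step(w: np.ndarray, grad: np.ndarray, lr: float = 0.01) -> np.ndarray:
        # Delta-rule / SGD update: move the weights against the error gradient.
        return w - lr * grad

    def sgd_momentum_step(w, velocity, grad, lr=0.01, momentum=0.9):
        # Momentum variant the text mentions ("may use momentum in some cases").
        velocity = momentum * velocity - lr * grad
        return w + velocity, velocity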
  • the learning of the artificial neural network 1200 is described below with reference to FIG. 13.
  • FIG. 13 is a diagram for explaining a method of learning an artificial neural network according to an embodiment.
  • the learning apparatus may train the artificial neural network 1200 .
  • the learning apparatus may be a separate entity different from the road driving safety apparatus 100 , but is not limited thereto.
  • the artificial neural network 1200 includes an input layer to which training samples are input and an output layer that outputs training outputs, and may be trained based on the difference between the training outputs and the first labels.
  • the first labels may be defined based on a representative image registered for each roughness level.
  • the artificial neural network 1200 is connected as a group of a plurality of nodes, and is defined by weights between the connected nodes and an activation function that activates the nodes.
  • the learning apparatus may train the artificial neural network 1200 using a Gradient Descent (GD) technique or a Stochastic Gradient Descent (SGD) technique.
  • the learning apparatus may use a loss function designed from the outputs and labels of the artificial neural network 1200.
  • the learning apparatus may calculate a training error using a predefined loss function.
  • the loss function may be predefined with a label, an output, and a parameter as input variables, where the parameter may be set by weights in the artificial neural network 1200 .
  • the loss function may be designed in a Mean Square Error (MSE) form, an entropy form, or the like, and various techniques or methods may be employed in an embodiment in which the loss function is designed.
  • the learning apparatus may find weights affecting the training error by using a backpropagation technique.
  • the weights are relationships between nodes in the artificial neural network 1200 .
  • the learning apparatus may use the SGD technique using labels and outputs to optimize the weights found through the backpropagation technique. For example, the learning apparatus may update the weights of the loss function defined based on the labels, outputs, and weights using the SGD technique.
  • the learning apparatus may obtain the labeled training representative images 1301 for each roughness level from the database of the road driving safety apparatus 100.
  • the learning apparatus may obtain pre-labeled information on the representative images 1301 for each roughness stage, and the representative images 1301 for each roughness stage may be labeled according to the pre-classified roughness stage.
  • the learning apparatus may acquire 1000 labeled training representative images 1301 for each roughness stage and, based on them, may create the first training roughness-stage vectors 1302.
  • Various methods may be employed to extract the first training roughness-stage vectors 1302.
  • the learning apparatus may obtain the first training outputs 1303 by applying the first training roughness step vectors 1302 to the artificial neural network 1200 .
  • the learning apparatus may train the artificial neural network 1200 based on the first training outputs 1303 and the first labels 1304 .
  • the learning apparatus may train the artificial neural network 1200 by calculating training errors corresponding to the first training outputs 1303 and optimizing the connection relationships of the nodes in the artificial neural network 1200 so as to minimize those errors, as in the training-loop sketch below.
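  • A compact PyTorch sketch of this training procedure follows, reusing the RoughnessNet sketch from FIG. 12 above: the 1000 labeled representative images are fed through the network and the weights are updated by SGD with momentum to minimize the training error. The MSE loss follows the "MSE form" mentioned earlier; the learning rate, epoch count, and random stand-in data are assumptions.

    import torch
    import torch.nn as nn

    model = RoughnessNet()                       # sketched under FIG. 12 above
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.MSELoss()                       # loss designed in "MSE form"

    images = torch.randn(1000, 4, 128, 128)      # stand-ins for representative images 1301
    labels = torch.randint(1, 5, (1000, 1)).float()  # first labels 1304: roughness stages

    for epoch in range(10):
        optimizer.zero_grad()
        outputs = model(images)                  # first training outputs 1303
        loss = loss_fn(outputs, labels)          # training error vs. first labels
        loss.backward()                          # backpropagation finds the weight gradients
        optimizer.step()                         # SGD update of the weights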
  • the embodiments described above may be implemented by a hardware component, a software component, and/or a combination of a hardware component and a software component.
  • the apparatus, methods, and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications running on the operating system.
  • the processing device may also access, store, manipulate, process, and generate data in response to execution of the software.
  • the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • a hardware device may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
  • the software may comprise a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively.
  • the software and/or data may be embodied permanently or temporarily in any kind of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device.
  • the software may be distributed over networked computer systems and stored or executed in a distributed manner. Software and data may be stored in one or more computer-readable recording media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Civil Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Structural Engineering (AREA)
  • Architecture (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Atmospheric Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a safety apparatus for travel in tunnels and on all roads, the apparatus being mounted on central medians and guardrails of a road so as to prevent contact with vehicles. The safety apparatus comprises: safety housings installed on the road at regular intervals in a certain section; lighting units installed in the safety housings so as to emit light forward in the vehicle's direction of travel; warning units installed in the safety housings so that light blinks rearward in the vehicle's direction of travel; detection units which detect the movement of vehicles travelling on the road and detect the presence of traffic congestion on the road from that movement, and which, when traffic congestion is detected, detect whether an accident has occurred or whether simple congestion due to an increase in vehicles has occurred; and a control unit for controlling so that light is emitted by the lighting units when the detection unit detects vehicles moving on the road, light blinks in a first color in the warning unit when the detection unit detects that simple congestion has occurred, and light blinks in a second color in the warning unit when the detection unit detects that an accident has occurred.
PCT/KR2021/008418 2021-03-22 2021-07-02 Appareil de sécurité pour le déplacement dans des tunnels et sur toutes les routes WO2022203125A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210036811A KR102269227B1 (ko) 2021-03-22 2021-03-22 터널 내 및 모든 도로 주행 안전 장치
KR10-2021-0036811 2021-03-22

Publications (1)

Publication Number Publication Date
WO2022203125A1 true WO2022203125A1 (fr) 2022-09-29

Family

ID=76628997

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/008418 WO2022203125A1 (fr) 2021-03-22 2021-07-02 Appareil de sécurité pour le déplacement dans des tunnels et sur toutes les routes

Country Status (2)

Country Link
KR (1) KR102269227B1 (fr)
WO (1) WO2022203125A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102269227B1 (ko) * 2021-03-22 2021-06-25 주식회사 에스투에이치원 터널 내 및 모든 도로 주행 안전 장치

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100797394B1 (ko) * 2005-12-08 2008-01-28 한국전자통신연구원 노면 설치용 교통정체정보 제공 장치 및 그 방법
KR20090053013A (ko) * 2007-11-22 2009-05-27 한국전자통신연구원 센서네트워크를 이용한 사고감지 시스템 및 방법
KR101306759B1 (ko) * 2013-03-08 2013-09-10 주식회사 아이엑스 자동차전용도로 2차 사고 방지 시스템
KR101666003B1 (ko) * 2016-04-27 2016-10-13 주식회사 엠지브이보안시스템 입출차 사고 확인 시스템
KR20200012618A (ko) * 2018-07-27 2020-02-05 한국자동차연구원 차량용 안내정보 표시 장치
KR102269227B1 (ko) * 2021-03-22 2021-06-25 주식회사 에스투에이치원 터널 내 및 모든 도로 주행 안전 장치

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824862A (zh) * 2023-08-28 2023-09-29 济南瑞源智能城市开发有限公司 一种智慧隧道交通运行管控方法、设备及介质
CN116824862B (zh) * 2023-08-28 2023-12-01 济南瑞源智能城市开发有限公司 一种智慧隧道交通运行管控方法、设备及介质

Also Published As

Publication number Publication date
KR102269227B1 (ko) 2021-06-25

Similar Documents

Publication Publication Date Title
WO2022203125A1 (fr) Appareil de sécurité pour le déplacement dans des tunnels et sur toutes les routes
WO2016114488A1 (fr) Procédé de réglage de zone de détection pour détecter le passage de véhicules, et procédé de commande de signal de circulation l'utilisant
WO2017026642A1 (fr) Système de sécurité de feu de circulation pour passage piéton
WO2020050498A1 (fr) Procédé et dispositif destinés à détecter un milieu environnant à l'aide d'une segmentation d'image
KR100862561B1 (ko) 교통사고 검지 시스템
EP3844714A1 (fr) Procédé et appareil de segmentation d'image en utilisant un capteur d'événement
WO2018186583A1 (fr) Procédé d'identification d'obstacle sur un terrain de conduite et robot pour sa mise en œuvre
WO2017119557A1 (fr) Dispositif d'aide à la conduite et son procédé de commande
WO2020226258A1 (fr) Véhicule à conduite autonome et système de guidage relatif aux piétons et procédé l'utilisant
WO2019139310A1 (fr) Appareil de conduite autonome et procédé de conduite autonome d'un véhicule
RU2667338C1 (ru) Способ обнаружения объектов и устройство обнаружения объектов
WO2016088960A1 (fr) Procédé et système de détection, dans un environnement nocturne, de danger dû à la présence de piéton, pour système avancé d'aide à la conduite
WO2018105842A1 (fr) Système de détection d'incident à haute précision basé sur un radar
WO2020241930A1 (fr) Procédé d'estimation d'emplacement à l'aide de capteurs multiples et robot de mise en œuvre de ceux-ci
WO2016112557A1 (fr) Système de guidage à l'intérieur pour aveugles, faisant appel à des étiquettes électroniques, et procédé associé
KR102294286B1 (ko) 터널 내 및 모든 도로 주행 안전 장치
WO2015093823A1 (fr) Dispositif d'assistance à la conduite de véhicule et véhicule le comportant
WO2019045293A1 (fr) Procédé permettant de générer un trajet local orienté cible et robot permettant de mettre en œuvre celui-ci
WO2023120831A1 (fr) Procédé de désidentification et programme informatique enregistré sur un support d'enregistrement en vue de son exécution
WO2018230864A2 (fr) Procédé de détection de la profondeur d'un objet en prenant en considération la lumière extérieure, et dispositif l'exécutant
WO2019199112A1 (fr) Système et procédé de travail autonome et support d'enregistrement lisible par ordinateur
WO2020230931A1 (fr) Robot générant une carte sur la base d'un multi-capteur et d'une intelligence artificielle, configurant une corrélation entre des nœuds et s'exécutant au moyen de la carte, et procédé de génération de carte
CN112907981A (zh) 一种用于分流路口交通拥堵车辆的分流装置及其控制方法
WO2020085653A1 (fr) Procédé et système de suivi multi-piéton utilisant un fern aléatoire enseignant-élève
WO2015093853A1 (fr) Dispositif auxiliaire de conduite de véhicule et véhicule doté de celui-ci

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933353

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE