CN117291864A - Visibility estimating device and method, and recording medium

Visibility estimating device and method, and recording medium

Info

Publication number
CN117291864A
CN117291864A (application CN202210721991.7A)
Authority
CN
China
Prior art keywords
learning
image data
visibility
estimating
completion model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210721991.7A
Other languages
Chinese (zh)
Inventor
境野英朋
那特纳帕特·盖维法特
帕特玛瓦蒂·理派松本
纳特纳日·克莱夫西里库尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weizhe Information Consulting Co ltd
Original Assignee
Weizhe Information Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weizhe Information Consulting Co ltd filed Critical Weizhe Information Consulting Co ltd
Priority to CN202210721991.7A priority Critical patent/CN117291864A/en
Publication of CN117291864A publication Critical patent/CN117291864A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The server device inputs image data showing an outdoor landscape to a learning completion model, and acquires visibility data output by the learning completion model that represents the visibility of the landscape shown in the image data, thereby estimating that visibility. The learning completion model is trained in advance on learning data in which learning synthetic image data showing simulated fog is associated with a correct value of the visibility of the landscape shown in the learning synthetic image data.

Description

Visibility estimating device and method, and recording medium
Technical Field
The present disclosure relates to a visibility estimating device, a visibility estimating method, and a recording medium having a visibility estimating program recorded thereon.
Background
Conventionally, a visibility estimating device capable of estimating visibility without a special measuring device has been known (for example, patent document 1). That visibility estimating device acquires an image captured outdoors by a photographing unit, estimates the ambient light and a light transmission map from the image, and generates a sharpened image based on them. It then identifies an object in the sharpened image; converts the pixel scale of the object to a physical scale based on information about the object's actual size; calculates the object's depth-direction distance from the depth component of that physical scale; calculates an atmospheric light attenuation parameter by applying a relational expression between the transmission map and the distance from the photographing unit; and calculates the visibility from the depth-direction distance using the relational expression with the calculated attenuation parameter applied.
Prior art literature
Patent literature
Patent document 1: JP-A-2021-053789
Disclosure of Invention
Problems to be solved by the invention
When outdoor visibility is estimated by inputting an image showing the outdoors to a learning completion model generated in advance by machine learning, the learning completion model needs to be trained on a sufficient amount of learning data. However, there is the following problem: because few image data showing actual fog exist, it is difficult to obtain a learning completion model that estimates visibility with high accuracy.
The present disclosure has been made in view of the above-described circumstances, and an object thereof is to provide a visibility estimating device, a visibility estimating method, and a recording medium having a visibility estimating program recorded thereon, which are capable of estimating visibility with high accuracy using a learning completion model.
Means for solving the problems
In order to solve the above-described problems, a visibility estimating apparatus of the present disclosure includes an image acquiring section and an estimating section. The image acquiring section acquires image data showing an outdoor landscape. The estimating section inputs the image data acquired by the image acquiring section to a learning completion model, and acquires visibility data output by the learning completion model that represents the visibility of the landscape shown in the image data, thereby estimating that visibility. The learning completion model is trained in advance on learning data in which learning composite image data showing simulated fog is associated with a correct value of the visibility of the landscape shown in the learning composite image data.
Effects of the invention
According to the present disclosure, visibility can be estimated with high accuracy using the learning completion model.
Drawings
Fig. 1 is a schematic diagram showing a configuration example of a visibility estimating system according to an embodiment.
Fig. 2 is a block diagram showing a functional example of the server apparatus.
Fig. 3 is a diagram for explaining the learning completion model.
Fig. 4 is a diagram for explaining a method of generating synthetic image data for learning.
Fig. 5 is a diagram for explaining previous image data in which position information of a road lane is set.
Fig. 6 is a block diagram showing an outline configuration example of the learning device.
Fig. 7 is a schematic block diagram of a computer functioning as a server device or a learning device.
Fig. 8 is a diagram for explaining a process performed by the learning device.
Fig. 9 is a diagram for explaining a process performed by the server apparatus.
Fig. 10 is a diagram for explaining road patterns.
Fig. 11 is a diagram showing experimental results of the method proposed in the present embodiment.
Detailed Description
Next, an embodiment will be described with reference to the drawings. The embodiment described below applies the present invention to a visibility estimating system.
[1, overview of the structure and function of visibility estimating System ]
First, a configuration and an outline function of a visibility estimating system according to an embodiment of the present invention will be described with reference to fig. 1.
As shown in fig. 1, the visibility estimating system 1 includes a server apparatus 10 and a terminal apparatus 20. The server device 10 calculates weather information, such as the visibility at each location, from image data supplied by cameras C installed at locations such as roads. The terminal device 20 displays the weather information, such as visibility, provided by the server device 10.
Each terminal apparatus 20 and the server apparatus 10, which is an example of the visibility estimating apparatus, are connected through the network 3. A camera C, as an example of the photographing means, captures outdoor conditions such as roads. The camera C communicates with the server apparatus 10 wirelessly via the radio station 5. The network 3 may be, for example, the internet, a dedicated communication line (for example, a Community Antenna Television (CATV) line), or a mobile communication network (including base stations and the like).
The server device 10 is a server device of a company that provides various weather information. The server apparatus 10 provides the terminal apparatus 20 with weather information for traffic facilities such as highways, ordinary roads, railways, and airports, and/or weather information for sea routes, estuaries, the open sea, and the like. Examples of weather information include weather, air temperature, humidity, air pressure, wind direction, wind speed, and visibility, which indicates the visual range.
The server device 10 may provide traffic information such as traffic jams and traffic regulations, running conditions of trains, planes, and the like, and sailing conditions of ships.
The server device 10 performs image analysis on the image data from the cameras C at each location, and calculates a value related to weather such as visibility. The server device 10 may perform image analysis on the image data from the camera C, and calculate and provide the road surface state, runway state, offshore state, traffic volume, and the like.
The terminal device 20 receives weather information including the estimation result of the road visibility from the server device 10 and displays the weather information.
The camera C is, for example, a color or black-and-white digital camera having an imaging element such as a Charge Coupled Device (CCD) image sensor or a Complementary Metal Oxide Semiconductor (CMOS) image sensor. The camera C captures still images, moving images, and the like. The camera C may be mounted on a moving body such as a vehicle, an airplane, or a train; it may be a monocular or a compound-eye camera; and it may be a camera mounted on a mobile terminal such as a smartphone.
The cameras C are installed at predetermined intervals and/or at predetermined positions along a road such as an expressway. For example, a pole is erected beside the road and a camera C is mounted on it at a predetermined height, directed along the direction of travel of the road. The camera C may also be installed so as to photograph the road from above.
[2, structure and function of server apparatus ]
Next, the structure and function of the server apparatus 10 will be described.
Fig. 2 is a block diagram showing an example of the functional configuration of the server apparatus 10 according to the embodiment. The server apparatus 10 is an example of the visibility estimating apparatus of the present disclosure, and as shown in fig. 2, the server apparatus 10 functionally includes a receiving section 100, an image data storing section 102, an image acquiring section 104, a learning completion model storing section 106, and an estimating section 108. The server apparatus 10 of the embodiment uses the learning completion model to estimate the visibility of the scenery reflected in the image data.
When a machine learning model such as a neural network model is used, it is preferable to train the model on a sufficient amount of learning data to generate a learning completion model; when a sufficient amount of learning data cannot be collected, a learning completion model with good accuracy often cannot be generated. In this regard, image data showing actual fog are scarce. Therefore, even if visibility were estimated using fog images and a learning completion model, low accuracy would be expected, because the scarcity of actual fog images makes it difficult to generate an accurate learning completion model.
Therefore, the server apparatus 10 according to the embodiment estimates the visibility of the landscape shown in the image data to be evaluated using a learning completion model trained in advance on synthetic image data of fog. The visibility can thus be estimated with high accuracy. This is described in detail below.
The receiving unit 100 successively receives image data captured by the camera C. Then, the receiving unit 100 stores the received image data in the image data storage unit 102.
The image data storage unit 102 stores image data captured by the camera C.
The image acquisition section 104 acquires image data of an object of which visibility is to be estimated from among a plurality of image data stored in the image data storage section 102.
The learning completion model storage unit 106 stores a learning completion model for estimating the visibility of the scenery reflected in the image data. Fig. 3 shows a learning completion model of the present embodiment. As shown in fig. 3, when image data is input, the learning completion model M of the present embodiment outputs visibility estimation data representing the visibility of a landscape that is reflected in the image data. The visibility estimation data is data representing a numerical value of visibility.
The learning completion model M of the present embodiment is a learning completion model in which learning is performed in advance based on learning data in which learning synthetic image data in which simulated fog is reflected is associated with a correct value of the visibility of a landscape reflected in the learning synthetic image data.
Fig. 4 is a diagram illustrating a method for generating learning composite image data according to the present embodiment. As shown in fig. 4 (STEP 1), the actual image data is composited with the semi-transparent grayscale image shown at the left to generate the learning composite image data showing simulated fog at the right of (STEP 1). Three pieces of learning composite image data are generated by varying the transparency of that grayscale image.
As shown in fig. 4 (STEP 2), the actual image data is composited with a blurring filter to generate the learning composite image data showing simulated fog at the right end. Applying this blurring filter to the actual image data adds blur corresponding to distance. For example, the blur corresponding to distance may be generated from a preset exponential function and added to the actual image data. When x is a vertical pixel position in the image data and y is the pixel value at position x, a blur corresponding to distance is generated from the exponential function y = exp(-λx) and added to the actual image data to generate the learning composite image data. The variable λ is adjusted so that distances between pixels in the image data correspond to actual physical distances. This yields learning composite image data in which the lower part of the image is only lightly blurred and the upper part is strongly blurred, so the composite appears as if fog had settled over the actual landscape.
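For illustration only, STEP 2 can be sketched as a per-row blend toward a white fog layer whose weight follows y = exp(-λx). This is a minimal sketch under that assumption; the function name, file name, and the λ value are illustrative and not taken from this disclosure.

import numpy as np
from PIL import Image

def add_distance_fog(path, lam=0.004):
    # Load the actual image data as a float array.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    h = img.shape[0]
    x = np.arange(h, dtype=np.float32)           # vertical pixel position, 0 = top (far)
    alpha = np.exp(-lam * x)[:, None, None]      # y = exp(-lambda * x): strong fog at top
    out = (1.0 - alpha) * img + alpha * 255.0    # blend toward a white fog layer
    return Image.fromarray(out.astype(np.uint8))

# lam would be tuned so that pixel rows map onto physical distance, as the
# text describes; STEP 1's uniform gray overlay is the constant-alpha case.
foggy = add_distance_fog("road_scene.jpg")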
As shown in fig. 4 (STEP 3), the actual image data is composited with a grayscale image whose transparency is high in the central portion to generate the learning composite image data showing simulated fog at the right.
The actual image data may also be composited with data of at least one of simulated vehicle headlights, simulated road-shoulder lamps provided on a simulated road surface, simulated local illumination of a building, simulated sunlight, and simulated moonlight to generate learning composite image data.
The actual image data may also be composited with at least one of data representing a simulated sunny sky, simulated clouds, simulated rain, and simulated snow to generate learning composite image data.
Further, Perlin noise may be added to the actual image data to generate learning synthetic image data showing fog, clouds, or the like.
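As a sketch of the Perlin-noise variant, the following composites a noise-driven fog mask over an actual image using the widely available "noise" package; the package choice, parameters, and names are assumptions for illustration, not part of this disclosure.

import numpy as np
import noise                          # Perlin/simplex noise (pip install noise)
from PIL import Image

def add_perlin_fog(path, scale=100.0, strength=0.7, octaves=4):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    h, w = img.shape[:2]
    # Sample 2-D Perlin noise over the image grid.
    mask = np.array([[noise.pnoise2(i / scale, j / scale, octaves=octaves)
                      for j in range(w)] for i in range(h)], dtype=np.float32)
    mask = (mask - mask.min()) / (mask.max() - mask.min())  # normalize to [0, 1]
    alpha = strength * mask[:, :, None]
    out = (1.0 - alpha) * img + alpha * 255.0               # patchy fog / cloud
    return Image.fromarray(out.astype(np.uint8))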
The estimating unit 108 reads the learning completion model M stored in the learning completion model storage unit 106 and inputs the image data acquired by the image acquisition section 104 to the learning completion model M. The learning completion model M then outputs visibility estimation data representing the visibility of the landscape shown in the image data, and the estimating unit 108 adopts the value indicated by that data as the estimated visibility of the landscape shown in the image data.
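The disclosure does not specify the model architecture or runtime. Purely as a hedged sketch, the estimating step could look as follows if the learning completion model M were a PyTorch regression network; the file name, input size, and preprocessing are illustrative assumptions.

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size
    transforms.ToTensor(),
])

def estimate_visibility(img, model):
    # Returns the visibility estimation data (a single numerical value).
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x).item()

# Assumes the model was serialized with torch.save(model); name is illustrative.
model = torch.load("learning_completion_model.pt", map_location="cpu")
model.eval()
visibility = estimate_visibility(Image.open("camera_frame.jpg"), model)
print(f"estimated visibility: {visibility:.0f} m")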
In addition, when a road having a plurality of lanes is shown in the image data, the estimating unit 108 estimates the visibility for each of those lanes based on position information of the lanes set in advance on previous image data showing the same landscape as the current image data.
Fig. 5 shows an example of the previous image data. As shown in fig. 5, the lanes of the road are marked in the previous image data with black lines. The estimating section 108 therefore estimates the visibility for each of the lanes shown in the image data based on the position information indicated by those black lines.
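Per-lane estimation could then reuse the estimator from the sketch above on lane regions derived from the black-line positions; the crop-based approach and the helper names below are illustrative assumptions, not the patented procedure.

from PIL import Image

def estimate_per_lane(frame, lane_boxes, model):
    # lane_boxes: lane name -> (left, upper, right, lower) pixel box, derived
    # from the black lines set on the previous image data (fig. 5).
    results = {}
    for lane, box in lane_boxes.items():
        crop = frame.crop(box)
        results[lane] = estimate_visibility(crop, model)  # sketch above
    return results

lanes = {"lane_1": (0, 200, 320, 480), "lane_2": (320, 200, 640, 480)}  # example boxes
per_lane = estimate_per_lane(Image.open("camera_frame.jpg"), lanes, model)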
The learning completion model M stored in the learning completion model storage unit 106 is obtained, for example, by learning performed in advance by the learning device 30 shown in fig. 6.
As shown in fig. 6, the learning device 30 functionally includes an actual image data storage unit 300, a synthesized image data generation unit 302, a learning data storage unit 304, a learning unit 306, and a learning completion model storage unit 308.
The actual image data storage unit 300 stores a plurality of actual image data actually captured.
The composite image data generating unit 302 applies the above-described processing to the plurality of actual image data stored in the actual image data storage unit 300 to generate learning composite image data. It then stores, in the learning data storage unit 304, learning data that associates the learning composite image data with a correct value of the visibility of the landscape shown in that data. The correct value of the visibility may be set in advance according to how the learning composite image data was generated, or may be assigned manually.
The learning data storage unit 304 stores learning data for generating the learning completion model M shown in fig. 3. Specifically, the learning data storage unit 304 stores learning data in which the learning synthetic image data generated by the synthetic image data generation unit 302 is associated with a correct value of the visibility of the scenery reflected in the learning synthetic image data. The learning data storage unit 304 stores learning data in which learning actual image data on which fog is displayed is associated with a correct value of the visibility of a landscape displayed in the learning actual image data.
The learning unit 306 reads the plurality of pieces of learning data stored in the learning data storage unit 304. Then, the learning unit 306 learns a predetermined machine learning model using a supervised machine learning algorithm based on the read plurality of learning data, thereby generating a learning completion model M.
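The supervised machine learning algorithm itself is not specified in this disclosure. As a minimal sketch, assuming a small convolutional regressor trained with a mean-squared-error loss in PyTorch (architecture and hyperparameters are illustrative):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(images, visibility, epochs=10):
    # images: (N, 3, H, W) learning composite images; visibility: (N,) correct values.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),                       # scalar visibility output
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(TensorDataset(images, visibility), batch_size=32, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            loss = nn.functional.mse_loss(model(x).squeeze(1), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model                                # the learning completion model M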
The learning completion model storage unit 308 stores the learning completion model M generated by the learning unit 306.
Both the server apparatus 10 and the learning apparatus 30 can be realized by, for example, a computer 50 shown in fig. 7. The computer 50 includes a CPU 51, a memory 52 as a temporary storage area, and a nonvolatile storage unit 53. In addition, the computer 50 includes an input/output interface (I/F) 54 connected to an input/output device or the like (not shown), and a read/write (R/W) section 55 that controls reading and writing of data to and from the recording medium 59. The computer 50 further includes a network interface (I/F) 56 connected to a network such as the internet. The CPU 51, the memory 52, the storage 53, the input/output I/F54, the R/W55, and the network I/F56 are connected to each other via a bus 57.
The storage unit 53 may be implemented by a Hard Disk Drive (HDD), a Solid State Drive (SSD), a flash memory, or the like. The storage unit 53, as a storage medium, stores a program for causing the computer 50 to function. The CPU 51 reads the program from the storage unit 53, expands it into the memory 52, and sequentially executes the processes included in the program.
The functions realized by the program may instead be realized by, for example, a semiconductor integrated circuit, more specifically an Application Specific Integrated Circuit (ASIC), or the like.
< action of learning device 30 >
Next, the operation of the learning device 30 according to the embodiment will be described. In a state where the actual image data storage unit 300 stores a plurality of actual image data, the learning device 30 executes the learning processing routine shown in fig. 8 upon receiving an instruction signal for learning processing.
In step S100, the composite image data generating section 302 acquires the plurality of actual image data stored in the actual image data storage section 300.
In step S102, the composite image data generating unit 302 generates learning composite image data by compositing simulated fog or the like onto the plurality of actual image data acquired in step S100. The composite image data generating unit 302 then associates each piece of learning composite image data with a correct value of the visibility of the landscape shown in it, and stores the pairs as learning data in the learning data storage unit 304.
In step S104, the learning unit 306 acquires a plurality of pieces of learning data stored in the learning data storage unit 304.
In step S106, the learning unit 306 learns a predetermined machine learning model using a supervised machine learning algorithm based on the plurality of learning data acquired in step S104, thereby generating a learning completion model M.
In step S108, the learning unit 306 saves the learning completion model M generated in step S106 in the learning completion model storage unit 308, and ends the learning processing routine.
< action of the server apparatus 10 >
Next, the operation of the server device 10 according to the embodiment will be described. When the learning completion model M is input to the server apparatus 10, the server apparatus 10 saves it in the learning completion model storage section 106. The server device 10 receives the image data successively captured by the camera C and stores it in the image data storage unit 102. Upon receiving an instruction signal to estimate visibility, the server apparatus 10 executes the visibility estimation processing routine shown in fig. 9.
In step S200, the image acquisition section 104 acquires image data of an object for which visibility is to be estimated from the plurality of image data stored in the image data storage section 102.
In step S202, the estimating unit 108 reads the learning completion model M stored in the learning completion model storage unit 106.
In step S204, the estimating section 108 inputs the image data acquired in step S200 into the learning completion model M read in step S202, thereby estimating the visibility of the scenery reflected in the image data.
In step S206, the estimating unit 108 outputs the visibility value estimated in step S204 as a result, and ends the estimation processing routine.
Further, the estimated visibility value is transmitted to, for example, another external server (not shown) or the terminal device 20.
As described above, the server apparatus of the present embodiment inputs image data showing an outdoor landscape to the learning completion model and acquires the visibility data output by the model, which represents the visibility of the landscape shown in the image data, thereby estimating that visibility. The learning completion model is trained in advance on learning data in which learning synthetic image data showing simulated fog is associated with a correct value of the visibility of the landscape shown in that data. The visibility can thus be estimated with high accuracy using the learning completion model.
The present invention is not limited to the above embodiment. The above embodiment is illustrative; any configuration having substantially the same structure and producing the same operational effects as the technical idea described in the claims of the present invention falls within its technical scope.
For example, the estimating unit 108 may determine the pattern of the road shown in the image data and input the image data to a learning completion model trained in advance for each road pattern, thereby estimating the visibility of the landscape shown in the image data. As shown in fig. 10, the installation positions of the cameras C vary widely, and the image data they capture vary accordingly; estimating with a pattern-specific model therefore allows the visibility to be estimated more accurately.
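Schematically, pattern-wise estimation amounts to dispatching on a road-pattern classifier before reusing the estimator sketched earlier; classify_road_pattern and the pattern names below are hypothetical stand-ins.

def classify_road_pattern(frame):
    # Stand-in classifier; a real system might key on the camera ID or use a
    # trained classifier over patterns such as straight road, curve, overhead view.
    return "straight"

def estimate_by_road_pattern(frame, pattern_models):
    # pattern_models: road-pattern name -> learning completion model trained
    # in advance only on images of that pattern.
    pattern = classify_road_pattern(frame)
    return estimate_visibility(frame, pattern_models[pattern])  # sketch above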
Examples (example)
Next, an example will be described: experimental results concerning the effect of the method proposed in this embodiment. Fig. 11 shows these results.
As shown in the upper part of fig. 11, the larger the number of synthesized fog images (indicated as "artificial images" in fig. 11), the lower the average error of the visibility estimation.
As shown in the middle part of fig. 11, the more kinds of noise are added to the synthesized images during model training, the lower the average error of the visibility estimation.
As shown in the lower part of fig. 11, the average error of the visibility estimation is lower when road shape patterns are classified than when they are not.
Therefore, the method of the present embodiment shows that visibility can be estimated with high accuracy by using a learning completion model trained on learning synthetic image data showing simulated fog.
[ description of reference numerals ]
1: visibility estimation system
10: server device
C: camera with camera body

Claims (8)

1. A visibility estimating apparatus, comprising:
an image acquisition unit configured to acquire image data showing an outdoor landscape; and
an estimating section that inputs the image data acquired by the image acquisition unit to a learning completion model and acquires visibility data output by the learning completion model representing the visibility of the landscape shown in the image data, thereby estimating the visibility of the landscape shown in the image data, wherein
the learning completion model is a learning completion model trained in advance on learning data in which learning synthetic image data showing simulated fog is associated with a correct value of the visibility of the landscape shown in the learning synthetic image data.
2. The visibility estimating apparatus according to claim 1, wherein
noise is added to at least a part of the plurality of pieces of learning synthetic image data.
3. The visibility estimating device according to claim 1 or 2, characterized in that
at least a part of the plurality of pieces of learning synthetic image data is composited with data of at least one of: a simulated vehicle headlight, a simulated road-shoulder lamp provided on a simulated road surface, simulated local illumination of a building, simulated sunlight, and simulated moonlight.
4. The visibility estimating device according to claim 1 or 2, characterized in that
at least a part of the plurality of pieces of learning synthetic image data is composited with at least one of: data representing a simulated sunny sky, data representing simulated clouds, data representing simulated rain, and data representing simulated snow.
5. The visibility estimating device according to claim 1 or 2, characterized in that
blur corresponding to distance is added to at least a part of the plurality of pieces of learning composite image data.
6. The visibility estimating device according to claim 1 or 2, characterized in that
the estimating section is configured to determine a pattern of a road shown in the image data and to input the image data to a learning completion model trained in advance for each road pattern, thereby estimating the visibility of the landscape shown in the image data.
7. A visibility estimation method performed by a computer, the method comprising:
acquiring image data showing an outdoor landscape; and
inputting the acquired image data to a learning completion model and acquiring visibility data output by the learning completion model representing the visibility of the landscape shown in the image data, thereby estimating the visibility of the landscape shown in the image data, wherein
the learning completion model is a learning completion model trained in advance on learning data in which learning composite image data showing simulated fog is associated with a correct value of the visibility of the landscape shown in the learning composite image data.
8. A computer-readable recording medium having recorded thereon a visibility estimating program that causes a computer to execute:
acquiring image data showing an outdoor landscape; and
inputting the acquired image data to a learning completion model and acquiring visibility data output by the learning completion model representing the visibility of the landscape shown in the image data, thereby estimating the visibility of the landscape shown in the image data, wherein
the learning completion model is a learning completion model trained in advance on learning data in which learning composite image data showing simulated fog is associated with a correct value of the visibility of the landscape shown in the learning composite image data.
CN202210721991.7A 2022-06-17 2022-06-17 Visibility estimating device and method, and recording medium Pending CN117291864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210721991.7A CN117291864A (en) 2022-06-17 2022-06-17 Visibility estimating device and method, and recording medium


Publications (1)

Publication Number Publication Date
CN117291864A true CN117291864A (en) 2023-12-26

Family

ID=89237782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210721991.7A Pending CN117291864A (en) 2022-06-17 2022-06-17 Visibility estimating device and method, and recording medium

Country Status (1)

Country Link
CN (1) CN117291864A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112912689A (en) * 2019-09-19 2021-06-04 纬哲纽咨信息咨询有限公司 Visibility estimation device, visibility estimation method, and recording medium


Similar Documents

Publication Publication Date Title
JP5776545B2 (en) Road surface inspection program and road surface inspection device
US8818026B2 (en) Object recognition device and object recognition method
JP4647514B2 (en) Aerial image processing apparatus and aerial image processing method
JP4493050B2 (en) Image analysis apparatus and image analysis method
CN110415544B (en) Disaster weather early warning method and automobile AR-HUD system
CN110135302B (en) Method, device, equipment and storage medium for training lane line recognition model
CN109631776B (en) Automatic measurement method for icing thickness of high-voltage transmission line conductor
CN110046584B (en) Road crack detection device and detection method based on unmanned aerial vehicle inspection
CN112446246B (en) Image occlusion detection method and vehicle-mounted terminal
JP7092615B2 (en) Shadow detector, shadow detection method, shadow detection program, learning device, learning method, and learning program
CN110084218A (en) The rainwater distributed data treating method and apparatus of vehicle
CN110736472A (en) indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
CN117291864A (en) Visibility estimating device and method, and recording medium
JP2022039188A (en) Position attitude calculation method and position attitude calculation program
JP7444148B2 (en) Information processing device, information processing method, program
CN113408454A (en) Traffic target detection method and device, electronic equipment and detection system
CN113192353A (en) Map generation data collection device, map generation data collection method, and vehicle
CN113743151A (en) Method and device for detecting road surface sprinkled object and storage medium
JP2016166794A (en) Image creating apparatus, image creating method, program for image creating apparatus, and image creating system
US20230081098A1 (en) Deterioration diagnosis device, deterioration diagnosis method, and recording medium
CN117291865A (en) Visibility estimating device and method, and recording medium
CN113643374A (en) Multi-view camera calibration method, device, equipment and medium based on road characteristics
JP2012203722A (en) Feature selection system, feature selection program, and feature selection method
CN112233079A (en) Method and system for fusing images of multiple sensors
JP6901647B1 (en) Visibility estimation device, visibility estimation method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination