CN116310598B - Obstacle detection method and device for severe weather

Obstacle detection method and device for severe weather

Info

Publication number
CN116310598B
CN116310598B (application CN202310550189.0A)
Authority
CN
China
Prior art keywords
image
classification information
intermediate image
processed
rainwater
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310550189.0A
Other languages
Chinese (zh)
Other versions
CN116310598A (en)
Inventor
肖涛
韩兆宇
徐卫星
姚俊俊
戚原野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Haitu Information Technology Co., Ltd.
Original Assignee
Changzhou Haitu Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Haitu Information Technology Co., Ltd.
Priority to CN202310550189.0A
Publication of CN116310598A
Application granted
Publication of CN116310598B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

An embodiment of the application provides a method and a device for detecting obstacles in severe weather, belonging to the technical field of image processing. The method comprises: collecting a current environment image; classifying the current environment image based on a pre-trained model to obtain classification information, the classification information comprising first classification information representing severe weather and second classification information representing normal weather; if the classification information is the first classification information, performing secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image, the to-be-processed environment image being the image in the current environment image that carries the first classification information; and detecting an obstacle in the target image. The application accelerates detection without reducing obstacle detection accuracy, improving driving safety in severe weather.

Description

Obstacle detection method and device for severe weather
Technical Field
The application relates to the technical field of image processing, in particular to a method and a device for detecting obstacles in severe weather.
Background
In severe weather (fog, rain, snow), driver visibility is greatly reduced. Recognizing forward obstacles (people, vehicles, and other objects) is especially difficult when driving in such conditions: human vision is limited, and even when a forward obstacle is seen there may not be enough time to steer away or brake.
To address this problem, the prior art offers three approaches: (1) color correction algorithms based on channel restoration and brightness-channel enhancement; (2) the CNN-based DehazeNet, which takes a fog-blurred image as input, outputs its transmissivity, and recovers a clear, fog-free image based on atmospheric scattering model theory; and (3) object detection using millimeter-wave radar and thermal imaging.
However, approach (1) only makes the pictures clearer and more visible, optimizing the sample rather than actually removing fog, rain, or snow.
The defogging algorithm of approach (2) works well, but its effect is limited to fog;
approach (3), using millimeter-wave radar and thermal imaging, identifies objects more accurately but requires additional equipment and adds cost.
Overcoming these shortcomings is therefore the problem to be solved.
Disclosure of Invention
The application provides a method and a device for detecting obstacles in severe weather, and aims to solve the problems.
In a first aspect, the present application provides a method for detecting an obstacle in severe weather, the method comprising:
collecting a current environment image;
classifying the current environment image based on a pre-training model to obtain classification information, wherein the classification information comprises first classification information used for representing bad weather and second classification information used for representing normal weather;
if the classification information is first classification information, performing secondary processing on the environment image to be processed carrying the first classification information to obtain a processed target image; the environment image to be processed is an image carrying the first classification information in the current environment image;
detecting an obstacle in the target image.
Optionally, the pre-trained model includes a 5×5 convolution layer, an activation function layer, and a 1×1 fully-connected layer.
Optionally, the pre-trained model includes two 3×3 convolution layers, an activation function layer, and a 1×1 fully-connected layer.
Optionally, the first classification information includes first sub-classification information representing that the severe weather is rain, and performing the secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image comprises:
denoising the to-be-processed environment image carrying the first sub-classification information to obtain a first intermediate image with the fine rain removed;
removing raindrops on the front windshield of the automobile from the first intermediate image to obtain a second intermediate image;
and layering the second intermediate image to obtain a target image with the rainwater removed.
Optionally, removing the raindrops on the front windshield of the automobile from the first intermediate image to obtain a second intermediate image comprises:
inputting the first intermediate image into a preset ConvLSTM model multiple times to form an attention mechanism heat map;
and extracting the features of the rain layer in the attention mechanism heat map with a preset VGG16 model to obtain the second intermediate image.
Optionally, layering the second intermediate image to obtain a target image with the rainwater removed comprises:
layering the second intermediate image according to rainwater in different directions to obtain an image layer and a rainwater layer of the second intermediate image;
and separating the image layer from the rainwater layer to obtain the target image with the rainwater layer removed.
Optionally, the first classification information includes second sub-classification information representing that the severe weather is snow, and performing the secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image comprises:
performing pixel-by-pixel filtering on the to-be-processed environment image carrying the second sub-classification information to obtain a third intermediate image with thin and semi-transparent snow removed;
performing threshold level processing on the third intermediate image to obtain a black-and-white fourth intermediate image;
and removing the dot-shaped objects from the fourth intermediate image to obtain the target image with the snow removed.
Optionally, the first classification information includes third sub-classification information representing that the severe weather is fog, and performing the secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image comprises:
processing the to-be-processed environment image carrying the third sub-classification information based on DehazeNet to obtain a defogged target image.
In a possible embodiment, detecting the obstacle in the target image includes:
cropping the target image to remove useless parts, obtaining an image to be detected;
and detecting the obstacle in the image to be detected using a preset small-target detection algorithm.
In a second aspect, the present application provides an obstacle detection device for use in severe weather, the device comprising:
the acquisition module is used for acquiring the current environment image;
the classification module is used for classifying the current environment image based on a pre-training model to obtain classification information, wherein the classification information comprises first classification information used for representing bad weather and second classification information used for representing normal weather;
the processing module is used for carrying out secondary processing on the environment image to be processed carrying the first classification information if the classification information is the first classification information, so as to obtain a processed target image; the environment image to be processed is an image carrying the first classification information in the current environment image;
and the detection module is used for detecting the obstacle in the target image.
With the obstacle detection method and device for severe weather provided by the application, a current environment image is collected and classified by a pre-trained model into classification information, the classification information comprising first classification information representing severe weather and second classification information representing normal weather. Only images captured in severe weather undergo the secondary processing, reducing the processing spent on normal-weather images, so that obstacles in the target image can be detected rapidly. The application therefore accelerates detection without reducing obstacle detection accuracy and improves driving safety in severe weather.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered limiting of its scope; other related drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to a first embodiment of the present application;
fig. 2 is a flowchart of a method for detecting an obstacle in severe weather according to a second embodiment of the present application;
FIG. 3 is a schematic diagram of a pre-trained model for use in the method of obstacle detection in severe weather shown in FIG. 2;
FIG. 4 is a schematic illustration, before cropping, of a target image used in the obstacle detection method shown in FIG. 2;
FIG. 5 is a schematic illustration of the target image of FIG. 4 after cropping;
fig. 6 is a schematic functional block diagram of an obstacle detecting device according to a third embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the first embodiment, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 100 shown in fig. 1 may be used to implement the obstacle detection method and apparatus for severe weather of the embodiments described below.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, and an image capture device 106, interconnected by a bus system and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, and that the electronic device may have some of the components shown in fig. 1 or may have other components and structures not shown in fig. 1, as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
It should be appreciated that the processor 102 in embodiments of the present application may be a central processing unit (CPU), or another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media.
It should be appreciated that the storage device 104 in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM) which acts as an external cache. By way of example but not limitation, many forms of random access memory (random access memory, RAM) are available, such as Static RAM (SRAM), dynamic Random Access Memory (DRAM), synchronous Dynamic Random Access Memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced Synchronous Dynamic Random Access Memory (ESDRAM), synchronous Link DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
Wherein one or more computer program instructions may be stored on the computer readable storage medium, the processor 102 may execute the program instructions to implement client functions and/or other desired functions in embodiments of the present application as described below (implemented by the processor).
The image acquisition device 106 may be a vehicle-mounted camera to acquire an environment image outside the vehicle in real time.
In a second embodiment, referring to a flowchart of a method for detecting an obstacle in severe weather shown in fig. 2, the method specifically includes the steps of:
step S201, a current environment image is acquired.
It should be understood that the current environmental image is an environmental image outside the current vehicle. For example, an environmental image in front of a vehicle is acquired in real time by an in-vehicle camera.
And step S202, classifying the current environment image based on a pre-training model to obtain classification information.
Wherein the classification information includes first classification information for characterizing bad weather and second classification information for characterizing normal weather.
Optionally, the pre-trained model includes a 5×5 convolution layer, an activation function layer, and a 1×1 fully-connected layer.
To further reduce the number of parameters, two 3×3 convolution layers are used instead of one 5×5 layer, constructing a new pre-trained model consisting of two 3×3 convolution layers, an activation function layer, and a 1×1 fully-connected layer.
It will be appreciated that two stacked 3×3 convolutions have the same receptive field as one 5×5 convolution. As shown in fig. 3, taking a 5×5 patch of pixels as an example, the 5×5 convolution on the left reduces it to one pixel; applying the 3×3 convolution twice likewise leaves one pixel, so the recognition effect is not reduced. Considering the computation next, the parameter count of a convolution layer is k×k×C1×C2 (the standard formula): a 5×5 convolution has 5×5×C×C = 25C² parameters, while two 3×3 convolutions have 2×(3×3×C×C) = 18C² parameters, which is clearly fewer, so the two 3×3 convolutions are preferable.
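As a quick sanity check of the 25C² versus 18C² comparison, the following minimal sketch computes both counts in pure Python (the channel width C = 64 is an illustrative assumption, not a value from the patent):

```python
# Parameter-count check for the receptive-field argument above.
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in one k x k convolution layer (bias ignored)."""
    return k * k * c_in * c_out

C = 64  # illustrative channel width; the 25:18 ratio holds for any C
five_by_five = conv_params(5, C, C)            # 25 * C**2 = 102400
two_three_by_three = 2 * conv_params(3, C, C)  # 18 * C**2 = 73728

print(five_by_five, two_three_by_three)  # two 3x3 layers use ~28% fewer weights
```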
It should be noted that the image size is greater than 10×10 pixels.
It should be understood that the fully-connected layer accounts for roughly 80% of the parameters in a traditional network structure. By changing the traditional structure and replacing the fully-connected layer with a 1×1 convolution structure, a large number of parameters are saved and the network runs faster, with little impact on the accuracy of the classification network.
Alternatively, the pre-trained model uses MobileNetV3 for image classification. First, transfer learning is applied: the large ImageNet dataset serves as the pre-training source (a traffic-specific dataset can be substituted for a better fit to the target scene), and the weights obtained by pre-training are transferred to the classification model. A good model can thus be obtained from fewer samples, achieving a good classification effect, saving detection time, and increasing generalization so that various driving conditions can be handled.
Next, a multi-layer perceptron (MLP) head is added. The MLP comprises a 5×5 convolution and a 1×1 fully-connected layer, with a ReLU activation layer between them.
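As a concrete illustration of this classification network, here is a minimal PyTorch sketch: a MobileNetV3 backbone pretrained on ImageNet, followed by the MLP head described above (5×5 convolution, ReLU, 1×1 convolution in place of a fully-connected layer). The intermediate width of 128, the frozen backbone, and the two-class output (severe vs. normal weather) are illustrative assumptions, not values given in the patent:

```python
import torch
import torch.nn as nn
from torchvision import models

class WeatherClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
        self.features = backbone.features           # pretrained, frozen for transfer learning
        for p in self.features.parameters():
            p.requires_grad = False
        c = 576                                      # output channels of mobilenet_v3_small features
        self.head = nn.Sequential(
            nn.Conv2d(c, 128, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, kernel_size=1),  # 1x1 conv stands in for the FC layer
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

logits = WeatherClassifier()(torch.randn(1, 3, 224, 224))  # -> shape (1, 2)
```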
It can be understood that, to improve classification efficiency, the current environment image is cleaned after collection: broken images are filtered out, the images are denoised, and blurred images are sharpened to obtain clear, lossless images; step S202 is then performed on these images.
Step S203, if the classification information is the first classification information, performing secondary processing on the to-be-processed environmental image carrying the first classification information, so as to obtain a processed target image.
The environment image to be processed is an image carrying the first classification information in the current environment image.
As one severe-weather processing scenario, the first classification information includes first sub-classification information representing that the severe weather is rain, and step S203 includes: denoising the to-be-processed environment image carrying the first sub-classification information to obtain a first intermediate image with the fine rain removed; removing raindrops on the front windshield of the automobile from the first intermediate image to obtain a second intermediate image; and layering the second intermediate image to obtain a target image with the rainwater removed.
It will be appreciated that fine rain may be regarded as noise, so the first intermediate image with the rain removed is obtained by applying a pixel-by-pixel filtering process to the image.
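A minimal sketch of this step follows, treating fine rain as noise removed by a per-pixel filter; the patent does not name a specific filter, so the median filter and its kernel size here are assumptions:

```python
import cv2

def remove_fine_rain(img_bgr):
    # Per-pixel (median) filtering suppresses thin, high-frequency rain streaks.
    return cv2.medianBlur(img_bgr, 5)  # kernel size 5 is an assumed value

first_intermediate = remove_fine_rain(cv2.imread("rainy_frame.png"))
```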
Optionally, removing the raindrops on the front windshield of the automobile from the first intermediate image to obtain a second intermediate image includes: inputting the first intermediate image into a preset ConvLSTM (Convolutional Long Short-Term Memory) model multiple times to form an attention mechanism heat map; and extracting the features of the rain layer in the attention mechanism heat map with a preset VGG (Visual Geometry Group) 16 model to obtain the second intermediate image.
It can be appreciated that feeding the image through the preset ConvLSTM model multiple times makes the rain features more salient. In addition, to obtain better image quality, VGG16 is used to extract high-level features so that the extracted rain layer differs clearly from the background image; the raindrops on the automobile's front windshield can thus be removed, reducing the influence of the rainwater and making it easier to improve obstacle detection accuracy.
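The sketch below illustrates this recurrent-attention idea under stated assumptions: ConvLSTM is not a stock PyTorch layer, so a minimal cell is written out, and the iteration count, channel sizes, and the way the attention map gates the VGG16 input are illustrative choices (with untrained weights), not the patent's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        # One convolution produces the input/forget/output/candidate gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def attention_heatmap(img: torch.Tensor, steps: int = 4, hid: int = 32) -> torch.Tensor:
    """Pass the image through the ConvLSTM several times; each pass refines a
    one-channel attention map over raindrop pixels."""
    cell = ConvLSTMCell(in_ch=4, hid_ch=hid)   # 3 image channels + previous map
    to_map = nn.Conv2d(hid, 1, 3, padding=1)
    b, _, hgt, wid = img.shape
    h = torch.zeros(b, hid, hgt, wid)
    c = torch.zeros_like(h)
    attn = torch.zeros(b, 1, hgt, wid)
    for _ in range(steps):                     # "input multiple times"
        h, c = cell(torch.cat([img, attn], dim=1), (h, c))
        attn = torch.sigmoid(to_map(h))
    return attn

img = torch.randn(1, 3, 224, 224)              # stand-in for the first intermediate image
attn = attention_heatmap(img)
vgg = models.vgg16(weights="IMAGENET1K_V1").features[:16]  # high-level feature extractor
rain_features = vgg(img * attn)                # rain-layer features, set apart from background
```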
Optionally, layering the second intermediate image to obtain a target image with the rainwater removed includes: layering the second intermediate image according to rainwater in different directions to obtain an image layer and a rainwater layer of the second intermediate image; and separating the image layer from the rainwater layer to obtain the target image with the rainwater layer removed.
It will be appreciated that, for rainwater not on the glass, the application layers the image again according to rainwater in different directions into a bottom (base, that is, image) layer and one or more rain layers (the rain layer may be extracted multiple times), separating the low-frequency background layer (base layer) of the rain image from the high-frequency streak layer (rain layer) to obtain the target image with the rain layer removed.
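A minimal sketch of the base/rain-layer split follows, assuming an edge-preserving bilateral filter as the smoother (the patent does not name the filter, and the direction-specific extraction of multiple rain layers is omitted here):

```python
import cv2
import numpy as np

def split_rain_layer(img_bgr: np.ndarray):
    # Low-frequency base (image) layer via edge-preserving smoothing.
    base = cv2.bilateralFilter(img_bgr, 9, 75, 75)
    # High-frequency detail layer: the rain streaks live here.
    rain = cv2.subtract(img_bgr, base)
    return base, rain

base_layer, rain_layer = split_rain_layer(cv2.imread("second_intermediate.png"))
target_image = base_layer  # target image with the rainwater layer removed
```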
It can be appreciated that by removing the rainwater on the glass first and then removing the rainwater on the non-glass, the effect of removing the rainwater can be further improved, so that the influence of the rainwater is reduced, and the accuracy of detecting the obstacle is further improved.
As another severe-weather processing scenario, the first classification information includes second sub-classification information representing that the severe weather is snow, and step S203 includes: performing pixel-by-pixel filtering on the to-be-processed environment image carrying the second sub-classification information to obtain a third intermediate image with thin and semi-transparent snow removed; performing threshold level processing on the third intermediate image to obtain a black-and-white fourth intermediate image; and removing the dot-shaped objects from the fourth intermediate image to obtain the target image with the snow removed.
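The three snow-removal steps can be sketched as follows; the median filter, the threshold of 200, and the minimum blob area of 30 pixels are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def remove_snow(img_bgr: np.ndarray) -> np.ndarray:
    # 1. Per-pixel filtering removes thin and semi-transparent snow.
    third = cv2.medianBlur(img_bgr, 5)
    # 2. Threshold-level processing yields a black-and-white image.
    gray = cv2.cvtColor(third, cv2.COLOR_BGR2GRAY)
    _, fourth = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # 3. Remove small dot-shaped objects (snowflake blobs).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fourth)
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 30:
            fourth[labels == i] = 0
    return fourth
```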
It can be appreciated that the snow removing method can effectively reduce the difficulty of removing the snow so as to improve the detection accuracy of the obstacle.
As still another severe-weather processing scenario, the first classification information includes third sub-classification information representing that the severe weather is fog, and step S203 includes: processing the to-be-processed environment image carrying the third sub-classification information based on DehazeNet to obtain a defogged target image.
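DehazeNet itself is a trained network, but its final recovery step is the atmospheric scattering model mentioned in the background: a hazy image obeys I(x) = J(x)·t(x) + A·(1 − t(x)), so the clear scene is J(x) = (I(x) − A)/max(t(x), t0) + A. The sketch below shows only this recovery step, with the transmissivity t and atmospheric light A assumed to be supplied by the network:

```python
import numpy as np

def recover_scene(I: np.ndarray, t: np.ndarray, A: float, t0: float = 0.1) -> np.ndarray:
    """Invert the atmospheric scattering model given transmissivity t and
    atmospheric light A (both assumed estimated elsewhere, e.g. by DehazeNet)."""
    t = np.clip(t, t0, 1.0)[..., None]  # floor t to avoid amplifying noise in dense fog
    J = (I.astype(np.float32) - A) / t + A
    return np.clip(J, 0, 255).astype(np.uint8)
```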
Step S204, detecting an obstacle in the target image.
As one embodiment, step S204 includes: cropping the target image to remove the useless parts, obtaining an image to be detected; and detecting the obstacle in the image to be detected using a preset small-target detection algorithm.
Alternatively, the small-target detection algorithm may employ YOLOv5.
It will be appreciated that cropping the target image to remove the useless parts means only the needed portion of the image has to be detected, eliminating a large useless area and reducing running time.
It can be appreciated that a group of smaller anchors can be added to the small-target detection algorithm, making the search boxes smaller, which can greatly reduce detection time.
For example, as shown in fig. 4, assuming fig. 4 is a target image processed by steps S202-S203, the image to be detected is obtained by cropping the target image to remove the useless part, as shown in fig. 5; the obstacle in the image to be detected is then detected with the preset small-target detection algorithm. It can be seen that cropping first and then detecting eliminates a large useless area, shortens running time, and speeds up detection, effectively safeguarding safe driving in severe weather.
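A minimal sketch of this crop-then-detect step is shown below. YOLOv5 is loaded through torch.hub as published by ultralytics; the crop fractions, which stand in for the "useless part" removed in figs. 4-5, are illustrative assumptions:

```python
import cv2
import torch

# Load a small pretrained YOLOv5 model once (ultralytics hub entry point).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_obstacles(target_img_path: str):
    img = cv2.imread(target_img_path)
    h = img.shape[0]
    to_detect = img[int(0.2 * h):int(0.9 * h), :]     # assumed crop: keep the road-level band
    rgb = cv2.cvtColor(to_detect, cv2.COLOR_BGR2RGB)  # the hub model expects RGB input
    results = model(rgb)
    return results.pandas().xyxy[0]                   # boxes, confidences, class names

print(detect_obstacles("target.png"))
```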
It can be appreciated that, compared with traditional methods for identifying objects in severe weather, the obstacle detection method provided by the embodiment of the application uses far fewer parameters and runs faster, detecting obstacles more quickly without reducing obstacle detection accuracy.
In the third embodiment, referring to fig. 6, an obstacle detecting apparatus 400 for use in severe weather includes: the system comprises an acquisition module 410, a classification module 420, a processing module 430 and a detection module 440. The specific functions of each module are as follows:
an acquisition module 410, configured to acquire a current environmental image;
the classification module 420 is configured to classify the current environmental image based on a pre-training model, so as to obtain classification information, where the classification information includes first classification information for characterizing bad weather and second classification information for characterizing normal weather;
the processing module 430 is configured to perform secondary processing on the to-be-processed environmental image carrying the first classification information if the classification information is the first classification information, so as to obtain a processed target image; the environment image to be processed is an image carrying the first classification information in the current environment image;
and a detection module 440, configured to detect an obstacle in the target image.
It should be noted that, the specific functions of the respective modules of the obstacle detecting apparatus 400 for severe weather may be described with reference to the second embodiment, and will not be repeated here.
Further, the present embodiment also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processing device performs the steps of any one of the methods for detecting an obstacle in bad weather provided in the above embodiment.
The computer program product for the method and the device for detecting the obstacle in severe weather provided by the embodiments of the present application includes a computer readable storage medium storing program codes, and the instructions included in the program codes may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment and will not be described herein.
It should be noted that the foregoing embodiments may be implemented in whole or in part by software, hardware (such as a circuit), firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. In addition, the character "/" herein generally indicates that the associated object is an "or" relationship, but may also indicate an "and/or" relationship, and may be understood by referring to the context.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.

Claims (8)

1. A method for detecting an obstacle in severe weather, characterized in that the method comprises the following steps:
collecting a current environment image;
classifying the current environment image based on a pre-training model to obtain classification information, wherein the classification information comprises first classification information used for representing bad weather and second classification information used for representing normal weather;
if the classification information is the first classification information, wherein the first classification information comprises first sub-classification information representing that the severe weather is rain, performing secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image, the secondary processing comprising:
denoising the to-be-processed environment image carrying the first sub-classification information to obtain a first intermediate image with the fine rain removed, wherein the to-be-processed environment image is the image in the current environment image that carries the first classification information;
removing raindrops on the front windshield of the automobile from the first intermediate image to obtain a second intermediate image;
layering the second intermediate image to obtain a target image with the rainwater removed;
detecting an obstacle in the target image;
wherein the first classification information further comprises second sub-classification information representing that the severe weather is snow, and performing the secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image comprises:
performing pixel-by-pixel filtering on the to-be-processed environment image carrying the second sub-classification information to obtain a third intermediate image with thin and semi-transparent snow removed;
performing threshold level processing on the third intermediate image to obtain a black-and-white fourth intermediate image;
and removing the dot-shaped objects from the fourth intermediate image to obtain the target image with the snow removed.
2. The method of claim 1, wherein the pre-trained model comprises a 5×5 convolution layer, an activation function layer, and a 1×1 fully-connected layer.
3. The method of claim 1, wherein the pre-trained model comprises two 3×3 convolution layers, an activation function layer, and a 1×1 fully-connected layer.
4. The method of claim 1, wherein removing the raindrops on the front windshield of the automobile from the first intermediate image to obtain the second intermediate image comprises:
inputting the first intermediate image into a preset ConvLSTM model multiple times to form an attention mechanism heat map;
and extracting the features of the rain layer in the attention mechanism heat map with a preset VGG16 model to obtain the second intermediate image.
5. The method of claim 4, wherein layering the second intermediate image to obtain the target image with the rainwater removed comprises:
layering the second intermediate image according to rainwater in different directions to obtain an image layer and a rainwater layer of the second intermediate image;
and separating the image layer from the rainwater layer to obtain the target image with the rainwater layer removed.
6. The method of claim 1, wherein the first classification information comprises third sub-classification information representing that the severe weather is fog, and performing the secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image comprises:
processing the to-be-processed environment image carrying the third sub-classification information based on DehazeNet to obtain a defogged target image.
7. The method according to any one of claims 1 to 6, wherein detecting the obstacle in the target image comprises:
cropping the target image to remove useless parts, obtaining an image to be detected;
and detecting the obstacle in the image to be detected using a preset small-target detection algorithm.
8. An obstacle detection device for use in severe weather, characterized in that the device comprises:
the acquisition module is used for acquiring the current environment image;
the classification module is used for classifying the current environment image based on a pre-training model to obtain classification information, wherein the classification information comprises first classification information used for representing bad weather and second classification information used for representing normal weather;
the processing module is configured to, if the classification information is the first classification information, wherein the first classification information comprises first sub-classification information representing that the severe weather is rain, perform secondary processing on the to-be-processed environment image carrying the first classification information to obtain a processed target image, the secondary processing comprising:
denoising the to-be-processed environment image carrying the first sub-classification information to obtain a first intermediate image with the fine rain removed, wherein the to-be-processed environment image is the image in the current environment image that carries the first classification information;
removing raindrops on the front windshield of the automobile from the first intermediate image to obtain a second intermediate image;
layering the second intermediate image to obtain a target image with the rainwater removed;
a detection module for detecting an obstacle in the target image;
wherein the first classification information further comprises second sub-classification information representing that the severe weather is snow, and the processing module is further configured to:
perform pixel-by-pixel filtering on the to-be-processed environment image carrying the second sub-classification information to obtain a third intermediate image with thin and semi-transparent snow removed;
perform threshold level processing on the third intermediate image to obtain a black-and-white fourth intermediate image;
and remove the dot-shaped objects from the fourth intermediate image to obtain the target image with the snow removed.
CN202310550189.0A 2023-05-16 2023-05-16 Obstacle detection method and device for severe weather Active CN116310598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310550189.0A CN116310598B (en) 2023-05-16 2023-05-16 Obstacle detection method and device for severe weather

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310550189.0A CN116310598B (en) 2023-05-16 2023-05-16 Obstacle detection method and device for severe weather

Publications (2)

Publication Number Publication Date
CN116310598A CN116310598A (en) 2023-06-23
CN116310598B (en) 2023-08-22

Family

ID=86794406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310550189.0A Active CN116310598B (en) 2023-05-16 2023-05-16 Obstacle detection method and device for severe weather

Country Status (1)

Country Link
CN (1) CN116310598B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468959B (en) * 2023-06-15 2023-09-08 清软微视(杭州)科技有限公司 Industrial defect classification method, device, electronic equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729929B (en) * 2017-09-30 2021-03-19 百度在线网络技术(北京)有限公司 Method and device for acquiring information

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537622A (en) * 2014-12-31 2015-04-22 中国科学院深圳先进技术研究院 Method and system for removing raindrop influence in single image
CN104834912A (en) * 2015-05-14 2015-08-12 北京邮电大学 Weather identification method and apparatus based on image information detection
CN113128347A (en) * 2021-03-24 2021-07-16 北京中科慧眼科技有限公司 RGB-D fusion information based obstacle target classification method and system and intelligent terminal
WO2022263908A1 (en) * 2021-06-14 2022-12-22 Sensetime International Pte. Ltd. Methods and apparatuses for determining object classification
CN114140346A (en) * 2021-11-15 2022-03-04 深圳集智数字科技有限公司 Image processing method and device
CN115376108A (en) * 2022-09-07 2022-11-22 南京邮电大学 Obstacle detection method and device in complex weather

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Single-image rain removal method based on attention generative adversarial network; Zhu Deli et al.; Computer Engineering and Applications; pp. 215-222 *

Also Published As

Publication number Publication date
CN116310598A (en) 2023-06-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant