WO2023124442A1 - Method and apparatus for detecting accumulated water depth - Google Patents

Method and apparatus for detecting accumulated water depth

Info

Publication number
WO2023124442A1
WO2023124442A1 (PCT/CN2022/126492)
Authority
WO
WIPO (PCT)
Prior art keywords
depth
area
target sub-area
image
detected
Prior art date
Application number
PCT/CN2022/126492
Other languages
English (en)
French (fr)
Inventor
石瑞姣
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Publication of WO2023124442A1 publication Critical patent/WO2023124442A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Definitions

  • the present disclosure relates to the technical field of data processing, in particular to a method and device for detecting the depth of accumulated water.
  • Accumulated water on the road is an important factor affecting travel and driving safety; in particular, when the water depth is unknown, driving into the water can endanger life and damage property. It is therefore necessary to detect the depth of water accumulated on the road surface.
  • common approaches to road surface water depth detection include the following:
  • the first is to set up a water level gauge on road sections prone to water accumulation (such as low-lying sections or underpasses) and to determine the water depth by manually reading the scale of the gauge.
  • the second is to install a water accumulation detector on the vehicle; the detector measures the water depth of the road section as the vehicle travels.
  • a method for detecting the depth of accumulated water includes: acquiring, by a photographing device, a first color image and a first depth image of an area to be detected at a first moment, the first depth image recording the depth value of each location in the area to be detected at the first moment; performing water accumulation detection on the area to be detected according to the first color image to obtain the position information of a target sub-area that is in a water accumulation state; and determining the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and a pre-stored second depth image.
  • the second depth image is used to record the depth value of each location in the area to be detected at a second moment, the second moment being a moment at which the area to be detected is in a non-flooded state.
  • determining the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and the pre-stored second depth image includes: determining a first depth value according to the position information of the target sub-area and the first depth image, the first depth value being the depth value of the water surface of the target sub-area at the first moment; determining a second depth value according to the position information of the target sub-area and the second depth image, the second depth value being the depth value of the lowest point of the target sub-area at the second moment; and taking the difference between the second depth value and the first depth value as the water accumulation depth of the target sub-area.
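This lowest-point variant can be sketched as follows (a minimal sketch assuming NumPy arrays for the depth images and a boolean mask for the target sub-area; taking the mean of the flooded pixels as the water-surface depth is an assumption, justified by a still water surface being roughly planar, and is not fixed by the disclosure):

```python
import numpy as np

def ponding_depth(mask, depth_flooded, depth_dry):
    """Water accumulation depth of the target sub-area.

    mask          -- boolean array marking the target sub-area
    depth_flooded -- depth image at the first moment (area flooded)
    depth_dry     -- pre-stored depth image at the second moment (area dry)
    Depth values are distances from the photographing device, so the
    lowest ground point has the LARGEST depth value.
    """
    first_depth = depth_flooded[mask].mean()   # depth of the water surface
    second_depth = depth_dry[mask].max()       # depth of the lowest point
    return second_depth - first_depth
```

The camera looks down at the road, so the water surface (closer to the camera) has a smaller depth value than the dry road bottom, and the difference is the water depth.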
  • determining the second depth value according to the position information of the target sub-area and the second depth image includes: determining the depth values of each location of the target sub-area at the second moment according to the position information of the target sub-area and the second depth image; and selecting, from those depth values, the largest depth value as the second depth value.
  • alternatively, determining the second depth value according to the position information of the target sub-area and the second depth image includes: determining the three-dimensional coordinates of each location of the target sub-area at the second moment according to the position information of the target sub-area and the second depth image; performing surface fitting on those three-dimensional coordinates to obtain the surface corresponding to the target sub-area; and taking the depth value of the lowest point of the fitted surface as the second depth value.
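The surface-fitting variant could be sketched as follows; the disclosure does not fix a surface model, so the quadratic form and the grid sampling below are illustrative assumptions:

```python
import numpy as np

def lowest_point_depth(points):
    """Depth of the lowest point of a surface fitted to dry-ground samples.

    points -- (N, 3) array of (x, y, depth) coordinates of the target
    sub-area at the second (dry) moment.  A quadratic surface model is
    assumed here; the disclosure does not specify one.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    # sample the fitted surface and take the largest depth value,
    # which corresponds to the lowest ground point
    X, Y = np.meshgrid(np.linspace(x.min(), x.max(), 50),
                       np.linspace(y.min(), y.max(), 50))
    Z = (coeffs[0] * X**2 + coeffs[1] * Y**2 + coeffs[2] * X * Y
         + coeffs[3] * X + coeffs[4] * Y + coeffs[5])
    return Z.max()
```

Fitting a smooth surface makes the estimate robust to isolated noisy depth pixels, at the cost of assuming the road profile is well approximated by the chosen model.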
  • determining the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and the pre-stored second depth image may also include: determining a first average depth value according to the position information of the target sub-area and the first depth image, the first average depth value being the average of the depth values of each location in the target sub-area at the first moment; determining a second average depth value according to the position information of the target sub-area and the second depth image, the second average depth value being the average of the depth values of each location in the target sub-area at the second moment; and taking the difference between the second average depth value and the first average depth value as the water accumulation depth of the target sub-area.
  • performing water accumulation detection on the area to be detected and obtaining the position information of the target sub-area in a water accumulation state includes: performing road segmentation on the area to be detected according to the first color image to obtain the position information of the vehicle driving area; and performing water accumulation detection on the vehicle driving area according to the position information of the vehicle driving area and the first color image, to obtain the position information of the target sub-area in a water accumulation state within the vehicle driving area.
  • before determining the water accumulation depth of the target sub-area, the above method further includes: performing road segmentation on the area to be detected according to the first color image to obtain the position information of the vehicle driving area; and judging, according to the position information of the vehicle driving area and the position information of the target sub-area, whether the target sub-area is located in the vehicle driving area.
  • determining the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and the pre-stored second depth image then includes: if the target sub-area is located in the vehicle driving area, determining the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and the pre-stored second depth image.
  • the above method further includes: acquiring, by the photographing device, a third color image and a third depth image of the area to be detected at a third moment, the third moment being after the second moment; inputting the third color image into a weather category recognition model to determine the weather category of the third color image, the weather category being rainy or non-rainy; when the weather category of the third color image is non-rainy, determining the similarity between the third color image and a second color image, the second color image being a color image of the area to be detected captured by the photographing device at the second moment; and when the similarity between the third color image and the second color image is less than a preset threshold, updating the second depth image with the third depth image.
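A sketch of this reference-update logic, assuming NumPy images; the function name, the similarity measure (normalised cross-correlation) and the threshold value are illustrative assumptions, as the disclosure does not fix them:

```python
import numpy as np

def maybe_update_reference(third_color, second_color, third_depth, second_depth,
                           is_rainy, threshold=0.9):
    """Return the depth image to keep as the dry reference.

    The second depth image is replaced by the third one only on a
    non-rainy day AND when the scene has changed enough that the colour
    similarity falls below the threshold.
    """
    if is_rainy:
        return second_depth
    a = third_color.astype(float).ravel()
    b = second_color.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    similarity = float(a @ b / denom) if denom else 1.0
    return third_depth if similarity < threshold else second_depth
```

Rainy frames are skipped so that standing water is never baked into the dry reference; a low colour similarity on a dry day suggests the road surface itself has changed and the stored depth reference is stale.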
  • the above method further includes: sending the water accumulation depth of the target sub-area to the terminal device.
  • the above method also includes: comparing the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area at the first moment; if the maximum wading depth supported by the vehicle is greater than the water accumulation depth of the target sub-area at the first moment, sending first prompt information to the vehicle terminal, the first prompt information indicating that the vehicle can safely pass through the target sub-area; or, if the maximum wading depth supported by the vehicle is less than or equal to the water accumulation depth of the target sub-area at the first moment, sending second prompt information to the vehicle terminal, the second prompt information warning that the target sub-area is dangerous.
  • the above method further includes: performing lane identification on the area to be detected to determine the position information of each lane; determining, according to the position information of the target sub-area and of each lane, the lanes affected by the target sub-area; and sending prompt information to the vehicle terminal according to the lanes affected by the target sub-area and the water accumulation depth of the target sub-area.
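A minimal way to determine which lanes a flooded sub-area touches is rectangle overlap; the bounding-box representation and the helper below are illustrative assumptions, not part of the disclosure:

```python
def affected_lanes(target_box, lane_boxes):
    """Lanes whose area overlaps the flooded target sub-area.

    target_box -- (x_min, y_min, x_max, y_max) of the target sub-area
    lane_boxes -- {lane_id: (x_min, y_min, x_max, y_max)} per lane
    Boxes are in image coordinates; open-interval overlap is used so a
    shared edge does not count as overlap.
    """
    tx0, ty0, tx1, ty1 = target_box
    hit = []
    for lane_id, (x0, y0, x1, y1) in lane_boxes.items():
        if tx0 < x1 and x0 < tx1 and ty0 < y1 and y0 < ty1:
            hit.append(lane_id)
    return sorted(hit)
```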
  • sending prompt information to the vehicle terminal according to the lanes affected by the target sub-area and the water accumulation depth of the target sub-area includes: comparing the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area; if the maximum wading depth supported by the vehicle is greater than the water accumulation depth of the target sub-area, sending first prompt information to the vehicle terminal, the first prompt information indicating that the vehicle can safely pass through the target sub-area; if the maximum wading depth supported by the vehicle is less than or equal to the water accumulation depth of the target sub-area, judging, according to the lanes affected by the target sub-area, whether there is a lane in the area to be detected through which vehicles can pass; if there is such a lane, sending third prompt information to the vehicle terminal, the third prompt information indicating the lane through which the vehicle can pass; and if there is no such lane, sending fourth prompt information to the vehicle terminal, the fourth prompt information prompting the user to modify the driving route.
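The prompt-selection flow above can be sketched as a simple decision function (the message wording, numbering tuple and the `passable_lanes` representation are illustrative assumptions):

```python
def route_advice(max_wading_depth, water_depth, passable_lanes):
    """Choose which prompt to send to the vehicle terminal.

    max_wading_depth -- maximum wading depth supported by the vehicle (m)
    water_depth      -- detected water depth of the target sub-area (m)
    passable_lanes   -- lanes in the area NOT affected by the sub-area
    Returns (prompt_number, message); numbering follows the description
    above, the wording is illustrative only.
    """
    if max_wading_depth > water_depth:
        return (1, "the vehicle can safely pass through the target sub-area")
    if passable_lanes:
        return (3, f"pass via lane(s): {sorted(passable_lanes)}")
    return (4, "please modify the driving route")
```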
  • acquiring the first color image and the first depth image of the area to be detected at the first moment by the photographing device includes: acquiring the first color image and the first depth image of the area to be detected by the photographing device when a preset condition is met.
  • the above preset condition further includes: the distance between the vehicle terminal and the photographing device is smaller than the preset distance.
  • in another aspect, a water accumulation depth detection device includes: an image acquisition module, configured to obtain a first color image and a first depth image of the area to be detected at the first moment, the first depth image recording the depth value of each location in the area to be detected at the first moment; a water accumulation detection module, configured to perform water accumulation detection on the area to be detected according to the first color image and obtain the position information of the target sub-area in a water accumulation state; and a depth detection module, configured to determine the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and a pre-stored second depth image.
  • the second depth image is used to record the depth value of each location in the area to be detected at the second moment, the second moment being a moment at which the area to be detected is in a non-flooded state.
  • the above depth detection module is specifically configured to: determine the first depth value according to the position information of the target sub-area and the first depth image, the first depth value being the depth value of the water surface of the target sub-area at the first moment; determine the second depth value according to the position information of the target sub-area and the second depth image, the second depth value being the depth value of the lowest point of the target sub-area at the second moment; and take the difference between the second depth value and the first depth value as the water accumulation depth of the target sub-area.
  • the above depth detection module is specifically configured to: determine the depth value of each location of the target sub-area at the second moment according to the position information of the target sub-area and the second depth image; and select, from those depth values, the largest depth value as the second depth value.
  • the above depth detection module is specifically configured to: determine the three-dimensional coordinates of each location of the target sub-area at the second moment according to the position information of the target sub-area and the second depth image; perform surface fitting on those three-dimensional coordinates to obtain the surface corresponding to the target sub-area; and take the depth value of the lowest point of the fitted surface as the second depth value.
  • the above depth detection module is specifically configured to: determine a first average depth value according to the position information of the target sub-area and the first depth image, the first average depth value being the average of the depth values of each location of the target sub-area at the first moment; determine a second average depth value according to the position information of the target sub-area and the second depth image, the second average depth value being the average of the depth values of each location of the target sub-area at the second moment; and take the difference between the second average depth value and the first average depth value as the water accumulation depth of the target sub-area.
  • the above water accumulation depth detection device further includes an image processing module, configured to perform road segmentation on the area to be detected according to the first color image and obtain the position information of the vehicle driving area.
  • the above water accumulation detection module is specifically configured to perform water accumulation detection on the vehicle driving area according to the position information of the vehicle driving area and the first color image, and obtain the position information of the target sub-area in a water accumulation state within the vehicle driving area.
  • alternatively, the above water accumulation depth detection device further includes an image processing module, configured to perform road segmentation on the area to be detected according to the first color image and obtain the position information of the vehicle driving area.
  • the water accumulation detection module is further configured to judge, according to the position information of the vehicle driving area and the position information of the target sub-area, whether the target sub-area is located in the vehicle driving area; the above depth detection module is specifically configured to, if the target sub-area is located in the vehicle driving area, determine the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and the pre-stored second depth image.
  • the above image acquisition module is further configured to acquire a third color image and a third depth image of the area to be detected at a third moment, the third moment being after the second moment; the above image processing module is further configured to: input the third color image into the weather category recognition model to determine the weather category of the third color image, the weather category being rainy or non-rainy; when the weather category of the third color image is non-rainy, determine the similarity between the third color image and the second color image, the second color image being a color image of the area to be detected captured at the second moment; and when the similarity between the third color image and the second color image is less than the preset threshold, update the second depth image with the third depth image.
  • the above-mentioned apparatus for detecting the depth of accumulated water further includes: a communication module; the communication module is configured to send the depth of accumulated water in the target sub-area to the terminal device.
  • the above water accumulation depth detection device further includes a data processing module and a communication module; the data processing module is configured to compare the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area at the first moment; the communication module is configured to send first prompt information to the vehicle terminal if the maximum wading depth supported by the vehicle is greater than the water accumulation depth of the target sub-area at the first moment, the first prompt information indicating that the vehicle can safely pass through the target sub-area; or, if the maximum wading depth supported by the vehicle is less than or equal to the water accumulation depth of the target sub-area at the first moment, to send second prompt information to the vehicle terminal, the second prompt information warning that the target sub-area is dangerous.
  • the above-mentioned water accumulation depth detection device further includes: a data processing module and a communication module.
  • the above data processing module is configured to: perform lane identification on the area to be detected and determine the position information of each lane; determine, according to the position information of the target sub-area and of each lane, the lanes affected by the target sub-area; and generate prompt information according to the lanes affected by the target sub-area and the water accumulation depth of the target sub-area.
  • the above-mentioned communication module is used for sending prompt information to the vehicle terminal.
  • the above data processing module is specifically configured to: compare the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area; if the maximum wading depth supported by the vehicle is greater than the water accumulation depth of the target sub-area, generate first prompt information indicating that the vehicle can safely pass through the target sub-area; if the maximum wading depth supported by the vehicle is less than or equal to the water accumulation depth of the target sub-area, judge, according to the lanes affected by the target sub-area, whether there is a lane in the area to be detected through which vehicles can pass; if there is such a lane, generate third prompt information indicating the lane through which the vehicle can pass; and if there is no such lane, generate fourth prompt information prompting the user to modify the driving route.
  • the above image acquisition module is further configured to acquire the first color image and the first depth image of the area to be detected at the first moment when a preset condition is met; wherein the preset condition includes: the area to be detected is located on the driving route corresponding to the vehicle terminal.
  • the above preset condition further includes: the distance between the vehicle terminal and the photographing device is smaller than the preset distance.
  • a device for detecting the depth of accumulated water includes a memory and a processor, the memory being coupled to the processor; the memory is configured to store computer program code, the computer program code including computer instructions.
  • when the processor executes the computer instructions, the device is caused to execute the method for detecting the depth of accumulated water described in any of the above embodiments.
  • a non-transitory computer-readable storage medium stores computer program instructions which, when run on a processor, cause the processor to execute one or more steps of the method for detecting the depth of accumulated water described in any of the above embodiments.
  • a computer program product includes computer program instructions which, when executed on a computer, cause the computer to perform one or more steps of the method for detecting the depth of accumulated water described in any of the above embodiments.
  • a computer program, when executed on a computer, causes the computer to execute one or more steps of the method for detecting the depth of accumulated water described in any of the above embodiments.
  • Fig. 1 is a structural diagram of a water accumulation depth detection system according to some embodiments.
  • Fig. 2 is an imaging principle diagram of a TOF camera according to some embodiments.
  • Fig. 3 is a schematic diagram of a depth value acquired by a TOF camera according to some embodiments.
  • Fig. 4 is another structural diagram of a water accumulation depth detection system according to some embodiments.
  • Fig. 5 is a block diagram of a computing device according to some embodiments.
  • Fig. 6 is a first flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 7 is a sample diagram of a depth image with holes and noise points according to some embodiments.
  • Fig. 8 is a structural diagram of a depth image restoration system according to some embodiments.
  • Fig. 9 is a first application scene diagram of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 10 is an application scene diagram of a water accumulation detection model according to some embodiments.
  • Fig. 11 is a structural diagram of a water accumulation detection model according to some embodiments.
  • Fig. 12 is a second flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 13 is a second application scene diagram of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 14 is a third flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 15 is a fourth flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 16 is a schematic diagram of a Deeplab v3+ semantic segmentation model according to some embodiments.
  • Fig. 17 is a fifth flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 18 is a location map of target sub-areas and vehicle driving areas according to some embodiments.
  • Fig. 19 is a flowchart of an image updating method according to some embodiments.
  • Fig. 20 is a block diagram of a residual network (ResNet) according to some embodiments.
  • Fig. 21 is a sixth flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 22 is a seventh flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 23 is an eighth flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 24 is a ninth flowchart of a method for detecting the depth of accumulated water according to some embodiments.
  • Fig. 25 is a first position map of target sub-areas and lanes according to some embodiments.
  • Fig. 26 is a second position map of target sub-areas and lanes according to some embodiments.
  • Fig. 27 is a structural diagram of a water depth detection device according to some embodiments.
  • the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments of the present disclosure, unless otherwise specified, "plurality" means two or more.
  • "at least one of A, B and C" has the same meaning as "at least one of A, B or C", and both include the following combinations of A, B and C: A only, B only, C only, a combination of A and B, a combination of A and C, a combination of B and C, and a combination of A, B and C.
  • "A and/or B" includes the following three combinations: A only, B only, and a combination of A and B.
  • the term "if" is optionally interpreted to mean "when", "upon", "in response to determining" or "in response to detecting", depending on the context.
  • the phrases "if it is determined that" or "if [the stated condition or event] is detected" are optionally construed to mean "when determining", "in response to determining", "upon detection of [the stated condition or event]" or "in response to detection of [the stated condition or event]", depending on the context.
  • in the related art, the depth of accumulated water is detected by putting related equipment (such as a water level gauge or an accumulated water detector) into the accumulated water. When the water is deep, this approach carries a certain safety risk, because the related equipment must come into contact with the accumulated water.
  • an embodiment of the present disclosure provides a method for detecting the depth of accumulated water.
  • the method uses a photographing device having a depth camera to obtain depth information when an area is in a non-water-accumulated state and depth information when the area is in a water-accumulated state, and determines the change in the depth information of the area before and after water accumulation, so as to determine the water accumulation depth of the area. The method provided by the embodiments of the present disclosure therefore does not need to contact the accumulated water, avoiding the risk of wading when the user does not know the depth of the water.
  • in the related art, the maximum depth of accumulated water may not be accurately determined because the water level gauge or accumulated water detector is placed inaccurately (that is, not at the lowest point of the accumulated water area).
  • in contrast, the method for detecting the depth of accumulated water provided by the embodiments of the present disclosure can accurately determine the maximum depth of accumulated water, because it can accurately determine the change in the depth information of each point (for example, the lowest point) in an area before and after water accumulation.
  • the method for detecting the depth of accumulated water provided by the embodiments of the present disclosure may be applied to scenarios such as vehicle assisted driving, vehicle automatic driving and pedestrian travel navigation, without being limited thereto.
  • the server can send the water depth of the water accumulation area to the vehicle terminal.
  • the vehicle terminal can send an alarm message to remind the user to avoid the accumulated water.
  • the vehicle terminal can re-plan the driving route according to the water accumulation in each area on the current driving route.
  • the server can send the accumulated water depth of the waterlogged area to the terminal device.
  • the user can open the map application on his terminal device, and check the water accumulation depth of the water accumulation area in the target area (for example, the area near the user) on the interface of the map application.
  • an embodiment of the present disclosure provides a schematic diagram of a system for detecting the depth of accumulated water.
  • the water depth detection system includes: a server 10 and a photographing device 20 .
  • the server 10 and the photographing device 20 may be connected in a wired or wireless manner.
  • the photographing device 20 can be set near the area to be detected.
  • the region to be detected is a vehicle traveling section
  • the photographing device may be installed on a street lamp, a traffic signal light or a tree near the vehicle traveling section.
  • Embodiments of the present disclosure do not limit the specific installation manner and specific installation location of the photographing device 20 .
  • the photographing device 20 can be used to photograph a color image and a depth image of the region to be detected.
  • a depth image refers to an image in which the depth values (distances in the vertical direction) from the photographing device 20 to each point in the scene are taken as pixel values.
  • the photographing device may adopt a color camera to capture color images.
  • the color camera may be an RGB camera.
• the RGB camera adopts the RGB color mode, obtaining a wide range of colors through changes in, and superposition of, the three color channels red (R), green (G), and blue (B).
• generally, an RGB camera provides the three basic color components over three different cables, using three independent charge coupled device (CCD) sensors to obtain the three color signals.
  • the shooting device may use a depth camera to capture depth images.
  • the depth camera may be a time of flight (time of flight, TOF) camera.
• the TOF camera adopts TOF technology. As shown in Figure 2, its imaging principle is as follows: a laser light source emits modulated pulsed infrared light, which is reflected when it encounters an object; the time difference or phase difference between emission and reflection is converted into the distance between the TOF camera and the photographed object, and the depth value of each point in the scene is then obtained from that distance.
• when detecting the depth value of a point M in the scene, a three-dimensional Cartesian coordinate system is first established with the shooting device 20 as the origin, the shooting direction of the shooting device 20 as the Z axis, and the two axes of the plane perpendicular to the shooting direction as the X axis and the Y axis. The distance D between the photographing device 20 and point M is then calculated from the time difference between emitting the light source and receiving the light reflected back from point M.
• since the photographing device 20 also collects the angle information of point M when shooting, the depth value of point M can be obtained from the angle θ between the line connecting point M with the photographing device 20 and the Z axis, together with the distance D between the photographing device 20 and point M.
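• the distance and depth computation described above can be sketched as follows (an illustrative sketch, not part of the disclosed embodiments; the function names and the 20 ns round-trip example are assumptions):

```python
import math

SPEED_OF_LIGHT = 3.0e8  # m/s, approximate

def tof_distance(round_trip_time_s: float) -> float:
    """Line-of-sight distance D from camera to point M: the light
    travels out and back, so the one-way distance is c * Δt / 2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def depth_from_distance(distance_m: float, angle_rad: float) -> float:
    """Depth (Z-axis component) of point M, given the line-of-sight
    distance D and the angle θ between that line and the Z axis."""
    return distance_m * math.cos(angle_rad)

# Example: the pulse returns after 20 ns from a point 30° off the Z axis.
d = tof_distance(20e-9)                      # 3.0 m line-of-sight distance
z = depth_from_distance(d, math.radians(30.0))
```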
  • the server 10 is used to obtain the image captured by the photographing device 20 , and based on the image captured by the photographing device 20 , determine the water accumulation depth of the sub-area in the water accumulation state in the area to be detected.
• the server 10 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content distribution networks, and big data services.
  • the water accumulation depth detection system may further include a terminal device 30 .
  • the terminal device 30 and the server 10 may be connected in a wired or wireless manner.
  • the terminal device 30 is used to obtain information related to the detection of the depth of accumulated water through the server 10, and can display the information related to the detection of the depth of accumulated water to the user in the form of voice, text, and the like.
• the terminal device 30 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an augmented reality (AR) or virtual reality (VR) device, or the like.
  • the terminal device 30 may be a vehicle terminal. Vehicle terminals are front-end devices for vehicle communication and management, and can be installed in various vehicles.
  • the server 10 can be integrated with the camera 20 .
  • the server 10 can be integrated with the terminal device 30 .
  • the above-mentioned server 10 and terminal device 30 have similar basic hardware structures, and both include elements included in the computing device shown in FIG. 5 .
  • the following takes the computing device shown in FIG. 5 as an example to introduce the hardware structure of the server 10 and the terminal device 30 .
  • the computing device may include a processor 41 , a memory 42 , a communication interface 43 , and a bus 44 .
  • the processor 41 , the memory 42 and the communication interface 43 may be connected through a bus 44 .
  • the processor 41 is the control center of the computing device, and may be one processor, or a general term for multiple processing elements.
  • the processor 41 may be a general-purpose CPU, or other general-purpose processors.
  • the general-purpose processor may be a microprocessor or any conventional processor.
  • the processor 41 may include one or more CPUs, such as CPU 0 and CPU 1 shown in FIG. 5 .
• the memory 42 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 42 may exist independently of the processor 41, and the memory 42 may be connected to the processor 41 through the bus 44 to store instructions or program codes.
  • the processor 41 invokes and executes the instructions or program codes stored in the memory 42, it can realize the method for detecting the depth of accumulated water provided by the following embodiments of the present disclosure.
  • the software programs stored in the memory 42 are different, so the functions realized by the server 10 and the terminal device 30 are different.
  • the functions performed by each device will be described in conjunction with the flow chart below.
  • the memory 42 may also be integrated with the processor 41 .
  • the communication interface 43 is used to connect the computing device to other devices through a communication network, and the communication network may be Ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN) and the like.
  • the communication interface 43 may include a receiving unit for receiving data, and a sending unit for sending data.
  • the bus 44 may be an industry standard architecture (industry standard architecture, ISA) bus, a peripheral component interconnect (PCI) bus, or an extended industry standard architecture (extended industry standard architecture, EISA) bus, etc.
• the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 5, but this does not mean that there is only one bus or only one type of bus.
  • FIG. 5 does not constitute a limitation to the computing device.
  • the computing device may include more or less components than shown in the illustration, or combine some components, or a different arrangement of components.
  • the ponded water depth detection method provided by the embodiment of the present disclosure can be executed by a ponded water depth detection device, and the ponded water depth detection device can be the server 10 in the ponded water depth detection system shown in FIG. 1 , or the processor of the server 10 .
• the following takes the case where the server serves as the water depth detection device as an example for illustration.
  • the embodiment of the present disclosure provides a method for detecting the depth of accumulated water, which includes the following steps:
  • the server acquires a first color image and a first depth image of a region to be detected at a first moment through a photographing device.
  • the area to be detected is an area that needs to detect the depth of accumulated water.
  • the area to be detected may be a low-lying road section, an underpass bridge or a tunnel, etc. where water accumulation is likely to occur.
  • the area to be detected can be determined by the server.
• for example, K areas in a city are equipped with corresponding shooting devices, and the server may regard these K areas as areas to be detected, where K is a positive integer.
  • the region to be detected can be directly or indirectly determined by the user.
• for example, suppose K areas in a city are equipped with corresponding shooting devices, and M of those K areas are located in the region where the vehicle driven by the user is traveling.
  • the server can select these M areas as areas to be detected. In this way, the server does not need to detect the depth of water accumulation in other areas other than the M areas, so as to save computing resources.
  • the first depth image is used to record the depth values of various locations in the area to be detected at the first moment.
  • the depth value of a location is used to reflect the vertical distance between the location and the camera.
  • the first color image is used to reflect the real topography of the area to be detected at the first moment.
  • the depth camera and the color camera are aligned. That is, for the target point in the region to be detected, the coordinates of the pixel point corresponding to the target point in the first depth image are the same as the coordinates of the pixel point corresponding to the target point in the first color image.
  • the target location is any location in the area to be detected.
• in practice, depth information may be lost when the depth camera collects the depth image, resulting in holes and noise points in the depth image.
• holes appear as black regions within white areas of the depth image, and noise points appear as isolated pixels or pixel blocks that produce a strong, jarring visual effect. For example, in the depth image, the black area within a white region is a hole, and the pixels in the white area that disrupt the visual effect are noise points.
  • restoration processing can be performed on the original depth image captured by the photographing device, so as to eliminate holes and noise points in the original depth image.
  • the original depth image is input into a pre-established image inpainting model for inpainting, and an inpainted depth image is obtained.
  • the image inpainting model can be implemented based on a U-Net network, which is not limited in the present disclosure.
  • U-Net is an algorithm that uses a fully convolutional network for semantic segmentation.
  • the U-Net network is a fully convolutional network.
• the structure of the U-Net network is an encoder-decoder structure: the encoder in the first half uses down-sampling to extract the hole features or noise features in the depth image, and the decoder in the second half uses up-sampling to segment the hole area or noise area of the depth image.
• FIG. 8 is a schematic diagram of an image inpainting model based on a U-Net network provided by an embodiment of the present disclosure. As shown in Figure 8, taking the repair of holes in the depth image as an example, the operation steps of the image inpainting model are briefly described below:
• step a1: input the original depth image into the image inpainting model for hole detection, and when holes are detected in the original depth image, extract the mask of the hole area in the depth image.
• step a2: concatenate the original depth image with the mask of the hole area, and input the result into the U-Net network.
• step a3: the U-Net network extracts hole features from the input, segments the hole area of the depth image, and multiplies the result point-by-point with the inverted mask of the hole area to obtain the hole-area depth image.
• step a4: perform a hole-filling operation on the hole-area depth image, and add the filled depth image point-by-point to the original depth image to obtain the repaired depth image.
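• the point-by-point mask arithmetic of steps a1 to a4 can be sketched as follows, with the U-Net itself replaced by a caller-supplied stand-in (a sketch only; the zero-valued-hole convention, the mean-based fill, and all names are assumptions for illustration):

```python
import numpy as np

def repair_holes(depth: np.ndarray, segment_hole_area) -> np.ndarray:
    """Sketch of steps a1-a4: detect zero-valued holes, let a
    (user-supplied) segmentation model isolate the hole region, fill
    the holes, and add the filled patch back onto the original image."""
    hole_mask = (depth == 0).astype(depth.dtype)       # a1: hole mask
    hole_region = segment_hole_area(depth, hole_mask)  # a2/a3: network output
    # a4: fill each hole with the mean of the valid (non-hole) depths,
    # then add the filled patch to the original image point-by-point.
    fill_value = depth[depth > 0].mean()
    filled_patch = hole_region * fill_value
    return depth + filled_patch

# Toy stand-in for the U-Net: assume it returns the hole mask itself.
depth = np.array([[2.0, 2.0], [0.0, 2.0]])   # one hole at (1, 0)
repaired = repair_holes(depth, lambda d, m: m)
# repaired -> [[2.0, 2.0], [2.0, 2.0]]
```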
  • the hole area or noise area can be repaired in a targeted manner.
  • the above-mentioned first depth image may be a depth image after restoration processing, so as to ensure that the depth value obtained from the first depth image is accurate.
  • the server executes the method for detecting the depth of accumulated water provided in the embodiments of the present disclosure after enabling the function of detecting the accumulated water depth.
  • the server if the server disables the function of detecting the depth of accumulated water, the server does not execute the method for detecting the depth of accumulated water provided in the embodiment of the present disclosure.
  • the server enables the function of detecting the depth of accumulated water by default.
  • the server judges whether to enable the function of detecting the depth of accumulated water according to the current weather conditions. For example, when the weather in the current city is rainy, the server turns on the function of detecting the depth of ponded water; or, when the weather in the current city is not rainy, the server turns off the function of detecting the depth of ponded water.
  • the server may determine to enable/disable the water depth detection function according to an instruction of the terminal device.
  • the user can perform the first operation on the vehicle terminal to instruct to turn on the water depth detection function when the user is driving the vehicle.
  • the vehicle terminal sends to the server a first instruction for instructing to enable the function of detecting the depth of accumulated water; the server activates the function of detecting the depth of accumulated water according to the first instruction.
  • the user may perform a second operation on the vehicle terminal to instruct to turn off the function of detecting the depth of accumulated water.
  • the vehicle terminal sends to the server a second instruction for instructing to turn off the water depth detection function; the server turns off the water depth detection function according to the second instruction.
  • the server acquires the first color image and the first depth image of the area to be detected at the first moment through the photographing device.
  • the above preset conditions include: the area to be detected is located on the driving route corresponding to the vehicle terminal.
  • the server only needs to detect the depth of accumulated water in each area on the driving route of the vehicle terminal, and does not need to detect the depth of accumulated water in areas not on the driving route, which is beneficial to reduce the calculation load of the server.
  • the above preset condition may also include that the distance between the vehicle terminal and the photographing device corresponding to the area to be detected is smaller than the preset distance.
  • the server detects that the driving route of the vehicle terminal at this time is: "go straight ahead for 1000 meters on the road ahead", and the area to be detected is located 300 meters in front of the vehicle terminal , then at this time, the server acquires the first color image and the first depth image of the area to be detected at the first moment through the photographing device.
• in this way, the trigger condition for the server to detect the water accumulation depth is determined, so that the vehicle terminal can learn the water accumulation depth of an area before reaching it, avoiding the risk of wading.
• the server acquiring the first color image and the first depth image of the region to be detected at the first moment through the photographing device may be specifically implemented as follows: the server sends a photographing instruction to the photographing device, the photographing instruction instructing the photographing device to photograph the depth image and the color image of the area to be detected; after that, the server receives the first color image and the first depth image from the photographing device.
  • the first color image and the first depth image may be photographed by the photographing device before receiving the photographing instruction, or may be photographed by the photographing device after receiving the photographing instruction.
  • the server detects water accumulation in the area to be detected according to the first color image, and acquires position information of a target sub-area in a state of water accumulation in the area to be detected.
  • the position information of the target sub-region is used to indicate the coordinates of the pixel region corresponding to the target sub-region in the first color image.
  • the server may input the first color image into the water accumulation area identification model to obtain the position information of the target sub-area in the water accumulation state in the area to be detected.
  • the shape and coordinates of the pixel region corresponding to the target subregion in the first color image can be determined.
• for example, the shape of the pixel region corresponding to the target sub-region in the first color image is rectangular, and the coordinates of the pixel region are (Xmin, Ymin, Xmax, Ymax), where Xmin represents the minimum abscissa of the pixel region, Xmax the maximum abscissa, Ymin the minimum ordinate, and Ymax the maximum ordinate.
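• extracting the pixel region given such a bounding box can be illustrated with a minimal sketch (the convention that x indexes columns and y indexes rows, and the inclusive box bounds, are assumptions):

```python
import numpy as np

def crop_region(image: np.ndarray, box: tuple) -> np.ndarray:
    """Extract the pixel region of a detected sub-area from an image,
    given its bounding box (Xmin, Ymin, Xmax, Ymax) with inclusive
    bounds; x indexes columns, y indexes rows."""
    xmin, ymin, xmax, ymax = box
    return image[ymin:ymax + 1, xmin:xmax + 1]

img = np.arange(36).reshape(6, 6)
patch = crop_region(img, (1, 2, 3, 4))  # columns 1..3, rows 2..4
# patch.shape -> (3, 3)
```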
  • the server may first adjust the resolution of the first color image to a preset resolution, so as to meet the processing requirements of the water accumulation area identification model.
• the water accumulation area recognition model can be the YOLOv5-m network of the deep-neural-network-based object recognition and localization algorithm YOLO (you only look once).
  • the YOLOv5-m network uses a separate convolutional neural network (CNN) model to achieve end-to-end target detection.
  • the YOLOv5-m network structure includes: backbone network (Backbone), neck (Neck) and prediction part (Prediction).
  • the backbone network is the feature extraction network, including slice structure (Focus), convolution module (Conv), bottleneck layer (C3) and spatial pyramid pooling (SPP);
• the neck uses a feature-pyramid-like structure to fuse high-level features with low-level features, enhancing the feature representation;
  • the prediction part adopts the form of multi-scale prediction, which makes the network suitable for target detection of different scales, and has stronger generalization ability.
• step b1: adjust the resolution of the first color image to the preset resolution of 640*640, and input it into the YOLOv5-m network for identifying waterlogged areas.
• step b2: through the backbone network of the YOLOv5-m network, extract the position and pixel value of each pixel from the first color image, and extract the feature information of waterlogged areas from the first color image by analyzing the relationship between each pixel and its surrounding pixels.
• step b3: through the neck of the YOLOv5-m network, perform feature fusion on the high-level features and low-level features of the waterlogged area to enhance its feature information.
• among them, low-level features have the advantages of high resolution and rich location and detail information, but undergo fewer convolution operations, carry less semantics, and contain more noise; high-level features carry rich semantics but have low resolution and poor perception of detail. Therefore, fusing the low-level features with the high-level features integrates their respective advantages and enhances the feature information of the waterlogged area.
• step b4: through the prediction part of the YOLOv5-m network, perform multi-scale water accumulation area detection on the first color image according to the feature information of the waterlogged area output by the neck, and output the location information of the target sub-area in the water accumulation state.
• the above-mentioned multi-scale water accumulation area detection refers to scaling the first color image at different scales to obtain an image pyramid, extracting feature information of water accumulation areas at each scale from each layer of the pyramid, and performing water accumulation area detection on each layer. In this way, the accuracy of identifying the flooded area can be improved.
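• the image-pyramid idea can be illustrated with a minimal sketch (2×2 block averaging is an assumed down-scaling method chosen for brevity; YOLOv5 itself uses learned strided convolutions rather than an explicit pyramid):

```python
import numpy as np

def image_pyramid(image: np.ndarray, levels: int) -> list:
    """Build a simple image pyramid by halving the resolution at each
    level (2x2 block averaging); each level can then be searched for
    water-accumulation features at its own scale."""
    pyramid = [image]
    for _ in range(levels - 1):
        h, w = image.shape[0] // 2, image.shape[1] // 2
        image = image[:h * 2, :w * 2].reshape(h, 2, w, 2).mean(axis=(1, 3))
        pyramid.append(image)
    return pyramid

levels = image_pyramid(np.ones((640, 640)), 3)
# shapes: (640, 640), (320, 320), (160, 160)
```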
  • the server determines the water accumulation depth of the target sub-area at the first moment according to the location information of the target sub-area, the first depth image, and the pre-stored second depth image.
  • the second depth image is used to record the depth values of various locations in the region to be detected at the second moment.
  • the second moment is the moment when the area to be detected is in a state of no water accumulation.
  • the second moment precedes the first moment.
  • the coordinates of the pixel point corresponding to the target point in the first depth image are the same as the coordinates of the pixel point corresponding to the target point in the second depth image.
  • the target location is any location in the area to be detected.
  • the second depth image may be pre-stored in a database of the server, or stored in a database of other devices.
• in this way, the photographing device obtains the depth information of an area when it is not in a water-accumulated state and the depth information when the area is in a water-accumulated state, and the change in depth information before and after water accumulation is determined, so that the water accumulation depth of the area can be determined.
  • the method for detecting the depth of accumulated water provided by the embodiments of the present disclosure does not require contact with accumulated water, thereby avoiding the risk of wading when the user does not know the depth of accumulated water.
  • step S103 may be specifically implemented as the following steps:
  • the server determines a first depth value according to the location information of the target sub-region and the first depth image.
  • the first depth value is the depth value of the water surface of the target sub-region at the first moment.
  • the depth value of any point in the target sub-area can be regarded as the depth value of the water surface of the target sub-area. Therefore, the depth value of any point in the target sub-region in the first depth image may be used as the first depth value.
• for example, the server can directly extract, from the first depth image, the depth value of a boundary point of the target sub-area according to the coordinates of the boundary points of the target sub-area; the server may then use the depth value of that boundary point as the first depth value.
  • the server determines a second depth value according to the location information of the target sub-region and the second depth image.
  • the second depth value is the depth value of the lowest point of the target sub-region at the second moment. It should be understood that the lowest point is the lowest point of the terrain, that is, the point with the smallest altitude.
  • step S1032a may adopt any of the following implementation manners:
• implementation mode 1: the server extracts the depth values of each location in the target sub-region from the second depth image according to the location information of the target sub-region. Since the depth value of a location reflects the vertical distance between the location and the camera, the larger the depth value, the greater that vertical distance and the lower the altitude of the location. Therefore, the server can compare the depth values of the locations in the target sub-area and take the location with the largest depth value as the lowest point of the target sub-area. Correspondingly, the server uses the largest of those depth values as the second depth value.
  • the first implementation mode has the advantages of simple operation and small amount of calculation.
  • Implementation Mode 2 The server extracts the three-dimensional coordinates (x, y, depth) of each location of the target sub-region from the second depth image according to the location information of the target sub-region. Afterwards, the server performs surface fitting based on the three-dimensional coordinates of each location in the target sub-region to obtain a curved surface corresponding to the target sub-region, and the curved surface is used to reflect the three-dimensional topography of the target sub-region. Afterwards, the server extracts the depth value (that is, the second depth value) of the lowest point of the target sub-region from the fitted surface.
• the above surface fitting may use the least squares method.
  • the advantage of the second implementation manner is that local burrs in the terrain caused by noise points in the second depth image can be avoided, so as to accurately determine the second depth value.
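• implementation mode 2 can be sketched with an ordinary least-squares fit of a quadratic surface to the (x, y, depth) points (a sketch only; the quadratic basis and the grid sampling are assumptions made for illustration):

```python
import numpy as np

def fit_surface_max_depth(xs, ys, depths):
    """Least-squares fit of a quadratic surface depth = f(x, y) to the
    points of the sub-area, then return the largest fitted depth over
    the sampled points (i.e. the lowest terrain point). Fitting smooths
    out local burrs caused by noise points in the depth image."""
    A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
    coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
    fitted = A @ coeffs              # smoothed depths at the sample points
    return fitted.max()

# Bowl-shaped terrain: depth is largest (terrain lowest) at x = y = 0.
g = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(g, g)
xs, ys = X.ravel(), Y.ravel()
depths = 5.0 - (xs**2 + ys**2)       # deepest fitted depth is 5.0
max_depth = fit_surface_max_depth(xs, ys, depths)
```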
  • Step S1032a may also be implemented in other manners, which are not limited in this embodiment of the present disclosure.
  • step S1031a may be executed first, and then step S1032a is executed; or, step S1032a is executed first, and then step S1031a is executed; or, step S1031a and step S1032a are executed simultaneously.
  • the server uses the difference between the second depth value and the first depth value as the water accumulation depth of the target sub-area at the first moment.
  • the water accumulation depth of the target sub-region at the first moment is equal to the second depth value minus the first depth value.
• since the first depth value is the depth value of the water surface of the target sub-region at the first moment, and the second depth value is the depth value of the lowest point of the target sub-region at the second moment, the water depth determined in this example is the maximum water depth.
• the maximum water depth is usually used as the judging criterion, so in this embodiment of the present disclosure, what is measured as the water depth of the target sub-area is its maximum water depth.
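• steps S1031a to S1033a can be sketched as follows (a sketch only; taking the mean of the flooded region in the wet image as the water-surface depth is an assumption, since on a calm surface every point gives the same value):

```python
import numpy as np

def max_water_depth(depth_wet: np.ndarray, depth_dry: np.ndarray,
                    region_mask: np.ndarray) -> float:
    """Maximum water depth of the flooded sub-area.

    first depth value : water-surface depth at the first moment
                        (any point of a calm surface; the region
                        mean is used here as an assumption)
    second depth value: largest depth in the same region of the dry
                        image, i.e. its lowest terrain point
    """
    first_depth = float(depth_wet[region_mask].mean())
    second_depth = float(depth_dry[region_mask].max())
    return second_depth - first_depth

# Toy 2x2 region: dry terrain depths 5..7 m, calm water surface at 4 m.
depth_dry = np.array([[5.0, 6.0], [6.0, 7.0]])
depth_wet = np.full((2, 2), 4.0)
mask = np.ones((2, 2), dtype=bool)
d = max_water_depth(depth_wet, depth_dry, mask)  # 7.0 - 4.0 = 3.0
```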
  • step S103 may be specifically implemented as the following steps:
  • the server determines a first average depth value according to the location information of the target sub-region and the first depth image.
  • the first average depth value is an average value of the depth values of various locations in the target sub-region at the first moment.
  • the server extracts the depth value of each location of the target sub-region at the first moment from the first depth image according to the position information of the target sub-region. Afterwards, the server calculates the average value of the depth values of various locations in the target sub-area at the first moment to obtain a first average depth value.
• optionally, the server may extract the depth value of a boundary point of the target sub-region from the first depth image according to the coordinates of that boundary point, and directly use it as the first average depth value. Based on this implementation manner, the calculation process can be simplified and the calculation amount reduced.
  • the server determines a second average depth value according to the location information of the target sub-region and the second depth image.
  • the second average depth value is the average value of the depth values of various locations in the target sub-region at the second moment.
  • the server extracts the depth value of each location of the target sub-region at the second moment from the second depth image according to the position information of the target sub-region. Afterwards, the server calculates the average value of the depth values of various locations in the target sub-area at the second moment to obtain a second average depth value.
  • the server uses the difference between the second average depth value and the first average depth value as the ponding depth of the target sub-area at the first moment.
  • the water accumulation depth of the target sub-region at the first moment is equal to the second average depth value minus the first average depth value.
• since the first average depth value and the second average depth value are the averages of the depth values of the locations in the target sub-area at different times, the water depth determined by the embodiment shown in Figure 14 is the average water depth.
• it should be noted that the waterlogged water surface is not necessarily calm: rain or passing vehicles may produce splashes or waves on it, making a measured maximum water depth insufficiently accurate. Therefore, in the embodiments of the present disclosure, the average water depth is used as the water depth of the target sub-region, which can improve the accuracy of water depth detection.
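• the average-depth variant (steps S1031b to S1033b) can be sketched as follows (a sketch; the toy depth values are assumptions chosen to show how ripples average out):

```python
import numpy as np

def average_water_depth(depth_wet: np.ndarray, depth_dry: np.ndarray,
                        region_mask: np.ndarray) -> float:
    """Average water depth: difference between the mean depth of the
    region in the dry (second) image and in the wet (first) image.
    Less sensitive to splashes and waves than a single-point reading."""
    first_avg = float(depth_wet[region_mask].mean())
    second_avg = float(depth_dry[region_mask].mean())
    return second_avg - first_avg

depth_dry = np.array([[5.0, 6.0], [6.0, 7.0]])   # mean 6.0
depth_wet = np.array([[4.0, 4.2], [3.8, 4.0]])   # rippled surface, mean 4.0
avg = average_water_depth(depth_wet, depth_dry, np.ones((2, 2), dtype=bool))
```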
• when applied to vehicle-related application scenarios such as vehicle assisted driving or vehicle unmanned driving, based on the embodiment shown in FIG. 6, as shown in FIG. 15, step S102 can be specifically implemented as:
  • the server performs road segmentation on the area to be detected according to the first color image, and acquires position information of the vehicle driving area in the area to be detected.
  • the position information of the vehicle driving area refers to the pixel coordinates of the vehicle driving area in the first color image.
• since the vehicle can only travel within the vehicle driving area, it is necessary to perform road segmentation on the area to be detected and obtain the location information of the vehicle driving area.
  • the first color image is input to the road segmentation model to obtain the position information of the vehicle driving area in the area to be detected.
  • the road segmentation model can be constructed based on the Deeplab v3+ semantic segmentation algorithm.
  • Figure 16 shows a schematic diagram of a Deeplab v3+ semantic segmentation model, and the process of road segmentation will be described in detail below in conjunction with Figure 16.
  • Step c1 input the first color image into the Deeplab v3+ semantic segmentation model.
  • the Deeplab v3+ semantic segmentation model consists of an encoder and a decoder.
  • Step c2. Pass the first color image through a dilated convolution (DCNN) module in the encoder to obtain the first color image processed by the DCNN module.
• the three convolution layers in the DCNN module are set to different dilation rates, so that the sampling interval over the original data becomes larger.
• the purpose of step c2 is to enlarge the receptive field of the convolution kernel and reduce the loss of spatial size of the first color image.
• Step c3. Input the first color image processed by the DCNN module into the atrous spatial pyramid pooling (ASPP) module in the encoder, so as to scale the first color image at different scales into an image pyramid, achieving multi-scale feature extraction and yielding feature vectors of fixed size.
  • the ASPP module includes: 1 ⁇ 1 convolution (Conv), 3 ⁇ 3 Conv with an expansion rate of 6, 3 ⁇ 3 Conv with an expansion rate of 12, 3 ⁇ 3 Conv with an expansion rate of 18, and Image Pooling.
  • a depthwise separable convolution is applied in an ASPP module.
• depthwise separable convolution uses a different convolution kernel for each input channel, decomposing an ordinary convolution into a depthwise convolution followed by a pointwise (1×1) convolution, which improves the efficiency of the convolution operation and the convolution effect.
• Step c4. Input the first color image processed by the ASPP module, together with its feature vector, into the decoder to obtain the segmentation result of the vehicle driving area in the first color image.
• the decoder adopts a low-level feature extraction module to obtain detailed information of the vehicle driving area in the first color image, such as edges, corners, colors, pixels, and gradients, so as to restore the boundary information of the vehicle driving area, and then obtains the segmentation result along the boundary of the vehicle driving area through upsampling.
  • the depthwise separable convolution is applied in the decoder, thereby improving the efficiency of the convolution operation of the decoder and improving the convolution effect.
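The effect of the dilation rates described above can be illustrated with a minimal 1-D dilated convolution in NumPy (a hypothetical sketch; names and shapes are not from the disclosure). A kernel of size k with dilation d covers a receptive field of d·(k−1)+1 samples without adding weights:

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D convolution with a dilated kernel (illustrative).

    A dilation rate d inserts d-1 gaps between kernel taps, so a kernel of
    size k covers d*(k-1)+1 input samples with the same number of weights.
    """
    k = len(kernel)
    span = dilation * (k - 1) + 1          # receptive field of the kernel
    out = []
    for start in range(len(signal) - span + 1):
        taps = signal[start:start + span:dilation]   # sample with gaps
        out.append(float(np.dot(taps, kernel)))
    return out
```

With k = 3 and dilation 6 (as in the ASPP branches above), one kernel already covers 13 samples, which is how the sampling interval "becomes larger" without extra parameters.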
  • the server detects water accumulation in the vehicle driving area according to the position information of the vehicle driving area in the area to be detected and the first color image, and acquires position information of a target sub-area in a waterlogging state in the vehicle driving area.
• a water accumulation detection model is used for water accumulation detection. Specifically, the first color image and the position information of the vehicle driving area in the area to be detected are input into the water accumulation detection model, which outputs the position coordinates of the target sub-area in a state of water accumulation in the vehicle driving area.
• the water accumulation detection method provided in Figure 15 only detects the vehicle driving area, which reduces the amount of input image data and saves computing resources. At the same time, the step of judging whether the accumulated water lies in the vehicle driving area is omitted, improving the efficiency of water accumulation detection.
• when the water depth detection method is applied to vehicle-related application scenarios such as vehicle assisted driving or unmanned driving, based on the embodiment shown in FIG. 6 and as shown in FIG. 17 , the water depth detection method also includes the following step before step S103:
  • the server performs road segmentation on the area to be detected according to the first color image, and acquires position information of the vehicle driving area in the area to be detected.
• for the specific implementation of step S104, reference may be made to the description of step S1021 above, which is not repeated here.
  • the server judges whether the target sub-area is located in the vehicle driving area according to the location information of the vehicle driving area and the location information of the target sub-area.
• both the case where the target sub-area lies entirely within the vehicle driving area and the case where it lies partly within the vehicle driving area are considered as being within the vehicle driving area.
• a rectangular coordinate system is established. The position information (Xmin1, Ymin1, Xmax1, Ymax1) of the target sub-area can be obtained according to step S102, and the position information (Xmin2, Ymin2, Xmax2, Ymax2) of the vehicle driving area can be obtained according to step S104.
  • step S103 when the target sub-area is located in the vehicle driving area, step S103 is performed.
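The overlap judgment above can be sketched as a simple axis-aligned box intersection test (a hypothetical helper; coordinates follow the (Xmin, Ymin, Xmax, Ymax) convention described above, and partial overlap counts as "within" the driving area):

```python
def in_driving_area(target_box, driving_box):
    """True if the target sub-area lies wholly or partly in the driving area.

    Boxes are (Xmin, Ymin, Xmax, Ymax) in the image coordinate system.
    """
    xmin1, ymin1, xmax1, ymax1 = target_box
    xmin2, ymin2, xmax2, ymax2 = driving_box
    # The boxes overlap unless one lies entirely to one side of the other.
    return not (xmax1 < xmin2 or xmax2 < xmin1 or
                ymax1 < ymin2 or ymax2 < ymin1)
```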
• the server only needs to detect the water accumulation depth of water accumulation areas within the vehicle driving area, without detecting the water accumulation depth of water accumulation areas outside the vehicle driving area, thereby saving computing resources of the server.
• the embodiment of the present disclosure also provides a method for updating the second depth image, so that the database stores a depth image that reflects the current topographical conditions of the area to be detected. As shown in Figure 19, the method includes the following steps:
  • the server obtains a third color image and a third depth image of the region to be detected at a third moment through a photographing device.
  • the third moment is located after the second moment.
  • the third depth image is used to record the depth values of various locations in the area to be detected at the third moment.
  • the third color image is used to reflect the real topography of the area to be detected at the third moment.
  • the server inputs the third color image into the weather category recognition model to determine the weather type of the third color image.
  • the aforementioned weather categories include rainy days or non-rainy days.
• the above-mentioned weather category identification model may be a binary classifier, that is, a classifier that distinguishes two weather categories: rainy days and non-rainy days.
  • a classifier is a method for classifying samples in data mining, including algorithms such as decision trees, logistic regression, naive Bayes, and neural networks.
  • a naive Bayesian classifier may be used to identify rainy days and non-rainy days.
  • the Naive Bayesian classifier is a probability network based on the Bayesian formula.
• the Bayesian formula satisfies the following formula (1): P(θ|x) = P(x|θ)·P(θ) / P(x)
• P(θ) represents the initial probability of hypothesis θ, that is, the prior probability of θ, which reflects background knowledge about the chance that θ is the correct hypothesis.
• P(x) represents the total probability of observing the set x, that is, the probability of x independent of any particular hypothesis; P(x|θ) represents the probability of observing x when hypothesis θ holds; and P(θ|x) is the posterior probability of θ given the observation x.
  • the naive Bayesian classifier is trained for weather category recognition, and the weather category recognition formula of the Naive Bayesian classifier can be obtained.
  • the weather category identification formula can satisfy the following formula (2):
• θmap(x) = argmax over θn of P(θn)·∏j P(xj|θn), where θmap(x) represents the most likely weather category (rainy day or non-rainy day) of the third color image x
• xj represents the j-th attribute of the third color image x
• P(θn) satisfies the following formula (3), and P(xj|θn) satisfies the following formula (4)
  • n is the total number of element factors in the element set
  • ⁇ i is the i-th element factor in the element set.
• the above element factors are factors closely related to recognition of the third color image, such as saturation, hue, and brightness. Reasonable classification and data mining of these factors can reveal significant correlations.
  • x ij is the j-th attribute of the i-th element factor of the third color image x
• an attribute of an element factor is a statistic mined from that factor, such as the mean saturation, mean hue, or mean brightness.
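The MAP decision of formula (2) over such attributes can be sketched as follows. All numbers, names, and the Gaussian likelihood assumption are illustrative, not from the disclosure:

```python
import math

def gaussian_pdf(x, mean, var):
    """Gaussian likelihood of a continuous attribute value."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(attributes, priors, stats):
    """Pick the class maximizing P(theta_n) * prod_j P(x_j | theta_n).

    attributes -- e.g. (mean saturation, mean brightness) of the image
    priors     -- {"rainy": p, "non-rainy": 1 - p}
    stats      -- per class, a list of (mean, variance) per attribute
    """
    best, best_score = None, -1.0
    for label, prior in priors.items():
        score = prior
        for x_j, (mu, var) in zip(attributes, stats[label]):
            score *= gaussian_pdf(x_j, mu, var)
        if score > best_score:
            best, best_score = label, score
    return best
```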
• the above training of the naive Bayesian classifier for weather category recognition can be realized by using a residual neural network (ResNet).
• ResNet refers to adding a shortcut connection to the neural network that transmits the input information (the input in Figure 20 is x) directly to a later accumulation layer, so that the subsequent network layer does not need to learn the entire image information output by the previous layer but directly learns the residual output by the previous layer (the residual in Figure 20 is F(x)). In this way, ResNet can speed up the training process and improve the accuracy of weather category recognition.
• if the weather category of the third color image is a rainy day, the third depth image cannot reflect the topography of the area to be detected in a state of no water accumulation, so there is no need to consider updating the second depth image with the third depth image.
• if the weather category of the third color image is a non-rainy day, the server next executes step S203.
  • the server determines the similarity between the third color image and the second color image.
  • the second color image is a color image of the region to be detected captured by the photographing device at the second moment.
  • the second color image can be used to reflect the real topography of the area to be detected at the second moment.
  • the second color image may be pre-stored in the database of the server, or in the database of other devices.
  • the server may determine the similarity between the third color image and the second color image by using a normalized correlation (NC) matching algorithm in a template matching method.
• the template matching method refers to: given a template image and a matching image, finding the part of the matching image that is most similar to the template image.
• the specific implementation is to slide the template image over the matching image, calculate the similarity at each position in units of pixels, and finally obtain the maximum similarity between the template image and the matching image.
  • the template image is the second color image
  • the matching image is the third color image.
  • the similarity between the second color image and the third color image is determined by calculating the correlation coefficient between the second color image and the third color image.
• the normalized correlation coefficient matching algorithm may satisfy the following formula (5): R(x,y) = Σx′,y′ [T(x′,y′)·I(x+x′,y+y′)] / √(Σx′,y′ T(x′,y′)² · Σx′,y′ I(x+x′,y+y′)²)
  • (x, y) represents the position coordinates of the pixel in the image
• T(x, y) represents the pixel value at (x, y) in the second color image (the template image)
• I(x, y) represents the pixel value at (x, y) in the third color image (the matching image)
  • R ( x, y) represents the similarity between the second color image and the third color image.
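At a single alignment, with the two images the same size (as when comparing the stored second color image with the third), the normalized correlation score can be sketched in NumPy. This is a hedged sketch of the NC score only, not the full sliding-window search:

```python
import numpy as np

def ncc_similarity(template, image):
    """Normalized correlation of two same-size images at one alignment.

    Returns 1.0 for identical images; smaller values indicate less
    similarity. template and image are 2-D arrays of pixel values.
    """
    t = template.astype(float).ravel()
    i = image.astype(float).ravel()
    denom = np.sqrt((t ** 2).sum() * (i ** 2).sum())
    return float((t * i).sum() / denom)
```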
  • the server judges whether the similarity between the third color image and the second color image is less than or equal to a preset threshold.
  • the server updates the second depth image with the third depth image.
  • the aforementioned preset threshold may be 0.9.
  • the preset threshold may be determined according to actual conditions, which is not limited in this embodiment of the present disclosure.
• if the similarity between the third color image and the second color image is greater than the preset threshold, it means that the area to be detected has not changed significantly, and the pre-stored second depth image can still reflect the actual topography of the area to be detected, so there is no need to update the second depth image.
• if the similarity between the third color image and the second color image is less than or equal to the preset threshold, it indicates that the area to be detected has changed significantly, and the pre-stored second depth image can no longer reflect the actual topographical conditions of the area to be detected; it is therefore necessary to update the second depth image with the third depth image.
  • the server updating the second depth image with the third depth image may be specifically implemented as: deleting the second depth image from the database, and storing the third depth image in the database.
  • the third depth image can play the role played by the original second depth image.
• in addition to updating the second depth image with the third depth image, the server also updates the second color image with the third color image.
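The update rule above (delete the old second images, store the third in their place) can be sketched with a dict standing in for the database; the store layout and key names are hypothetical:

```python
def maybe_update(db, third_color, third_depth, similarity, threshold=0.9):
    """Replace the pre-stored second images when the terrain has changed.

    similarity -- similarity between the third and second color images;
                  a value at or below the threshold indicates a change.
    """
    if similarity <= threshold:
        db["second_depth"] = third_depth   # third depth image now plays the old role
        db["second_color"] = third_color
    return db
```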
• the third color image and the third depth image of the area to be detected at the third moment are obtained by the photographing device, and the weather category of the third color image is identified using the weather category recognition model to ensure that the weather in the third color image is not rainy, which eliminates the interference of weather on the image similarity calculation.
• by calculating the similarity between the third color image and the second color image, it is determined whether the topography of the area to be detected has changed, and thus whether to update the second depth image with the third depth image.
• on the one hand, changes in the topography of the area to be detected can be learned by calculating the similarity between images, which is practical and requires no on-site inspection by the user; on the other hand, when the similarity between the third color image and the second color image is below the preset threshold, the second depth image is deleted and replaced with the third depth image, which reduces storage space and ensures that the pre-stored depth image accurately reflects the topography of the area to be detected at the current moment.
  • the depth of accumulated water determined by the method for detecting the accumulated water depth provided in the embodiment of the present application may be applied to various scenarios.
  • the application of the water depth determined by the water depth detection method to vehicle-related application scenarios such as vehicle assisted driving or vehicle unmanned driving will be exemplarily described below.
  • the method for detecting the depth of accumulated water may also include the following steps:
  • the server obtains the maximum wading depth supported by the vehicle.
• the maximum wading depth supported by the above vehicle depends on the vehicle model, for example, on the tire height, chassis height, height of the door frame from the ground, and height of the exhaust pipe from the ground.
• for example, the maximum wading depth supported by the vehicle may be two-thirds of the tire height; that is, when the accumulated water depth is greater than or equal to two-thirds of the tire height, it is determined that the vehicle terminal cannot safely wade through the water.
  • the server compares the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area at the first moment.
  • the server sends a first prompt message to the vehicle terminal.
  • the above-mentioned first prompt information is used to indicate that the vehicle can safely pass through the target sub-area.
• for example, if the maximum wading depth supported by the vehicle is 60 cm and the water accumulation depth of the target sub-area at the first moment is 50 cm, the maximum wading depth of 60 cm is greater than the 50 cm water accumulation depth, so it is determined that the vehicle terminal can safely ford, and the first prompt information is sent to the vehicle terminal.
  • the vehicle terminal may send the first prompt information to the driver.
  • the server sends a second prompt message to the vehicle terminal.
  • the above-mentioned second prompt information is used to warn that there is danger in the target sub-area.
• for example, if the maximum wading depth supported by the vehicle is 60 cm and the water accumulation depth of the target sub-area at the first moment is 70 cm, the maximum wading depth of 60 cm is less than the 70 cm water accumulation depth, so it is determined that the vehicle terminal cannot safely ford, and the second prompt information is sent to the vehicle terminal.
  • the prompt information (for example, the first prompt information or the second prompt information) sent by the server to the vehicle terminal may be voice prompt information or text prompt information.
• when the vehicle is driven by a driver, the vehicle terminal relays the second prompt information to the driver after receiving it.
  • the driver can make a detour in advance according to the prompt of the second prompt information, so as to avoid the driver from driving the vehicle into a relatively dangerous water accumulation area.
  • the vehicle terminal automatically controls the vehicle to detour in advance after receiving the second prompt information, so as to avoid more dangerous water accumulation areas.
• the embodiment shown in Figure 21 brings at least the following beneficial effects: according to the maximum wading depth supported by the vehicle and the water depth of the water accumulation area, it is judged whether the vehicle terminal can safely ford, and the driver is notified in time if it cannot, which can effectively improve driving safety and reduce the occurrence of safety accidents.
• the entire process of judging whether the vehicle terminal can safely ford is implemented by the server, and the judgment result is sent to the vehicle terminal in the form of prompt information, which reduces the computation burden of the vehicle terminal and improves the applicability of the water accumulation depth detection method to different types of vehicle terminals.
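The comparison-and-prompt logic above can be sketched as follows; the prompt strings are illustrative placeholders, not the wording used by the disclosure:

```python
def wading_prompt(max_wading_depth, ponding_depth):
    """Decide which prompt to send based on the wading-depth comparison.

    First prompt: the vehicle can safely pass through the target sub-area.
    Second prompt: the target sub-area is dangerous (depth >= limit).
    """
    if max_wading_depth > ponding_depth:
        return "first prompt: the vehicle can safely pass through the target sub-area"
    return "second prompt: the target sub-area is dangerous"
```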
  • the method for detecting the depth of accumulated water may also include the following steps:
  • the server sends the water depth of the target sub-area at the first moment to the vehicle terminal.
  • the vehicle terminal compares the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area at the first moment.
  • the vehicle terminal sends a first prompt message.
  • the vehicle terminal sends a second prompt message.
• the embodiment shown in Figure 22 brings at least the following beneficial effects: according to the maximum wading depth supported by the vehicle and the water depth of the water accumulation area, it is judged whether the vehicle terminal can safely ford, and the driver is notified in time if it cannot, which can effectively improve driving safety and reduce the occurrence of safety accidents.
  • the judging process of judging whether the vehicle terminal can ford safely is handed over to the vehicle terminal, which can reduce the calculation amount of the server.
  • the method for detecting the depth of accumulated water may also include the following steps:
  • the server performs lane recognition in the area to be detected, and determines the location information of each lane in the area to be detected.
  • the server performs lane line recognition in the area to be detected according to the first color image to obtain the structural features of the lane line.
  • the structural features of the lane markings include: straight lane markings, dashed lane markings, hyperbolic lane markings, and the like.
  • the server performs lane recognition in the area to be detected according to the structural characteristics of the lane lines, and obtains the number of lanes in the area to be detected, and the relative positional relationship between each lane and the lane line.
  • the server determines the position information of each lane in the area to be detected according to the number of lanes in the area to be detected and the relative positional relationship between each lane and the lane line.
• for example, the server can learn that there are three lanes in the area to be detected and determine the specific positions of the three lanes.
  • the server determines the lanes affected by the target sub-area according to the position information of each lane in the area to be detected and the position information of the target sub-area.
• if the part of the target sub-area located on a lane satisfies a preset condition, the lane may be considered as a lane affected by the target sub-area.
  • the preset conditions may include one or more of the following:
• Condition 1: the width of the part of the target sub-area on the lane is greater than a preset value.
• Condition 2: the ratio of the width of the part of the target sub-area on the lane to the width of the lane is greater than a preset ratio.
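The two conditions can be combined in a small predicate. The disclosure leaves the preset value and preset ratio unspecified, so the thresholds below are hypothetical arguments:

```python
def lane_affected(overlap_width, lane_width, preset_width, preset_ratio):
    """True if the lane counts as affected by the target sub-area.

    Condition 1: the ponded part on the lane is wider than preset_width.
    Condition 2: its ratio to the lane width exceeds preset_ratio.
    Either condition suffices.
    """
    return (overlap_width > preset_width or
            overlap_width / lane_width > preset_ratio)
```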
  • the server sends prompt information to the vehicle terminal according to the lanes affected by the target sub-area and the depth of water accumulation in the target sub-area.
  • step Sc3 may be specifically implemented as the following steps:
  • the server obtains the maximum wading depth supported by the vehicle.
  • the server compares the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area.
  • the server sends a first prompt message to the vehicle terminal.
  • the server judges whether there is a lane that the vehicle can pass in the area to be detected according to the lanes affected by the target sub-area.
• the lane affected by the target sub-area can be considered as a lane that vehicles cannot pass through, so as to avoid danger when vehicles wade.
  • the server may first determine the target lane in the area to be detected whose passing direction is the same as the traveling direction of the vehicle according to the traveling direction of the vehicle. Afterwards, the server judges whether the target lanes are all lanes affected by the target sub-area. If there is at least one lane in the target lane that is not affected by the target sub-area, the server may determine that there is a lane in the area to be detected that the vehicle can pass through, and then may execute the following step Sc35. Or, if the target lanes are all lanes affected by the target sub-area, the server can determine that there is no lane in which vehicles can pass in the area to be detected, and then can execute the following step Sc36.
  • the server sends third prompt information to the vehicle terminal.
  • the third prompt information is used to indicate the lane in which the vehicle can pass in the area to be detected. Further, the third prompt information is also used to indicate the lanes that vehicles cannot pass in the area to be detected.
• the third prompt information sent by the server to the vehicle terminal may be: "On the road ahead, lane 3 is impassable and lane 2 is passable".
  • the server sends fourth prompt information to the vehicle terminal.
  • the fourth prompt information is used for prompting to modify the driving route.
• the fourth prompt information sent by the server to the vehicle terminal may be: "There is no passable lane on the road ahead; please change the driving route in advance".
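The passable-lane decision of steps Sc34 to Sc36 above can be sketched as follows; lane identifiers and message strings are illustrative:

```python
def lane_prompt(target_lanes, affected_lanes):
    """Choose between the third and fourth prompts.

    target_lanes   -- lanes whose direction matches the vehicle's travel
    affected_lanes -- lanes affected by the target sub-area (impassable)
    """
    passable = [lane for lane in target_lanes if lane not in affected_lanes]
    if passable:
        return ("third prompt: passable lanes " +
                ", ".join(str(lane) for lane in passable))
    return "fourth prompt: no passable lane ahead, please change the driving route"
```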
• the embodiment shown in Figure 24 brings at least the following beneficial effects: the impact of the water accumulation situation in the target sub-area on the lanes is comprehensively considered, so that more useful prompt information (that is, the above-mentioned first, third, or fourth prompt information) can be provided to the driver, helping drivers deal with different water accumulation conditions in the target sub-area more effectively. For example, when the accumulated water in the target sub-area affects all lanes in the vehicle's direction of travel, the driver can be reminded to change the route in time, preventing the driver from driving close to the target sub-area only to find that it cannot be passed.
  • the water accumulation depth detection device 300 may include: an image acquisition module 301 , a water accumulation detection module 302 and a depth detection module 303 .
  • the above-mentioned water depth detection device 300 may further include: an image processing module 304 , a communication module 305 and a data processing module 306 .
  • the image acquiring module 301 is configured to acquire a first color image and a first depth image of the area to be detected at a first moment, and the first depth image is used to record depth values of various locations in the area to be detected at the first moment.
  • the water accumulation detection module 302 is configured to detect water accumulation in the area to be detected according to the first color image, and obtain position information of a target sub-area in a state of water accumulation in the area to be detected.
• the depth detection module 303 is configured to determine the water depth of the target sub-region according to the position information of the target sub-region, the first depth image and the pre-stored second depth image, where the second depth image is used to record the depth values of various locations in the area to be detected at the second moment, and the second moment is a time when the area to be detected is in a state of no water accumulation.
• the above-mentioned depth detection module 303 is specifically configured to: determine a first depth value according to the position information of the target sub-region and the first depth image, the first depth value being the depth value of the water surface of the target sub-region at the first moment; determine a second depth value according to the position information of the target sub-region and the second depth image, the second depth value being the depth value of the lowest point of the target sub-region at the second moment; and use the difference between the second depth value and the first depth value as the ponding depth of the target sub-area.
• the above-mentioned depth detection module 303 is specifically configured to determine the depth value of each location of the target sub-region at the second moment according to the position information of the target sub-region and the second depth image, and select the largest depth value among the depth values of the various locations as the second depth value.
• the above-mentioned depth detection module 303 is specifically configured to determine the three-dimensional coordinates of each location of the target sub-region at the second moment according to the position information of the target sub-region and the second depth image, perform surface fitting on the three-dimensional coordinates of the various locations to obtain the surface corresponding to the target sub-area, and use the depth value of the lowest point of that surface as the second depth value.
• the above-mentioned depth detection module 303 is specifically configured to: determine a first average depth value according to the position information of the target sub-region and the first depth image, the first average depth value being the average of the depth values at various locations of the target sub-region at the first moment; determine a second average depth value according to the position information of the target sub-region and the second depth image; and use the difference between the second average depth value and the first average depth value as the ponding depth of the target sub-region.
• the above-mentioned water depth detection device 300 further includes an image processing module 304. The image processing module 304 is configured to perform road segmentation on the area to be detected according to the first color image and obtain the position information of the vehicle driving area in the area to be detected. The water accumulation detection module 302 is specifically configured to detect water accumulation in the vehicle driving area according to the position information of the vehicle driving area in the area to be detected and the first color image, and obtain the position information of the target sub-area in a state of water accumulation in the vehicle driving area.
• the above-mentioned water depth detection device 300 further includes an image processing module 304. The image processing module 304 is configured to perform road segmentation on the area to be detected according to the first color image and obtain the position information of the vehicle driving area in the area to be detected. The water accumulation detection module 302 is further configured to judge, according to the position information of the vehicle driving area and the position information of the target sub-area, whether the target sub-area is located in the vehicle driving area. The above-mentioned depth detection module 303 is specifically configured to, if the target sub-area is located in the vehicle driving area, determine the water accumulation depth of the target sub-area according to the position information of the target sub-area, the first depth image and the pre-stored second depth image.
• the above-mentioned image acquisition module 301 is further configured to acquire a third color image and a third depth image of the area to be detected at a third moment, where the third moment is after the second moment. The above-mentioned image processing module 304 is further configured to: input the third color image into the weather category recognition model to determine the weather category of the third color image, the weather category including rainy day or non-rainy day; when the weather category of the third color image is a non-rainy day, determine the similarity between the third color image and the second color image, the second color image being the color image of the area to be detected taken at the second moment; and when the similarity between the third color image and the second color image is less than a preset threshold, update the second depth image with the third depth image.
  • the above-mentioned water depth detection apparatus 300 further includes: a communication module 305; the communication module 305 is configured to send the water depth of the target sub-area to the terminal device.
  • the above-mentioned water depth detection device 300 also includes: a communication module 305 and a data processing module 306;
• the above-mentioned communication module 305 is configured to: if the maximum wading depth supported by the vehicle is greater than the water accumulation depth of the target sub-area at the first moment, send first prompt information to the vehicle terminal, the first prompt information being used to indicate that the vehicle can safely pass through the target sub-area; or, if the maximum wading depth supported by the vehicle is less than or equal to the water accumulation depth of the target sub-area at the first moment, send second prompt information to the vehicle terminal, the second prompt information being used to warn that the target sub-area is dangerous.
  • the above-mentioned water depth detection device further includes: a data processing module 306 and a communication module 305.
  • the above-mentioned data processing module 306 is configured to perform lane recognition on the area to be detected and determine the position information of each lane in the area to be detected; to determine, according to the position information of each lane in the area to be detected and the position information of the target sub-area, the lanes affected by the target sub-area; and to generate prompt information according to the lanes affected by the target sub-area and the water accumulation depth of the target sub-area.
  • the above-mentioned communication module 305 is configured to send prompt information to the vehicle terminal.
  • the above-mentioned data processing module 306 is specifically configured to compare the maximum wading depth supported by the vehicle with the water accumulation depth of the target sub-area; if the maximum wading depth supported by the vehicle is greater than the water accumulation depth of the target sub-area, to generate first prompt information, the first prompt information being used to indicate that the vehicle can safely pass through the target sub-area; if the maximum wading depth supported by the vehicle is less than or equal to the water accumulation depth of the target sub-area, to judge, according to the lanes affected by the target sub-area, whether there is a lane in the area to be detected through which the vehicle can pass; if such a lane exists, to generate third prompt information, the third prompt information being used to indicate the lane through which the vehicle can pass; and if no such lane exists, to generate fourth prompt information, the fourth prompt information being used to prompt the user to modify the driving route.
  • the above-mentioned image acquisition module 301 is further configured to acquire the first color image and the first depth image of the area to be detected at the first moment when a preset condition is met; wherein the preset condition includes: the area to be detected is located on the driving route corresponding to the vehicle terminal.
  • the above preset condition further includes: the distance between the vehicle terminal and the photographing device is smaller than the preset distance.
  • Some embodiments of the present disclosure provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium), where computer program instructions are stored in the computer-readable storage medium, and the computer program instructions, when run on a processor, cause the processor to execute one or more steps of the method for detecting the depth of accumulated water as described in any one of the above embodiments.
  • the above-mentioned computer-readable storage medium may include, but is not limited to: magnetic storage devices (such as hard disks, floppy disks, or tapes), optical disks (such as compact discs (CDs) and digital versatile discs (DVDs)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives).
  • Various computer-readable storage media described in this disclosure can represent one or more devices and/or other machine-readable storage media for storing information.
  • the term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
  • Some embodiments of the present disclosure also provide a computer program product.
  • the computer program product includes computer program instructions which, when executed on a computer, cause the computer to execute one or more steps of the method for detecting the depth of accumulated water as described in the above-mentioned embodiments.
  • Some embodiments of the present disclosure also provide a computer program.
  • When the computer program is executed on a computer, the computer program causes the computer to execute one or more steps of the method for detecting the depth of accumulated water as described in the above-mentioned embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Measurement Of Optical Distance (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

积水深度检测方法及装置,该方法包括:通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像;根据第一彩色图像,对待检测区域进行积水检测,获取待检测区域中处于积水状态的目标子区域的位置信息;根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度。

Description

积水深度检测方法及装置
本申请要求于2021年12月29日提交国家知识产权局、申请号为202111644105.7、申请名称为“积水深度检测方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及数据处理技术领域,尤其涉及积水深度检测方法及装置。
背景技术
路面积水是影响出行、交通以及驾驶安全的重要因素之一,尤其是在未知积水深度的情况下,贸然涉水会给人们的生命安全和财产安全造成损失。因此,对路面积水深度进行检测是非常有必要的。
现阶段,常见的路面积水深度检测包括以下几种:
第一种,在容易积水的路段(例如低洼路段、桥洞下等)设立水位尺,由人工读取水位尺的刻度来获知积水的深度。
第二种,在车辆上安装积水检测仪,在车辆经过积水的路段时,积水检测仪可检测该路段的积水深度。
发明内容
一方面,提供一种积水深度检测方法,所述方法包括:通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像,第一深度图像用于记录在第一时刻时待检测区域中各个地点的深度值;根据第一彩色图像,对待检测区域进行积水检测,获取待检测区域中处于积水状态的目标子区域的位置信息;根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度,第二深度图像用于记录在第二时刻时待检测区域中各个地点的深度值,第二时刻为待检测区域处于未积水状态的时刻。
在一些实施例中,上述根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度,包括:根据目标子区域的位置信息以及第一深度图像,确定第一深度值,第一深度值为第一时刻时目标子区域的水面的深度值;根据目标子区域的位置信息以及第二深度图像,确定第二深度值,第二深度值为第二时刻时目标子区域的最低点的深度值;以第二深度值与第一深度值之间的差值作为目标子区域的积水深度。
另一些实施例中,上述根据目标子区域的位置信息以及第二深度图像,确定第二深度值,包括:根据目标子区域的位置信息以及第二深度图像,确定第二时刻时目标子区域的各个地点的深度值;从第二时刻时目标子区域的各个地点的深度值中,选择最大的深度值作为第二深度值。
另一些实施例中,上述根据目标子区域的位置信息以及第二深度图像,确定第二深度值,包括:根据目标子区域的位置信息以及第二深度图像,确定第二时刻时目标子区域的各个地点的三维坐标;根据第二时刻时目标子区域的各个地点的三维坐标,进行曲面拟合,得到目标子区域对应的曲面;以目标子区域对应的曲面的最低点的深度值作为第二深度值。
另一些实施例中,上述根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度,包括:根据目标子区域的位置信息以及第一深度图像,确定第一平均深度值,第一平均深度值为第一时刻时目标子区域的各个地点的深度值的平均值;根据目标子区域的位置信息以及第二深度图像,确定第二平均深度值,第二平均深度值为第二时刻时目标子区域的各个地点的深度值的平均值;以第二平均深度值与第一平均深度值之间的差值作为目标子区域的积水深度。
另一些实施例中,上述根据第一彩色图像,对待检测区域进行积水检测,获取待检测区域中处于积水状态的目标子区域的位置信息,包括:根据第一彩色图像,对待检测区域进行道路分割,获取待检测区域中车辆行驶区域的位置信息;根据待检测区域中车辆行驶区域的位置信息和第一彩色图像,对车辆行驶区域进行积水检测,获取车辆行驶区域中处于积水状态的目标子区域的位置信息。
另一些实施例中,在确定目标子区域的积水深度之前,上述方法还包括:根据第一彩色图像,对待检测区域进行道路分割,获取待检测区域中车辆行驶区域的位置信息;根据车辆行驶区域的位置信息和目标子区域的位置信息,判断目标子区域是否位于车辆行驶区域内;根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度,包括:若目标子区域位于车辆行驶区域内,则根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度。
另一些实施例中,上述方法还包括:通过拍摄装置获取待检测区域在第三时刻时的第三彩色图像和第三深度图像,第三时刻位于第二时刻之后;将第三彩色图像输入天气类别识别模型,确定第三彩色图像的天气类型,天气类别包括雨天或者非雨天;在第三彩色图像的天气类别为非雨天时,确定第三彩色图像与第二彩色图像之间的相似度,第二彩色图像为拍摄装置在第二时刻时拍摄待检测区域而得到的彩色图像;在第三彩色图像与第二彩色图像之间的相似度小于预设阈值时,以第三深度图像更新第二深度图像。
另一些实施例中,上述方法还包括:向终端设备发送目标子区域的积水深度。
另一些实施例中,上述方法还包括:比较车辆支持的最大涉水深度以及目标子区域在第一时刻时的积水深度;若车辆支持的最大涉水深度大于目标子区域在第一时刻时的积水深度,向车辆终端发送第一提示信息,第一提示信息用于表示车辆能够安全通过目标子区域;或者,若车辆支持的最大涉水深度小于或等于目标子区域在第一时刻时的积水深度,向车辆终端发送第二提示信息,第二提示信息用于警告目标子区域存在危险。
另一些实施例中,上述方法还包括:对待检测区域进行车道识别,确定待检测区域中各个车道的位置信息;根据待检测区域中各个车道的位置信息以及目标子区域的位置信息,确定目标子区域所影响的车道;根据目标子区域所影响的车道以及目标子区域的积水深度,向车辆终端发送提示信息。
另一些实施例中,上述根据目标子区域所影响的车道以及目标子区域的积水深度,向车辆终端发送提示信息,包括:比较车辆支持的最大涉水深度与目标子区域的积水深度;若车辆支持的最大涉水深度大于目标子区域的积水深度,向车辆终端发送第一提示信息,第一提示信息用于表示车辆能够安全通过目标子区域;若车辆支持的最大涉水深度小于或等于目标子区域的积水深度,根据目标子区域所影响的车道,判断待检测区域是否存在车辆能够通行的车道;若待检测区域存在车辆能够通行的车道,向车辆终端发送第三提示信息,第三提示信息用于指示车辆能够通行的车道;若待检测区域不存在车辆能够通行的车道,向车辆终端发送第四提示信息,第四提示信息用于提示用户修改驾驶路线。
另一些实施例中,上述通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像,包括:在满足预设条件的情况下,通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像;其中,预设条件包括:待检测区域位于车辆终端对应的行驶路线上。
另一些实施例中,上述预设条件还包括:车辆终端与拍摄装置之间的距离小于预设距离。
另一方面,提供一种积水深度检测装置。积水深度检测装置包括:图像获取模块,用于获取待检测区域在第一时刻的第一彩色图像和第一深度图像,第一深度图像用于记录在第一时刻时待检测区域中各个地点的深度值;积水检测模块,用于根据第一彩色图像,对待检测区域进行积水检测,获取待检测区域中处于积水状态的目标子区域的位置信息;深度检测模块,用于根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度,第二深度图像用于记录在第二时刻时待检测区域中各个地点的深度值,第二时刻为待检测区域处于未积水状态的时刻。
在一些实施例中,上述深度检测模块,具体用于根据目标子区域的位置信息以及第一深度图像,确定第一深度值,第一深度值为第一时刻时目标子区域的水面的深度值;根据目标子区域的位置信息以及第二深度图像,确定第二深度值,第二深度值为第二时刻时目标子区域的最低点的深度值;以第二深度值与第一深度值之间的差值作为目标子区域的积水深度。
另一些实施例中,上述深度检测模块,具体用于根据目标子区域的位置信息以及第二深度图像,确定第二时刻时目标子区域的各个地点的深度值;从第二时刻时目标子区域的各个地点的深度值中,选择最大的深度值作为第二深度值。
另一些实施例中,上述深度检测模块,具体用于根据目标子区域的位置信息以及第二深度图像,确定第二时刻时目标子区域的各个地点的三维坐标;根据第二时刻时目标子区域的各个地点的三维坐标,进行曲面拟合,得到目标子区域对应的曲面;以目标子区域对应的曲面的最低点的深度值作为第二深度值。
另一些实施例中,上述深度检测模块,具体用于根据目标子区域的位置信息以及第一深度图像,确定第一平均深度值,第一平均深度值为第一时刻时目标子区域的各个地点的深度值的平均值;根据目标子区域的位置信息以及第二深度图像,确定第二平均深度值,第二平均深度值为第二时刻时目标子区域的各个地点的深度值的平均值;以第二平均深度值与第一平均深度值之间的差值作为目标子区域的积水深度。
另一些实施例中,上述积水深度检测装置还包括:图像处理模块;该图像处理模块,用于根据第一彩色图像,对待检测区域进行道路分割,获取待检测区域中车辆行驶区域的位置信息;上述积水检测模块,具体用于根据待检测区域中车辆行驶区域的位置信息和第一彩色图像,对车辆行驶区域进行积水检测,获取车辆行驶区域中处于积水状态的目标子区域的位置信息。
另一些实施例中,上述积水深度检测装置还包括:图像处理模块;该图像处理模块,用于根据第一彩色图像,对待检测区域进行道路分割,获取待检测区域中车辆行驶区域的位置信息;所述积水检测模块,还用于根据车辆行驶区域的位置信息和目标子区域的位置信息,判断目标子区域是否位于车辆行驶区域内;上述深度检测模块,具体用于若目标子区域位于车辆行驶区域内,则根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域的积水深度。
另一些实施例中,上述图像获取模块,还用于获取待检测区域在第三时刻时的第三彩色图像和第三深度图像,第三时刻位于第二时刻之后;上述图像处理模块,还用于将第三彩色图像输入天气类别识别模型,确定第三彩色图像的天气类型,天气类别包括雨天或者非雨天;在第三彩色图像的天气类别为非雨天时,确定第三彩色图像与第二彩色图像之间的相似度,第二彩色图像为待检测区域在第二时刻时拍摄得到的彩色图像;在第三彩色图像与第二彩色图像之间的相似度小于预设阈值时,以第三深度图像更新第二深度图像。
另一些实施例中,上述积水深度检测装置还包括:通信模块;该通信模块,用于向终端设备发送目标子区域的积水深度。
另一些实施例中,上述积水深度检测装置还包括:数据处理模块和通信模块;上述数据处理模块,用于比较车辆支持的最大涉水深度以及目标子区域在第一时刻时的积水深度;上述通信模块,用于若车辆支持的最大涉水深度大于目标子区域在第一时刻时的积水深度,向车辆终端发送第一提示信息,第一提示信息用于表示车辆能够安全通过目标子区域;或者,若车辆支持的最大涉水深度小于或等于目标子区域在第一时刻时的积水深度,向车辆终端发送第二提示信息,第二提示信息用于警告目标子区域存在危险。
另一些实施例中,上述积水深度检测装置还包括:数据处理模块和通信模块。上述数据处理模块,用于对待检测区域进行车道识别,确定待检测区域中各个车道的位置信息;根据待检测区域中各个车道的位置信息以及目标子区域的位置信息,确定目标子区域所影响的车道;根据目标子区域所影响的车道以及目标子区域的积水深度,生成提示信息。上述通信模块,用于向车辆终端发送提示信息。
另一些实施例中,上述数据处理模块,具体用于比较车辆支持的最大涉水深度与目标子区域的积水深度;若车辆支持的最大涉水深度大于目标子区域的积水深度,生成第一提示信息,第一提示信息用于表示车辆能够安全通过目标子区域;若车辆支持的最大涉水深度小于或等于目标子区域的积水深度,根据目标子区域所影响的车道,判断待检测区域是否存在车辆能够通行的车道;若待检测区域存在车辆能够通行的车道,生成第三提示信息,第三提示信息用于指示车辆能够通行的车道;若待检测区域不存在车辆能够通行的车道,生成第四提示信息,第四提示信息用于提示用户修改驾驶路线。
另一些实施例中,上述图像获取模块,还用于在满足预设条件的情况下,获取待检测区域在第一时刻的第一彩色图像和第一深度图像;其中,预设条件包括:待检测区域位于车辆终端对应的行驶路线上。
另一些实施例中,上述预设条件还包括:车辆终端与拍摄装置之间的距离小于预设距离。
另一方面,提供一种积水深度检测装置,该装置包括存储器和处理器;存储器和处理器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令。其中,当处理器执行计算机指令时,使得该装置执行如上述任一实施例中所述的积水深度检测方法。
又一方面,提供一种非瞬态的计算机可读存储介质。所述计算机可读存储介质存储有计算机程序指令,所述计算机程序指令在处理器上运行时,使得所述处理器执行如上述任一实施例所述的积水深度检测方法中的一个或多个步骤。
又一方面,提供一种计算机程序产品。所述计算机程序产品包括计算机程序指令,在计算机上执行所述计算机程序指令时,所述计算机程序指令使计算机执行如上述任一实施例所述的积水深度检测方法中的一个或多个步骤。
又一方面,提供一种计算机程序。当所述计算机程序在计算机上执行时,所述计算机程序使计算机执行如上述任一实施例所述的积水深度检测方法中的一个或多个步骤。
附图说明
为了更清楚地说明本公开中的技术方案,下面将对本公开一些实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例的附图,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的附图。此外,以下描述中的附图可以视作示意图,并非对本公开实施例所涉及的产品的实际尺寸、方法的实际流程、信号的实际时序等的限制。
图1为根据一些实施例的积水深度检测系统的结构图;
图2为根据一些实施例的TOF摄像头的成像原理图;
图3为根据一些实施例的TOF摄像头获取深度值的原理图;
图4为根据一些实施例的积水深度检测系统的另一种结构图;
图5为根据一些实施例的计算装置的结构图;
图6为根据一些实施例的积水深度检测方法的流程图一;
图7为根据一些实施例的一种存在孔洞和噪声点的深度图像的样本图;
图8为根据一些实施例的一种深度图像修复系统的结构图;
图9为根据一些实施例的一种积水深度检测方法的应用场景图一;
图10为根据一些实施例的一种积水检测模型的应用场景图;
图11为根据一些实施例的一种积水检测模型的结构图;
图12为根据一些实施例的积水深度检测方法的流程图二;
图13为根据一些实施例的一种积水深度检测方法的应用场景图二;
图14为根据一些实施例的积水深度检测方法的流程图三;
图15为根据一些实施例的积水深度检测方法的流程图四;
图16为根据一些实施例的一种Deeplab v3+语义分割模型的示意图;
图17为根据一些实施例的积水深度检测方法的流程图五;
图18为根据一些实施例的目标子区域与车辆行驶区域的位置图;
图19为根据一些实施例的一种图像更新方法的流程图;
图20为根据一些实施例的残差网络ResNet中的一种模块图;
图21为根据一些实施例的积水深度检测方法的流程图六;
图22为根据一些实施例的积水深度检测方法的流程图七;
图23为根据一些实施例的积水深度检测方法的流程图八;
图24为根据一些实施例的积水深度检测方法的流程图九;
图25为根据一些实施例的目标子区域与车道的位置图一;
图26为根据一些实施例的目标子区域与车道的位置图二;
图27为根据一些实施例的一种积水深度检测装置的结构图。
具体实施方式
下面将结合附图,对本公开一些实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。基于本公开所提供的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本公开保护的范围。
除非上下文另有要求,否则,在整个说明书和权利要求书中,术语“包括(comprise)”及其其他形式例如第三人称单数形式“包括(comprises)”和现在分词形式“包括(comprising)”被解释为开放、包含的意思,即为“包含,但不限于”。在说明书的描述中,术语“一个实施例(one embodiment)”、“一些实施例(some embodiments)”、“示例性实施例(exemplary  embodiments)”、“示例(example)”、“特定示例(specific example)”或“一些示例(some examples)”等旨在表明与该实施例或示例相关的特定特征、结构、材料或特性包括在本公开的至少一个实施例或示例中。上述术语的示意性表示不一定是指同一实施例或示例。此外,所述的特定特征、结构、材料或特点可以以任何适当方式包括在任何一个或多个实施例或示例中。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本公开实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
“A、B和C中的至少一个”与“A、B或C中的至少一个”具有相同含义,均包括以下A、B和C的组合:仅A,仅B,仅C,A和B的组合,A和C的组合,B和C的组合,及A、B和C的组合。
“A和/或B”,包括以下三种组合:仅A,仅B,及A和B的组合。
如本文中所使用,根据上下文,术语“如果”任选地被解释为意思是“当……时”或“在……时”或“响应于确定”或“响应于检测到”。类似地,根据上下文,短语“如果确定……”或“如果检测到[所陈述的条件或事件]”任选地被解释为是指“在确定……时”或“响应于确定……”或“在检测到[所陈述的条件或事件]时”或“响应于检测到[所陈述的条件或事件]”。
本文中“适用于”或“被配置为”的使用意味着开放和包容性的语言,其不排除适用于或被配置为执行额外任务或步骤的设备。
另外,“基于”的使用意味着开放和包容性,因为“基于”一个或多个所述条件或值的过程、步骤、计算或其他动作在实践中可以基于额外条件或超出所述的值。
如本文所使用的那样,“约”或“近似”包括所阐述的值以及处于特定值的可接受偏差范围内的平均值,其中所述可接受偏差范围如由本领域普通技术人员考虑到正在讨论的测量以及与特定量的测量相关的误差(即,测量系统的局限性)所确定。
如背景技术所述,相关技术提供的积水深度检测方法通过将相关设备(例如水位尺或者积水检测仪等)放入到积水,以检测积水的深度。可见,在积水较深时,相关技术的积水深度检测方法由于需要将相关设备(例如水位尺或者积水检测仪等)与积水接触,存在一定的安全风险。
对此,本公开实施例提供一种积水深度检测方法,该方法通过具有深度摄像头的拍摄装置来获取一个区域处于未积水状态时的深度信息,以及该区 域处于积水状态时的深度信息,以确定该区域在积水前后的深度信息的变化情况,从而能够确定该区域的积水深度。可见,本公开实施例提供的积水深度检测方法无需与积水进行接触,从而可以避免用户在未知积水深度时的涉水风险。
另外,相比较于相关技术提供的积水深度检测方法可能由于水位尺或者积水检测仪放置位置不准确(也即未放置在积水区域的最低点)导致不能准确地确定积水最大深度,本公开实施例提供的积水深度检测方法由于可以准确地确定一个区域中各个点(例如最低点)在积水前后的深度信息的变化情况,因此可以准确地确定积水最大深度。
本公开实施例提供的积水深度检测方法可以应用于车辆辅助行驶、车辆自动驾驶、行人出行导航等场景,对此不作限定。
以积水深度检测方法应用于车辆辅助行驶或者车辆自动驾驶的场景为例,在服务器基于本公开实施例提供的积水深度检测方法,确定车辆当前行驶路线上的积水区域的积水深度之后,服务器可以将积水区域的积水深度发送给车辆终端。在积水深度大于车辆的最大涉水深度时,车辆终端可以发出报警信息,以提示用户绕开积水。进一步的,车辆终端可以根据当前行驶路线上各个区域的积水情况,重新规划行驶路线。
以积水深度检测方法应用于行人出行导航场景为例,在服务器基于本公开实施例提供的积水深度检测方法,确定用户所在城市的积水区域的积水深度之后,服务器可以将积水区域的积水深度发送给终端设备。用户在其终端设备上可以打开地图应用,并在地图应用的界面上查看目标区域(例如用户附近的区域)内的积水区域的积水深度。
如图1所示,本公开实施例提供了一种积水深度检测系统的示意图。该积水深度检测系统包括:服务器10和拍摄装置20。其中,服务器10和拍摄装置20之间可以通过有线或者无线的方式进行连接。
拍摄装置20可以设置于待检测区域附近。例如,以待检测区域为车辆行驶路段为例,拍摄装置可以安装于该车辆行驶路段附近的路灯、交通信号灯或者树木上。本公开实施例不限制拍摄装置20的具体安装方式以及具体安装位置。
拍摄装置20可用于拍摄待检测区域的彩色图像和深度图像。深度图像(depth image)是指将拍摄装置20到场景中各个点的深度值(在竖直方向上的距离)作为像素值的图像。
在一些实施例中,拍摄装置可以采用彩色摄像头来拍摄彩色图像。
示例性的,彩色摄像头可以为RGB摄像头。其中,RGB摄像头采用RGB色彩模式,通过红(red,R)、绿(green,G)、蓝(blue,B)三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色。通常,RGB摄像头由三根不同的线缆给出了三个基本彩色成分,用三个独立的电荷耦合器件(charge coupled device,CCD)传感器来获取三种彩色信号。
在一些实施例中,拍摄装置可以采用深度摄像头来拍摄深度图像。
示例性的,深度摄像头可以为飞行时间(time of flight,TOF)摄像头。TOF摄像头采用TOF技术,如图2所示,TOF摄像头的成像原理如下:根据激光光源发出经调制的脉冲红外光,遇到物体后反射,光源探测器接收经物体反射的光源,通过计算光源发射和反射的时间差或相位差,来换算TOF摄像头与被拍摄物体之间的距离,进而根据TOF摄像头与被拍摄物体之间的距离,得到场景中各个点的深度值。
示例性的,如图3所示,在检测场景中的M点的深度值时,首先以拍摄装置20为原点,以拍摄装置20的拍摄方向为Z轴,以拍摄装置20的垂直平面的两个轴向为X轴和Y轴,建立三维直角坐标系。根据发出光源与接收到M点发射回来的光源之间的时间差,计算拍摄装置20与M点之间的距离D。由于拍摄装置20在拍摄时采集M点与拍摄装置20之间的角度信息,因此可以根据M点与拍摄装置20之间的连线与Z轴之间的夹角θ,以及拍摄装置20与M点之间的距离D,计算M点的深度值。具体的,M点的深度值=Dcosθ。
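上述由时间差换算距离、再由距离与夹角换算深度值的过程,可以用如下极简的Python示意代码表达(仅用于说明原理,函数名与参数均为示例假设,并非专利原文的实现):

```python
import math

def tof_distance(t_emit, t_receive, c=3e8):
    # 根据光源发射与接收之间的时间差换算拍摄装置与被摄点M之间的距离D;
    # 光往返一次,故除以2(c为光速,单位m/s)
    return (t_receive - t_emit) * c / 2

def depth_value(D, theta):
    # M点的深度值 = D * cos(theta),
    # theta为M点与拍摄装置连线和Z轴(拍摄方向)之间的夹角(弧度)
    return D * math.cos(theta)
```

例如,往返时间差为20纳秒时,距离D约为3米;若该点与Z轴夹角为60°,则深度值为D的一半。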
服务器10用于获取拍摄装置20所拍摄到的图像,并基于拍摄装置20所拍摄到的图像,确定待检测区域中处于积水状态的子区域的积水深度。
在一些实施例中,服务器10可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络、大数据服务器等基础云计算服务的云服务器。
可选的,如图4所示,该积水深度检测系统还可以包括终端设备30。终端设备30和服务器10之间可以通过有线或者无线的方式进行连接。
终端设备30用于通过服务器10来获取积水深度检测的相关信息,并可以将积水深度检测的相关信息以语音、文字等方式展示给用户。
在一些实施例中,终端设备30可以是手机、平板电脑、桌面型、膝上型、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等。或者,终端设备30可以是车辆终端。车辆终端是用于车辆通信和管理的前端设备,可以安装在各种车辆内。
在一些实施例中,服务器10可以和拍摄装置20集成在一起。或者,服务器10可以和终端设备30集成在一起。
上述服务器10和终端设备30的基本硬件结构类似,都包括图5所示计算装置所包括的元件。下面以图5所示的计算装置为例,介绍服务器10和终端设备30的硬件结构。
如图5所示,计算装置可以包括处理器41,存储器42、通信接口43、总线44。处理器41,存储器42以及通信接口43之间可以通过总线44连接。
处理器41是计算装置的控制中心,可以是一个处理器,也可以是多个处理元件的统称。例如,处理器41可以是一个通用CPU,也可以是其他通用处理器等。其中,通用处理器可以是微处理器或者是任何常规的处理器等。
作为一种实施例,处理器41可以包括一个或多个CPU,例如图5中所示的CPU 0和CPU 1。
存储器42可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。
一种可能的实现方式中,存储器42可以独立于处理器41存在,存储器42可以通过总线44与处理器41相连接,用于存储指令或者程序代码。处理器41调用并执行存储器42中存储的指令或程序代码时,能够实现本公开下述实施例提供的积水深度检测方法。
在本公开实施例中,对于服务器10和终端设备30而言,存储器42 中存储的软件程序不同,所以服务器10和终端设备30实现的功能不同。关于各设备所执行的功能将结合下面的流程图进行描述。
另一种可能的实现方式中,存储器42也可以和处理器41集成在一起。
通信接口43,用于计算装置与其他设备通过通信网络连接,所述通信网络可以是以太网,无线接入网(radio access network,RAN),无线局域网(wireless local area networks,WLAN)等。通信接口43可以包括用于接收数据的接收单元,以及用于发送数据的发送单元。
总线44,可以是工业标准体系结构(industry standard architecture,ISA)总线、外部设备互连(peripheral component interconnect,PCI)总线或扩展工业标准体系结构(extended industry standard architecture,EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。为便于表示,图5中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
需要指出的是,图5中示出的结构并不构成对该计算装置的限定,除图5所示部件之外,该计算装置可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合说明书附图,对本公开提供的实施例进行具体介绍。
本公开实施例提供的积水深度检测方法可以由积水深度检测装置来执行,该积水深度检测装置可以为图1所示的积水深度检测系统中的服务器10,或者服务器10的处理器。下文中以积水深度检测装置为服务器进行举例说明。
如图6所示,本公开实施例提供了一种积水深度检测方法,该方法包括以下步骤:
S101、服务器通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像。
其中,待检测区域为需要进行积水深度检测的区域。例如,待检测区域可以为低洼路段、下穿式立交桥或隧道等容易产生积水的区域。
在一些实施例中,待检测区域可以由服务器来确定。例如,一个城市中的K个区域均安装有对应的拍摄装置,服务器可以将这K个区域均认为是待检测区域,K为正整数。
在另一些实施例中,待检测区域可以由用户以直接或间接的方式来确定。例如,在应用于车辆辅助驾驶或者车辆无人驾驶等与车辆相关的场景下, 一个城市中的K个区域均安装有对应的拍摄装置,K个区域中的M个区域位于用户驾驶的车辆的行驶路线上,则服务器可以选择这M个区域作为待检测区域。这样,服务器可以不用对M个区域之外的其他区域进行积水深度检测,以节省计算资源。
第一深度图像用于记录在第一时刻时待检测区域中各个地点的深度值。一个地点的深度值用于反映该地点与拍摄装置之间在竖直方向上的距离。
第一彩色图像用于反映在第一时刻时待检测区域的真实地貌。
应理解,对于拍摄装置来说,深度摄像头和彩色摄像头是对齐的。也即,对于待检测区域的目标地点来说,目标地点在第一深度图像中对应的像素点的坐标,与目标地点在第一彩色图像中对应的像素点的坐标是相同的。目标地点是待检测区域中的任一地点。
在实际拍摄的过程中,由于光线等不可控因素会导致深度摄像头在采集深度图像时,出现深度信息丢失的现象,使得深度图像中出现孔洞和噪声点。孔洞在深度图像中表现为白色区域中的一个个黑色区域,噪声点在深度图像中表现为一些引起较强视觉效果的孤立像素点或像素块。如图7所示,白色图像中的黑色区域即为孔洞,白色区域中影响视觉效果的像素点即为噪声点。
因此,可以对拍摄装置所拍摄的原始深度图像进行修复处理,以消除原始深度图像中的孔洞和噪声点。
作为一种可能的实现方式,将原始深度图像输入预先建立好的图像修复模型中进行修复处理,得到修复好的深度图像。示例性的,该图像修复模型可以基于U-Net网络来实现,本公开对此不作限定。
其中,U-Net是一种使用全卷积网络进行语义分割的算法。U-Net网络是一个全卷积网络。U-Net网络的结构为编码器-解码器结构,前半部分编码器采用下采样操作,提取出深度图像中的孔洞特征或噪声特征;后半部分解码器采用上采样操作,分割出深度图像中的孔洞区域或噪声区域。
参见图8,为本公开实施例提供的一种基于U-Net网络的图像修复模型的示意图。如图8所示,以修复深度图像中的孔洞为例,简述图像修复模型进行图像修复的操作步骤:
步骤a1、将原始深度图像输入图像修复模型中进行孔洞检测,在检测到原始深度图像中存在孔洞的情况下,提取出深度图像中孔洞区域的 掩膜。
步骤a2、将原始深度图像和深度图像中孔洞区域的掩膜进行图像拼接后,输入U-Net网络中。
步骤a3、U-Net网络从输入的图像中提取出孔洞特征,分割出深度图像中的孔洞区域,与取反后的深度图像中孔洞区域的掩膜进行逐点相乘的操作,得到孔洞区域的深度图像。
步骤a4、对孔洞区域的深度图像进行孔洞填充操作,并将填充后的深度图像与原始深度图像进行逐点相加的操作,得到修复好的深度图像。
如此,图像修复模型基于U-Net网络分割出孔洞区域或噪声区域后,可以有针对性的对孔洞区域或噪声区域进行修复。
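为直观说明"检测孔洞并填充"这一修复思路,下面给出一个不依赖神经网络的功能性近似:以0值像素代表孔洞,用其有效邻域像素的均值进行填充。该代码仅为示意,并非专利中基于U-Net的图像修复模型的实现:

```python
def repair_depth_image(depth, hole_value=0):
    # 深度图修复的极简示意:将取值为hole_value的像素视为"孔洞",
    # 用3x3邻域内的有效(非孔洞)像素均值填充;无有效邻域时保持原值。
    h, w = len(depth), len(depth[0])
    repaired = [row[:] for row in depth]
    for i in range(h):
        for j in range(w):
            if depth[i][j] == hole_value:
                neighbors = [depth[x][y]
                             for x in (i - 1, i, i + 1)
                             for y in (j - 1, j, j + 1)
                             if 0 <= x < h and 0 <= y < w
                             and depth[x][y] != hole_value]
                if neighbors:
                    repaired[i][j] = sum(neighbors) / len(neighbors)
    return repaired
```

实际系统中,基于U-Net的模型能利用全图语义信息得到远好于邻域均值的修复结果,此处仅演示"定位孔洞区域、针对性填充"的流程。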
可选的,上述第一深度图像可以是经过修复处理之后的深度图像,以保证从第一深度图像中获取到的深度值是准确的。
在一些实施例中,服务器在开启积水深度检测功能之后,执行本公开实施例提供的积水深度检测方法。相应的,若服务器关闭积水深度检测功能,则服务器不执行本公开实施例提供的积水深度检测方法。
一种可选的实现方式中,服务器默认开启积水深度检测功能。
另一种可选的实现方式中,服务器根据当前天气情况,判断是否开启积水深度检测功能。例如,在当前城市的天气为雨天时,服务器开启积水深度检测功能;或者,在当前城市的天气为非雨天时,服务器关闭积水深度检测功能。
又一种可选的实现方式中,服务器可以根据终端设备的指令,确定开启/关闭积水深度检测功能。
例如,在应用于车辆辅助驾驶或者车辆无人驾驶等与车辆相关的应用场景的情况下,用户在驾驶车辆的过程中,用户可以在车辆终端上进行第一操作以指示开启积水深度检测功能。响应于用户的第一操作,车辆终端向服务器发送用于指示开启积水深度检测功能的第一指令;服务器根据该第一指令,开启积水深度检测功能。
或者,用户可以在车辆终端上进行第二操作以指示关闭积水深度检测功能。响应于用户的第二操作,车辆终端向服务器发送用于指示关闭积水深度检测功能的第二指令;服务器根据该第二指令,关闭积水深度检测功能。
在一些实施例中,在满足预设条件的情况下,服务器通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像。
可选的,在应用于车辆辅助驾驶或者车辆无人驾驶等与车辆相关的应用场景的情况下,上述预设条件包括:待检测区域位于车辆终端对应的行驶路线上。这样,服务器仅需要对车辆终端的行驶路线上的各个区域进行积水深度检测,而无需对非行驶路线上的区域进行积水深度检测,有利于减少服务器的运算量。
进一步的,上述预设条件还可以包括车辆终端与待检测区域对应的拍摄装置之间的距离小于预设距离。
示例性的,若上述预设距离为300米,如图9所示,若服务器检测到车辆终端此时的行驶路线为:“前方道路直行1000米”,而待检测区域位于车辆终端前方300米处,那么此时服务器通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像。
如此,根据车辆终端的位置的行驶路线确定服务器进行积水深度检测的触发条件,使得车辆终端在到达积水检测区域之前即可获知该区域的积水深度,避免了在未知积水深度时的涉水风险。
可选的,服务器通过拍摄装置获取待检测区域在第一时刻的第一彩色图像和第一深度图像,可以具体实现为:服务器向拍摄装置发送拍摄指令,该拍摄指令用于指示拍摄装置拍摄待检测区域的深度图像和彩色图像;之后,服务器接收来自于拍摄装置的第一彩色图像和第一深度图像。
可选的,第一彩色图像和第一深度图像可以是拍摄装置在接收到拍摄指令之前拍摄的,也可以是拍摄装置在接收到拍摄指令之后拍摄的。
S102、服务器根据第一彩色图像,对待检测区域进行积水检测,获取待检测区域中处于积水状态的目标子区域的位置信息。
其中,目标子区域的位置信息用于指示目标子区域在第一彩色图像中对应的像素区域的坐标。
作为一种可能的实现方式中,如图10所示,服务器可以将第一彩色图像输入至积水区域识别模型中,以获得待检测区域中处于积水状态的目标子区域的位置信息。
如图10所示,根据目标子区域的上边界、下边界、左边界和右边界可以确定目标子区域在第一彩色图像中对应的像素区域的形状和坐标。其中,目标子区域在第一彩色图像中对应的像素区域的形状为矩形,目标子区域在第一彩色图像中对应的像素区域的坐标为(Xmin,Ymin,Xmax,Ymax),Xmin表示像素区域的横坐标的最小值,Xmax表示像素区域的横坐标的最大值,Ymin表示像素区域的纵坐标的最小值,Ymax 表示像素区域的纵坐标的最大值。
可选的,在将第一彩色图像输入至积水区域识别模型之前,服务器可以先将第一彩色图像的分辨率调整至预设分辨率,以适应积水区域识别模型的处理要求。
示例性的,积水区域识别模型可以是基于深度神经网络的对象识别和定位算法(you only look once,YOLO)中的YOLOv5-m网络。YOLOv5-m网络采用一个单独的卷积神经网络(convolutional neural network,CNN)模型,可以实现端到端(end-to-end)的目标检测。
如图11所示,YOLOv5-m网络结构包括:主干网络(Backbone)、颈部(Neck)和预测部分(Prediction)。
其中,主干网络即特征提取网络,包括切片结构(Focus)、卷积模块(Conv)、瓶颈层(C3)和空间金字塔池化(SPP);颈部利用类特征金字塔结构进行高层特征和低层特征融合,增强特征表示;预测部分采用多尺度预测形式,使网络适用于不同尺度的目标检测,具有更强的泛化能力。
示例性的,基于YOLOv5-m网络对第一彩色图像进行积水区域识别的过程如下:
步骤b1、将第一彩色图像的分辨率调整至预设分辨率640*640,输入用于识别积水区域的YOLOv5-m网络中。
步骤b2、通过YOLOv5-m网络中的主干网络,从第一彩色图像中提取出各个像素点的位置和像素,通过分析每个像素点与周围像素点的关系,从第一彩色图像中提取出积水区域的特征信息。
步骤b3、通过YOLOv5-m网络中的颈部,对积水区域的高层特征和底层特征进行特征融合,增强积水区域的特征信息。
应理解,由于底层特征具有分辨率高、位置信息和细节信息较多的优点,但是经过的卷积操作较少,语义性低,噪声多;高层特征具有语义性高的优点,但是分辨率低,对细节的感知能力较差。因此,通过将底层特征和高层特征进行特征融合,可以实现优势特征的整合,增强积水区域的特征信息。
步骤b4、通过YOLOv5-m网络中的预测部分,根据颈部输出的积水区域的特征信息,对第一彩色图像进行多尺度的积水区域的检测,输出处于积水状态的目标子区域的位置信息。
其中,上述多尺度的积水区域检测是指,将第一彩色图像进行不同 尺度的缩放,得到图像金字塔,对每一层的图像提取不同尺度的积水区域的特征信息,进行积水区域的检测。如此,可以提高积水区域识别的精度。
S103、服务器根据目标子区域的位置信息、第一深度图像以及预先存储的第二深度图像,确定目标子区域在第一时刻的积水深度。
其中,第二深度图像用于记录在第二时刻时待检测区域中各个地点的深度值。第二时刻为待检测区域处于未积水状态的时刻。第二时刻位于第一时刻之前。
应理解,对于待检测区域中的目标地点来说,目标地点在第一深度图像中对应的像素点的坐标,与目标地点在第二深度图像中对应的像素点的坐标是相同的。目标地点是待检测区域中的任一地点。
在一些实施例中,第二深度图像可以预先存储在服务器的数据库中,或者存储在其他设备的数据库中。
基于图6所示的实施例,至少带来以下有益效果:通过拍摄装置来获取一个区域处于未积水状态时的深度信息,以及该区域处于积水状态时的深度信息,以确定该区域在积水前后的深度信息的变化情况,从而能够确定该区域的积水深度。本公开实施例提供的积水深度检测方法无需与积水进行接触,从而可以避免用户在未知积水深度时的涉水风险。
在一些实施例中,如图12所示,步骤S103可以具体实现为以下步骤:
S1031a、服务器根据目标子区域的位置信息和第一深度图像,确定第一深度值。
其中,第一深度值为第一时刻时目标子区域的水面的深度值。
在目标子区域处于积水状态时,由于水面保持水平,因此目标子区域中的任一点的深度值,均可以认为是目标子区域的水面的深度值。因此,可以将目标子区域中任一地点在第一深度图像中的深度值作为第一深度值。
作为一种可能的实现方式,由于目标子区域的位置信息包括目标子区域的边界点的坐标,因此服务器可以直接根据目标子区域的边界点的坐标,从第一深度图像中提取出目标子区域的边界点的深度值;服务器可以以目标子区域的边界点的深度值作为第一深度值。
S1032a、服务器根据目标子区域的位置信息和第二深度图像,确定第二深度值。
其中,第二深度值为第二时刻时目标子区域的最低点的深度值。应理解,最低点为地势最低点,也即海拔高度最小的地点。
可选的,步骤S1032a可以采用以下实现方式中的任意一种:
实现方式一、服务器根据目标子区域的位置信息,从第二深度图像中提取出目标子区域中各个地点的深度值。由于地点的深度值反映了地点与拍摄装置之间在竖直方向上的距离,因此地点的深度值越大说明该地点与拍摄装置在竖直方向上的距离越大,该地点的海拔也越低。因此,服务器可以比较目标子区域的各个地点的深度值,以深度值最大的地点作为目标子区域中的最低点。相应的,服务器以目标子区域的各个地点的深度值中最大的深度值作为第二深度值。
应理解,实现方式一的优点在于:操作简单,计算量小。
实现方式二,服务器根据目标子区域的位置信息,从第二深度图像中提取出目标子区域的各个地点的三维坐标(x,y,depth)。之后,服务器基于目标子区域的各个地点的三维坐标进行曲面拟合,以得到目标子区域对应的曲面,该曲面用于反映目标子区域的三维地形。之后,服务器从拟合出的曲面中提取出目标子区域的最低点的深度值(也即第二深度值)。
可选的,上述用于曲面拟合的算法可以采用最小二乘法。
应理解,实现方式二的优点在于:可以避免第二深度图像中噪声点引起的地形局部毛刺现象,以准确地确定第二深度值。
步骤S1032a还可以采用其他实现方式,本公开实施例对此不作限定。
本公开实施例不限制步骤S1031a和步骤S1032a之间的执行顺序。例如,可以先执行步骤S1031a,再执行步骤S1032a;或者,先执行步骤S1032a,再执行步骤S1031a;又或者,同时执行步骤S1031a和步骤S1032a。
S1033a、服务器以第二深度值与第一深度值之间的差值作为目标子区域在第一时刻时的积水深度。
也即,目标子区域在第一时刻时的积水深度等于第二深度值减去第一深度值。
示例性的,如图13所示,服务器可以根据第二深度图像以及目标子区域的位置信息,确定目标子区域中最低点A的深度值为300cm;另外,服务器还可以根据第一深度图像以及目标子区域的位置信息,确定目标 子区域的边界点B的深度值为260cm。从而,基于最低点A的深度值为300cm,以及边界点B的深度值260cm,可以确定目标子区域的积水深度即为300cm-260cm=40cm。
应理解,由于上述第一深度值为第一时刻目标子区域的水面的深度值,上述第二深度值为第二时刻时目标子区域的地势最低点的深度值,因此图12所示的实施例所确定的积水深度为最大积水深度。
由于在实际应用中,在判断是否能够安全通过积水区域时,通常以最大积水深度作为判断标准,因此本公开实施例在测量目标子区域的积水深度时,测量的是目标子区域的最大积水深度。
在另一些实施例中,如图14所示,步骤S103可以具体实现为以下步骤:
S1031b、服务器根据目标子区域的位置信息以及第一深度图像,确定第一平均深度值。
其中,第一平均深度值为第一时刻时目标子区域的各个地点的深度值的平均值。
作为一种可能的实现方式,服务器根据目标子区域的位置信息,从第一深度图像中提取出第一时刻时目标子区域的各个地点的深度值。之后,服务器对第一时刻时目标子区域的各个地点的深度值进行求取平均值的计算,得到第一平均深度值。
作为另一种可能的实现方式,由于处于积水状态的目标子区域的各个地点的深度值在理论上是相同的,而目标子区域的位置信息包括目标子区域的边界点的坐标,因此服务器可以根据目标子区域的边界点的坐标,从第一深度图像中提取出目标子区域的边界点的深度值。进而,服务器可以将目标子区域的边界点的深度值直接作为第一平均深度值。基于该实现方式,可以简化运算过程,降低计算量。
S1032b、服务器根据目标子区域的位置信息以及第二深度图像,确定第二平均深度值。
其中,第二平均深度值即为第二时刻时目标子区域的各个地点的深度值的平均值。
作为一种可能的实现方式,服务器根据目标子区域的位置信息,从第二深度图像中提取出第二时刻时目标子区域的各个地点的深度值。之后,服务器对第二时刻时目标子区域的各个地点的深度值进行求取平均值的计算,得到第二平均深度值。
S1033b、服务器以第二平均深度值与第一平均深度值之间的差值作为目标子区域在第一时刻时的积水深度。
也即,目标子区域在第一时刻时的积水深度等于第二平均深度值减去第一平均深度值。
应理解,由于上述第一平均深度值和第二平均深度值均为不同时刻时目标子区域的各个地点的深度值的平均值,因此图14所示的实施例所确定的积水深度为平均积水深度。
在实际的应用中,目标子区域处于积水状态时,积水水面并不一定为平静的水面,例如,风雨或其他车辆的行驶会在积水水面产生水花或浪花,使得测得的积水最大深度不够准确。因此,本公开实施例以平均积水深度作为目标子区域的积水深度,能够提高积水深度检测的准确性。
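图12与图14所示的两种积水深度计算方式,可以用如下Python示意代码概括(深度图以二维数组表示,bbox为目标子区域的像素坐标(Xmin,Ymin,Xmax,Ymax);仅为原理示例,非专利原文实现):

```python
def max_water_depth(depth_t1, depth_t2, bbox):
    # 最大积水深度:第二深度值(未积水时目标子区域内的最大深度值,即最低点)
    # 减去第一深度值(积水时水面的深度值,取目标子区域的一个边界点)
    xmin, ymin, xmax, ymax = bbox
    d1 = depth_t1[ymin][xmin]          # 水面保持水平,任一边界点即可
    d2 = max(depth_t2[y][x]
             for y in range(ymin, ymax + 1)
             for x in range(xmin, xmax + 1))
    return d2 - d1

def mean_water_depth(depth_t1, depth_t2, bbox):
    # 平均积水深度:两时刻目标子区域内各地点深度值平均值之差
    xmin, ymin, xmax, ymax = bbox
    pts = [(x, y) for y in range(ymin, ymax + 1)
                  for x in range(xmin, xmax + 1)]
    m1 = sum(depth_t1[y][x] for x, y in pts) / len(pts)
    m2 = sum(depth_t2[y][x] for x, y in pts) / len(pts)
    return m2 - m1
```

以图13的数值为例:未积水时最低点深度值300cm、积水时水面深度值260cm,最大积水深度即为40cm。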
在一些实施例中,在应用于车辆辅助驾驶或者车辆无人驾驶等与车辆相关的应用场景的情况下,基于图6所示的实施例,如图15所示,步骤S102可以具体实现为:
S1021、服务器根据第一彩色图像,对待检测区域进行道路分割,获取待检测区域中车辆行驶区域的位置信息。
其中,车辆行驶区域的位置信息是指车辆行驶区域在第一彩色图像中的像素坐标。
由于第一彩色图像记录了待检测区域所有路段的情况,而车辆终端仅可以行驶在车辆行驶区域,因此需要对待检测区域进行道路分割,获取车辆行驶区域的位置信息。
作为一种可能的实现方式,将第一彩色图像输入至道路分割模型,得到待检测区域中车辆行驶区域的位置信息。
示例性的,道路分割模型可以基于Deeplab v3+语义分割算法来构建。图16示出一种Deeplab v3+语义分割模型的示意图,下面结合图16具体介绍道路分割的处理过程。
步骤c1、将第一彩色图像输入Deeplab v3+语义分割模型中。
其中,Deeplab v3+语义分割模型由编码器(encoder)和解码器(decoder)组成。
步骤c2、将第一彩色图像通过编码器中的膨胀卷积(dilated convolution,DCNN)模块,以获得经过DCNN模块处理后的第一彩色图像。
DCNN模块,通过定义卷积和当中穿插的rate-1个0的个数,将模 块内三个卷积层均设置为不同的膨胀率,实现对原始数据采样间隔变大。
步骤c2的目的在于:增大卷积核的感受野,减少第一彩色图像尺寸的损失。
步骤c3、将经过DCNN模块处理后的第一彩色图像输入编码器中的空间金字塔池(atrous spatial pyramid pooling,ASPP)模块,以将第一彩色图像进行不同尺度的缩放,得到图像金字塔,实现多尺度特征提取,得到固定大小的特征向量。
ASPP模块包括:1×1卷积(Conv)、膨胀率为6的3×3 Conv、膨胀率为12的3×3 Conv、膨胀率为18的3×3 Conv和Image Pooling。
在一些实施例中,在ASPP模块中应用深度可分离卷积。深度可分离卷积是指,对于不同的输入通道采取不同的卷积核进行卷积,将普通的卷积操作分解为深度卷积和点向卷积两个过程,能够提高卷积操作的效率,提升卷积效果。
步骤c4、将经过ASPP模块处理得到的第一彩色图像及第一彩色图像的特征向量输入解码器,得到第一彩色图像中车辆行驶区域的分割结果。
其中,解码器,采用底层特征(low-level features)提取模块获取第一彩色图像中车辆行驶区域的细节信息,例如边缘(edge)、角(corner)、颜色(color)、像素(pixels)、梯度(gradients)等,来恢复车辆行驶区域的边界信息,进而通过上采样(upsampling)得到沿着车辆行驶区域边界的分割结果。
在一些实施例中,在解码器中应用深度可分离卷积,从而提高解码器的卷积操作的效率,提升卷积效果。
S1022、服务器根据待检测区域中车辆行驶区域的位置信息和第一彩色图像,对车辆行驶区域进行积水检测,获取车辆行驶区域中处于积水状态的目标子区域的位置信息。
在一些实施例中,采用积水检测模型进行积水检测。具体的,将第一彩色图像以及待检测区域中车辆行驶区域的位置信息输入积水检测模型,输出车辆行驶区域中处于积水状态的目标子区域的位置坐标。
基于图15所示的实施例,至少可以带来以下有益效果:对待检测区域进行道路分割,得到车辆行驶区域的位置信息之后,再对车辆行驶区域进行积水检测,可以直接检测出车辆行驶区域是否存在积水以及处于积水状态的目标子区域的位置坐标,提高积水检测的效率。此外,相较于直接在第一彩色 图像上进而积水检测,图15所提供的积水检测方法,仅对车辆行驶区域进行检测,减小了输入的图像的数据,节省了算力资源,同时省去了判断积水是否在车辆行驶区域的步骤,提高了积水检测的效率。
在另一些实施例中,在应用于车辆辅助驾驶或者车辆无人驾驶等与车辆相关的应用场景的情况下,基于图6所示的实施例,如图17所示,该积水深度检测方法在步骤S103之前还包括步骤:
S104、服务器根据第一彩色图像,对待检测区域进行道路分割,获取待检测区域中车辆行驶区域的位置信息。
其中,步骤S104的具体实现方式可以参考上述步骤S1021的描述,在此不再赘述。
S105、服务器根据车辆行驶区域的位置信息和目标子区域的位置信息,判断目标子区域是否位于车辆行驶区域内。
在本公开实施例中,目标子区域全部位于车辆行驶区域内的情况,以及目标子区域部分位于车辆行驶区域内的情况,均视为目标子区域位于车辆行驶区域内。
示例性的,如图18所示,以第一彩色图像的左上角为原点,以与左上角连接的宽边为X轴,以与左上角连接的长边为Y轴,建立直角坐标系,根据步骤S102可以得到目标子区域的位置信息(Xmin1,Ymin1,Xmax1,Ymax1),根据步骤S104可以得到车辆行驶区域的位置信息(Xmin2,Ymin2,Xmax2,Ymax2)。当Xmin1>Xmin2,Ymin1>Ymin2,Xmax1<Xmax2,Ymax1<Ymax2时,也即出现如图18中的(a)所示的情况时,确定目标子区域完全位于车辆行驶区域内;或者,当Xmin1<Xmin2,Ymin1<Ymin2,且Xmin2<Xmax1<Xmax2,Ymin2<Ymax1<Ymax2时,也即两个区域存在部分交集,出现如图18中的(b)所示的情况时,确定目标子区域部分位于车辆行驶区域内。当Xmax1<Xmin2,或Ymax1<Ymin2,或Xmin1>Xmax2,或Ymin1>Ymax2时,也即两个区域不存在交集,出现如图18中的(c)所示的情况时,确定目标子区域未位于车辆行驶区域内。
应理解,上述关于判断目标子区域是否位于车辆行驶区域内的描述,仅为示例。
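作为示意,上述位置关系的判断可以按通用的矩形相交规则实现(坐标格式均为(Xmin,Ymin,Xmax,Ymax);仅为示例实现,非专利原文内容):

```python
def region_relation(target, driving):
    # 判断目标子区域(积水框)与车辆行驶区域的位置关系:
    # 先排除无交集的情况,再区分完全包含与部分重叠
    tx1, ty1, tx2, ty2 = target
    dx1, dy1, dx2, dy2 = driving
    if tx2 < dx1 or dx2 < tx1 or ty2 < dy1 or dy2 < ty1:
        return "outside"   # 无交集:目标子区域未位于车辆行驶区域内
    if tx1 >= dx1 and ty1 >= dy1 and tx2 <= dx2 and ty2 <= dy2:
        return "inside"    # 完全位于车辆行驶区域内
    return "partial"       # 部分位于车辆行驶区域内
```

按本公开实施例的约定,返回"inside"或"partial"均视为目标子区域位于车辆行驶区域内,继而执行步骤S103。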
在一些实施例中,在目标子区域位于车辆行驶区域的情况下,执行步骤S103。
基于图17所示实施例,至少可以带来以下有益效果:在应用于车辆行驶相关的场景下,服务器可以仅对车辆行驶区域中的积水区域进行积水深度检测,而无需对非车辆行驶区域中的积水区域进行积水深度检测,从而可以节 省服务器的计算资源。
在实际的应用中,由于一些无法预料的施工、损坏等原因,可能会导致待检测区域的地势发生变化,因此,本公开实施例还提供了一种第二深度图像的更新方法,以使得数据库中存储能够反映待检测区域当前地势情况的深度图像。如图19所示,该方法包括以下步骤:
S201、服务器通过拍摄装置获取待检测区域在第三时刻时的第三彩色图像和第三深度图像。
其中,第三时刻位于第二时刻之后。
第三深度图像用于记录在第三时刻时待检测区域中各个地点的深度值。第三彩色图像用于反映在第三时刻时待检测区域的真实地貌。
S202、服务器将第三彩色图像输入天气类别识别模型,确定第三彩色图像的天气类型。
其中,上述天气类别包括雨天或者非雨天。
在一些实施例中,上述天气类别识别模型可以为二分类的分类器,二分类是指识别雨天和非雨天这两类天气的分类器。分类器是数据挖掘中对样本进行分类的方法,包括:决策树、逻辑回归、朴素贝叶斯、神经网络等算法。
可选的,可以基于朴素贝叶斯分类器,来识别雨天和非雨天。其中,朴素贝叶斯分类器是一种基于贝叶斯公式的概率网络。可选的,贝叶斯公式满足以下公式(1):
P(ω|x) = P(x|ω)·P(ω) / P(x)        (1)
其中,P(ω)代表ω拥有的初始概率,即ω的先验概率,反映了关于ω为正确假设的机会的背景知识。P(x)代表要观察的集合x的全概率,即在没有确定某一假设成立时x的概率,P(x|ω)代表假设ω成立的情形下观察到集合x的概率,即条件概率,P(ω|x)代表给定集合x时ω成立的概率,即ω的后验概率,反映了在看到集合x后ω成立的置信度。
将朴素贝叶斯分类器进行天气类别识别的训练,可以得到朴素贝叶斯分类器的天气类别识别公式。该天气类别识别公式可以满足以下公式(2):
ω(x)_map = argmax_{ω_n} P(ω_n)·∏_j P(x_j|ω_n)        (2)
其中,ω(x)_map代表第三彩色图像x中的天气类别属于雨天或非雨天的最大可能性,x_j代表第三彩色图像x的第j个属性,P(ω_n)满足以下公式(3),P(x_j|ω_n)满足以下公式(4):
P(ω_n) = (1/n)·∑_{i=1}^{n} δ(ω_i, ω_n)        (3)
其中,n为要素集合中要素因子的总个数,ω_i为要素集合中的第i个要素因子。上述要素因子为与第三彩色图像识别密切相关的一些要素,例如饱和度、色相、亮度等,对该要素因子进行合理的分类和数据挖掘,可以得到显著的相关关系。
P(x_j|ω_n) = ∑_{i=1}^{n} x_ij·δ(ω_i, ω_n) / ∑_{i=1}^{n} δ(ω_i, ω_n)        (4)
其中,x_ij为第三彩色图像x的第i个要素因子的第j个属性,要素因子的属性即为对该要素因子进行的数据挖掘,例如饱和度均值、色相均值、亮度均值等。δ(ω_i, ω_n)代表一个二值函数,当ω_i=ω_n时为1,否则为0。
可选的,上述对朴素贝叶斯分类器进行天气类别识别的训练,可以采用残差网络(residual neural network,ResNet)来实现。
如图20所示,ResNet是指,在神经网络中加入直连通道,将输入的原始图像信息(图20中的输入信息为x)直接传送到在后的堆积层,如此,在后的网络层可以不用学习在前网络层输出的整个图像信息,直接学习在前网络层输出的残差(图20中的残差为F(x))即可。这样一来,通过ResNet可以加快训练过程,提升天气类别识别的准确率。
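作为对"雨天/非雨天"二分类思路的示意,下面给出一个离散属性的极简朴素贝叶斯分类器:先验取类别频率,条件概率取类内属性取值频率(加1平滑)。该代码仅为原理示例(属性取值、平滑方式均为假设),并非专利中分类器或ResNet训练过程的实现:

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples, labels):
    # samples为属性元组列表(如(亮度档, 湿度档)),labels为类别列表
    n = len(labels)
    priors = {c: cnt / n for c, cnt in Counter(labels).items()}
    cond = defaultdict(Counter)       # (类别, 属性下标) -> 属性取值计数
    for x, c in zip(samples, labels):
        for j, v in enumerate(x):
            cond[(c, j)][v] += 1
    return priors, cond

def predict(priors, cond, x):
    # 按后验概率(先验 × 各属性条件概率之积)取最大的类别
    best, best_p = None, -1.0
    for c, p in priors.items():
        for j, v in enumerate(x):
            counter = cond[(c, j)]
            total = sum(counter.values())
            p *= (counter[v] + 1) / (total + len(counter) + 1)  # 拉普拉斯平滑
        if p > best_p:
            best, best_p = c, p
    return best
```

实际系统中,属性来自对彩色图像做数据挖掘得到的饱和度均值、色相均值、亮度均值等要素。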
在第三彩色图像的天气类型为雨天时,待检测区域在第三时刻时可能存在积水的情况,此时第三深度图像不能用于反映待检测区域在未积水状态时的地势情况,因此不用考虑以第三深度图像来更新第二深度图像。
在第三彩色图像的天气类型为非雨天时,待检测区域在第三时刻时一般不存在积水的情况,此时第三深度图像可以用于反映待检测区域在未积水状态时的地势情况,因此可以考虑以第三深度图像来更新第二深度图像。基于此,服务器接下来执行步骤S203。
S203、在第三彩色图像的天气类型为非雨天时,服务器确定第三彩色图像与第二彩色图像之间的相似度。
其中,第二彩色图像为拍摄装置在第二时刻时拍摄的待检测区域的彩色图像。第二彩色图像可以用于反映第二时刻时待检测区域的真实地貌。
第二彩色图像可以预先存储在服务器的数据库中,或者其他设备的数据库中。
作为一种可能的实现方式,服务器可以采用模板匹配法中的归一化相关系数(normalized correlation,NC)匹配算法,确定第三彩色图像与第二彩色图像之间的相似度。
模板匹配法是指,给出一个模板图像和一个匹配图像,在匹配图像中找到与模板图像最为相似的部分。具体实现过程为,让模板图像在匹配图像上滑动,以像素点为单位,计算每一个位置的相似度,最终得到模板图像与匹配图像的最大相似度。
在本公开实施例中,模板图像为第二彩色图像,匹配图像为第三彩色图像。采用模板匹配法中的归一化相关系数匹配算法,确定第二彩色图像和第三彩色图像之间的相似度的实现过程为:
通过计算第二彩色图像与第三彩色图像之间的相关系数,来确定第二彩色图像和第三彩色图像之间的相似度。示例性的,归一化相关系数匹配算法可以满足以下公式(5):
R(x,y) = ∑_{x',y'} [T(x',y')·I(x+x',y+y')] / sqrt( ∑_{x',y'} T(x',y')² · ∑_{x',y'} I(x+x',y+y')² )        (5)
其中,(x,y)代表图像中像素点的位置坐标,T(x,y)代表第二彩色图像中的像素点,I(x,y)代表第三彩色图像中的像素点,R(x,y)代表第二彩色图像与第三彩色图像之间的相似度。
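对于两幅同尺寸的图像(模板不滑动的退化情形),归一化相关可以简化为如下Python示意实现(逐像素去均值后做归一化相关,结果越接近1表示越相似;仅为原理示例,非专利原文实现):

```python
def similarity_ncc(img_a, img_b):
    # 计算两幅同尺寸灰度图像的归一化相关系数,取值范围[-1, 1]
    pix_a = [p for row in img_a for p in row]
    pix_b = [p for row in img_b for p in row]
    ma = sum(pix_a) / len(pix_a)
    mb = sum(pix_b) / len(pix_b)
    da = [p - ma for p in pix_a]          # 去均值
    db = [p - mb for p in pix_b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 1.0      # 两图均为常数图时视为相同
```

在本公开实施例中,将该相似度与预设阈值(例如0.9)比较,即可决定是否以第三深度图像更新第二深度图像。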
S204、服务器判断第三彩色图像与第二彩色图像之间的相似度是否小于或等于预设阈值。
S205、在第三彩色图像与第二彩色图像之间的相似度小于或等于预设阈值时,服务器以第三深度图像更新第二深度图像。
示例性的,上述预设阈值可以为0.9。预设阈值可以根据实际情况来确定,本公开实施例对此不作限定。
应理解,在第三彩色图像与第二彩色图像之间的相似度大于预设阈 值时,说明待检测区域未发生显著变化,预先存储的第二深度图像仍然可以反映待检测区域的实际地势情况,因此无需更新第二深度图像。但是,在第三彩色图像与第二彩色图像之间的相似度小于或等于预设阈值时,说明待检测区域发生了显著的变化,预先存储的第二深度图像已经不能反映待检测区域的实际地势情况,因此需要以第三深度图像更新第二深度图像。
可选的,服务器以第三深度图像更新第二深度图像,可以具体实现为:从数据库中删除第二深度图像,并在数据库中存储第三深度图像。这样,在后续的积水深度检测流程中,第三深度图像可以起到原先第二深度图像所起到的作用。
另外,在以第三深度图像更新第二深度图像之外,服务器还以第三彩色图像更新第二彩色图像。
基于图19所示的实施例,至少带来以下有益效果:通过拍摄装置获取待检测区域在第三时刻时的第三彩色图像和第三深度图像,并采用天气类别识别模型对第三彩色图像进行天气类别的识别,确保第三彩色图像中的天气为非雨天,可以排除天气原因对图像相似度计算的干扰。此外,通过计算第三深度图像与第二深度图像之间的相似度,来确定待检测区域的地势情况是否发生变化,进而确定是否以第三深度图像更新第二深度图像。如此,一方面,通过计算图像之间的相似度即可了解待检测区域的地势情况,无需用户实地检测,具有实用性;另一方面,在第三深度图像与第二深度图像之间的相似度低于预设阈值时,将第二深度图像删除,以第三深度图像代替第二深度图像,能够减少存储空间,并且确保预先存储的深度图像能够准确的反映当前时刻待检测区域的地势情况。
应理解,本申请实施例提供的积水深度检测方法所确定出的积水深度可以应用于各种场景。下面对该积水深度检测方法所确定出的积水深度应用于车辆辅助驾驶或者车辆无人驾驶等与车辆相关的应用场景进行示例性说明。
在一些实施例中,如图21所示,该积水深度检测方法还可以包括以下步骤:
Sa1、服务器获取车辆支持的最大涉水深度。
其中,上述车辆支持的最大涉水深度取决于车辆终端的型号。例如:轮胎高度、底盘高度、门框离地高度、排气管离地高度等。
示例性的,以车辆终端的轮胎高度为例,车辆支持的最大涉水深度为轮胎高度的三分之二,也即,在积水深度大于或等于轮胎高度的三分之二时,确定车辆终端不能安全涉水。
Sa2、服务器比较车辆支持的最大涉水深度以及目标子区域在第一时刻时的积水深度。
Sa3、若车辆支持的最大涉水深度大于目标子区域在第一时刻时的积水深度,服务器向车辆终端发送第一提示信息。
其中,上述第一提示信息用于表示车辆能够安全通过目标子区域。
示例性的,若车辆终端的轮胎高度为90cm,那么车辆支持的最大涉水深度为60cm。若目标子区域在第一时刻时的积水深度为50cm,那么车辆支持的最大涉水深度60cm大于目标子区域在第一时刻时的积水深度50cm,则确定车辆终端能够安全涉水,向车辆终端发送第一提示信息。
在一些实施例中,车辆终端在接收到第一提示信息之后,可以向驾驶人员发出第一提示信息。
Sa4、若车辆支持的最大涉水深度小于或等于目标子区域在第一时刻时的积水深度,服务器向车辆终端发送第二提示信息。
其中,上述第二提示信息用于警告目标子区域存在危险。
示例性的,若车辆终端的轮胎高度为90cm,那么车辆支持的最大涉水深度为60cm。若目标子区域在第一时刻时的积水深度为70cm,那么车辆支持的最大涉水深度60cm小于目标子区域在第一时刻时的积水深度70cm,则确定车辆终端不能够安全涉水,向车辆终端发送第二提示信息。
可选的,服务器向车辆终端发送的提示信息(例如第一提示信息或者第二提示信息)可以为语音提示信息或文字提示信息。
在一些实施例中,在车辆由驾驶人员驾驶的情况下,车辆终端在接收到第二提示信息之后,会向驾驶人员发出第二提示信息。驾驶人员可以根据第二提示信息的提示,提前绕道,以避免驾驶人员将车辆行驶入较为危险的积水区域。
在另一些实施例中,在车辆自动驾驶的情况下,车辆终端在接收到第二提示信息之后,自动控制车辆提前绕道,以避开较为危险的积水区域。
图21所示的实施例至少带来以下有益效果:根据车辆支持的最大涉水深度和积水区域的积水深度,判断车辆终端是否能够安全涉水,并在 不能够安全涉水的情况下及时通知驾驶人员,能够有效提升车辆驾驶的安全性,减少安全事故的发生。
另外,在图21所示的实施例中,将判断车辆终端是否能够安全涉水的判断过程均交由服务器来实现,将判断结果以提示信息的形式发送给车辆终端,减小了车辆终端的计算量,提高了积水深度检测方法在不同型号的车辆终端中的适用性。
在一些实施例中,如图22所示,该积水深度检测方法还可以包括以下步骤:
Sb1、服务器向车辆终端发送目标子区域在第一时刻时的积水深度。
Sb2、车辆终端比较车辆支持的最大涉水深度以及目标子区域在第一时刻时的积水深度。
Sb3、若车辆支持的最大涉水深度大于目标子区域在第一时刻时的积水深度,车辆终端发出第一提示信息。
Sb4、若车辆支持的最大涉水深度小于或等于目标子区域在第一时刻时的积水深度,车辆终端发出第二提示信息。
图22所示的实施例至少带来以下有益效果:根据车辆支持的最大涉水深度和积水区域的积水深度,判断车辆终端是否能够安全涉水,并在不能够安全涉水的情况下及时通知驾驶人员,能够有效提升车辆驾驶的安全性,减少安全事故的发生。
另外,在图22所示的实施例中,将判断车辆终端是否能够安全涉水的判断过程交由车辆终端来实现,能够降低服务器的计算量。
在一些实施例中,如图23所示,该积水深度检测方法还可以包括以下步骤:
Sc1、服务器对待检测区域进行车道识别,确定待检测区域中各个车道的位置信息。
作为一种可能的实现方式,服务器根据第一彩色图像,对待检测区域进行车道线识别,得到车道线的结构特征。示例性的,车道线的结构特征包括:直线型车道线、虚线型车道线以及双曲线型车道线等。之后,服务器根据车道线的结构特征,对待检测区域进行车道识别,得到待检测区域的车道数,以及各个车道与车道线的相对位置关系。服务器根据待检测区域的车道数,以及各个车道与车道线的相对位置关系,确定待检测区域中各个车道的位置信息。
以图25为例,基于对待检测区域进行车道识别,服务器可以获知待 检测区域存在3个车道,并且能够确定3个车道的具体位置。
Sc2、服务器根据待检测区域中各个车道的位置信息以及目标子区域的位置信息,确定目标子区域所影响的车道。
在一些实施例中,对于待检测区域中的任一个车道,若该车道包括目标子区域的部分或者全部,则该车道可以被认为是目标子区域所影响的车道。
在另一些实施例中,对于待检测区域中的任一个车道,若目标子区域在该车道上的部分满足预设条件,则该车道可以被认为是目标子区域所影响的车道。
示例性的,预设条件可以包括以下一项或者多项:
条件1、目标子区域在该车道上的部分的宽度大于预设值。
条件2、目标子区域在该车道上的部分的宽度与车道的宽度之间的比值大于预设比值。
Sc3、服务器根据目标子区域所影响的车道以及目标子区域的积水深度,向车辆终端发送提示信息。
可选的,如图24所示,步骤Sc3可以具体实现为以下步骤:
Sc31、服务器获取车辆支持的最大涉水深度。
Sc32、服务器比较车辆支持的最大涉水深度与目标子区域的积水深度。
Sc33、若车辆支持的最大涉水深度大于目标子区域在第一时刻时的积水深度,服务器向车辆终端发送第一提示信息。
Sc34、若车辆支持的最大涉水深度小于或等于目标子区域在第一时刻时的积水深度,则服务器根据目标子区域所影响的车道,判断待检测区域是否存在车辆能够通行的车道。
应理解,在车辆支持的最大涉水深度小于或等于目标子区域在第一时刻时的积水深度的情况下,目标子区域所影响的车道可以认为是车辆不能够通行的车道,以避免车辆在涉水时出现危险。
作为一种可能的实现方式中,服务器可以先根据车辆的行驶方向,确定待检测区域中通行方向与车辆的行驶方向相同的目标车道。之后,服务器判断目标车道是否均是目标子区域所影响的车道。若目标车道中存在至少一个车道不是目标子区域所影响的车道,则服务器可以确定待检测区域中存在车辆能够通行的车道,进而可以执行下述步骤Sc35。或者,若目标车道均是目标子区域所影响的车道,服务器可以确定待检测 区域不存在车辆能够通行的车道,进而可以执行下述步骤Sc36。
Sc35. If the area to be detected contains a lane through which the vehicle can pass, the server sends third prompt information to the vehicle terminal.
The third prompt information is used to indicate the lane(s) in the area to be detected through which the vehicle can pass. Further, the third prompt information may also indicate the lane(s) in the area to be detected through which the vehicle cannot pass.
Exemplarily, as shown in FIG. 25, if the target lanes in the area to be detected whose traffic direction is the same as the travel direction of the vehicle are lane 2 and lane 3, and the lane affected by the target sub-area is lane 3, the third prompt information sent by the server to the vehicle terminal may be: "On the road ahead, lane 3 is impassable; lane 2 is passable."
Sc36. If the area to be detected contains no lane through which the vehicle can pass, the server sends fourth prompt information to the vehicle terminal.
The fourth prompt information is used to prompt the driver to modify the driving route.
Exemplarily, as shown in FIG. 26, if the target lane in the area to be detected whose traffic direction is the same as the travel direction of the vehicle is lane 1, and the lane affected by the target sub-area is lane 1, the fourth prompt information sent by the server to the vehicle terminal may be: "No passable lane on the road ahead; please change your driving route in advance."
The embodiment shown in FIG. 24 brings at least the following beneficial effects: the impact of the waterlogging in the target sub-area on the lanes is taken into account, so that more useful prompt information (that is, the first, third, or fourth prompt information above) can be provided to the driver, helping the driver respond more effectively to different waterlogging situations of the target sub-area. For example, when the waterlogging in the target sub-area affects all lanes in the vehicle's travel direction, the driver can be reminded in time to change route, rather than only discovering that the target sub-area is impassable once the vehicle is already close to it.
The solutions provided by the embodiments of the present disclosure have been described above mainly from the perspective of the method. To implement the above functions, corresponding hardware structures and/or software modules for performing each function are included. Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present disclosure can be implemented in hardware or in a combination of hardware and computer software. Whether a given function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
An embodiment of the present disclosure further provides a waterlogging depth detection apparatus. As shown in FIG. 27, the waterlogging depth detection apparatus 300 may include: an image acquisition module 301, a waterlogging detection module 302, and a depth detection module 303. Optionally, in some embodiments, the waterlogging depth detection apparatus 300 may further include: an image processing module 304, a communication module 305, and a data processing module 306.
The image acquisition module 301 is configured to acquire a first color image and a first depth image of the area to be detected at a first moment, where the first depth image is used to record depth values of locations in the area to be detected at the first moment.
The waterlogging detection module 302 is configured to perform waterlogging detection on the area to be detected according to the first color image, and acquire position information of a target sub-area in a waterlogged state within the area to be detected.
The depth detection module 303 is configured to determine the waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and a pre-stored second depth image, where the second depth image is used to record depth values of locations in the area to be detected at a second moment, the second moment being a moment at which the area to be detected is in a non-waterlogged state.
In some embodiments, the depth detection module 303 is specifically configured to: determine a first depth value according to the position information of the target sub-area and the first depth image, the first depth value being the depth value of the water surface of the target sub-area at the first moment; determine a second depth value according to the position information of the target sub-area and the second depth image, the second depth value being the depth value of the lowest point of the target sub-area at the second moment; and take the difference between the second depth value and the first depth value as the waterlogging depth of the target sub-area.
In other embodiments, the depth detection module 303 is specifically configured to: determine depth values of the locations in the target sub-area at the second moment according to the position information of the target sub-area and the second depth image; and select, from those depth values, the largest depth value as the second depth value.
In other embodiments, the depth detection module 303 is specifically configured to: determine three-dimensional coordinates of the locations in the target sub-area at the second moment according to the position information of the target sub-area and the second depth image; perform surface fitting according to those three-dimensional coordinates to obtain a surface corresponding to the target sub-area; and take the depth value of the lowest point of that surface as the second depth value.
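The surface-fitting variant can be sketched with an ordinary least-squares fit of a quadratic surface to the dry-state points, after which the deepest fitted value (the largest depth, i.e. the lowest road point) serves as the second depth value. The quadratic model and the NumPy-based fit below are assumptions; the disclosure does not specify a surface model:

```python
import numpy as np

def second_depth_value(points) -> float:
    """points: (N, 3) array-like of (x, y, depth) samples of the target
    sub-area at the second (non-waterlogged) moment. Fits
    z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 by least squares and returns
    the largest fitted depth, i.e. the depth of the lowest road point."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix for the quadratic surface model.
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    fitted = A @ coef
    # The deepest point of the fitted surface over the sampled locations.
    return float(fitted.max())
```

Fitting a smooth surface instead of taking the raw maximum (as in the previous embodiment) suppresses sensor noise in individual depth pixels.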
In other embodiments, the depth detection module 303 is specifically configured to: determine a first average depth value according to the position information of the target sub-area and the first depth image, the first average depth value being the average of the depth values of the locations in the target sub-area at the first moment; determine a second average depth value according to the position information of the target sub-area and the second depth image, the second average depth value being the average of the depth values of the locations in the target sub-area at the second moment; and take the difference between the second average depth value and the first average depth value as the waterlogging depth of the target sub-area.
In other embodiments, the waterlogging depth detection apparatus 300 further includes an image processing module 304, configured to perform road segmentation on the area to be detected according to the first color image, and acquire position information of a vehicle driving area in the area to be detected; the waterlogging detection module 302 is specifically configured to perform waterlogging detection on the vehicle driving area according to the position information of the vehicle driving area and the first color image, and acquire position information of a target sub-area in a waterlogged state within the vehicle driving area.
In other embodiments, the waterlogging depth detection apparatus 300 further includes an image processing module 304, configured to perform road segmentation on the area to be detected according to the first color image, and acquire position information of a vehicle driving area in the area to be detected; the waterlogging detection module 302 is further configured to determine, according to the position information of the vehicle driving area and the position information of the target sub-area, whether the target sub-area is located within the vehicle driving area; the depth detection module 303 is specifically configured to, if the target sub-area is located within the vehicle driving area, determine the waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and the pre-stored second depth image.
In other embodiments, the image acquisition module 301 is further configured to acquire a third color image and a third depth image of the area to be detected at a third moment, the third moment being after the second moment; the image processing module 304 is further configured to: input the third color image into a weather category recognition model to determine the weather category of the third color image, the weather category including rainy or non-rainy; when the weather category of the third color image is non-rainy, determine the similarity between the third color image and a second color image, the second color image being a color image of the area to be detected captured at the second moment; and when the similarity between the third color image and the second color image is less than a preset threshold, update the second depth image with the third depth image.
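The update rule above reduces to a two-part gate: the new frame must be non-rainy (so its depth image reflects a dry road), and its similarity to the stored reference must fall below the preset threshold, indicating that the scene has changed enough to warrant a new reference. A minimal sketch, with an assumed threshold value and illustrative labels:

```python
def should_update_reference(weather_category: str, similarity: float,
                            preset_threshold: float = 0.8) -> bool:
    """Decide whether to replace the stored second (dry-state) depth image
    with the third depth image. The 0.8 threshold is an assumed value;
    the disclosure only requires some preset threshold."""
    # Gate 1: only a non-rainy frame gives a valid dry-state depth image.
    # Gate 2: a low similarity means the scene has changed (e.g. the road
    # was resurfaced), so the old reference is stale and should be updated.
    return weather_category == "non_rainy" and similarity < preset_threshold
```

When the similarity stays at or above the threshold, the stored second depth image is still representative and is kept unchanged.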
In other embodiments, the waterlogging depth detection apparatus 300 further includes a communication module 305, configured to send the waterlogging depth of the target sub-area to a terminal device.
In other embodiments, the waterlogging depth detection apparatus 300 further includes a communication module 305 and a data processing module 306. The data processing module 306 is configured to compare the maximum wading depth supported by the vehicle with the waterlogging depth of the target sub-area at the first moment. The communication module 305 is configured to: if the maximum wading depth supported by the vehicle is greater than the waterlogging depth of the target sub-area at the first moment, send first prompt information to the vehicle terminal, the first prompt information indicating that the vehicle can safely pass through the target sub-area; or, if the maximum wading depth supported by the vehicle is less than or equal to the waterlogging depth of the target sub-area at the first moment, send second prompt information to the vehicle terminal, the second prompt information warning that the target sub-area is dangerous.
In other embodiments, the waterlogging depth detection apparatus further includes a data processing module 306 and a communication module 305. The data processing module 306 is configured to: perform lane recognition on the area to be detected and determine position information of each lane in the area to be detected; determine the lane(s) affected by the target sub-area according to the position information of each lane in the area to be detected and the position information of the target sub-area; and generate prompt information according to the lane(s) affected by the target sub-area and the waterlogging depth of the target sub-area. The communication module 305 is configured to send the prompt information to the vehicle terminal.
In other embodiments, the data processing module 306 is specifically configured to: compare the maximum wading depth supported by the vehicle with the waterlogging depth of the target sub-area; if the maximum wading depth supported by the vehicle is greater than the waterlogging depth of the target sub-area, generate first prompt information, the first prompt information indicating that the vehicle can safely pass through the target sub-area; if the maximum wading depth supported by the vehicle is less than or equal to the waterlogging depth of the target sub-area, determine, according to the lane(s) affected by the target sub-area, whether the area to be detected contains a lane through which the vehicle can pass; if such a lane exists, generate third prompt information, the third prompt information indicating the lane(s) through which the vehicle can pass; and if no such lane exists, generate fourth prompt information, the fourth prompt information prompting the user to modify the driving route.
In other embodiments, the image acquisition module 301 is further configured to acquire the first color image and the first depth image of the area to be detected at the first moment when a preset condition is satisfied, where the preset condition includes: the area to be detected is located on the driving route corresponding to the vehicle terminal.
In other embodiments, the preset condition further includes: the distance between the vehicle terminal and the photographing device is less than a preset distance.
Some embodiments of the present disclosure provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) storing computer program instructions that, when run on a processor, cause the processor to perform one or more steps of the waterlogging depth detection method according to any of the above embodiments.
Exemplarily, the computer-readable storage medium may include, but is not limited to: magnetic storage devices (e.g., hard disks, floppy disks, or magnetic tapes), optical disks (e.g., compact disks (CDs) or digital versatile disks (DVDs)), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), cards, sticks, or key drives). The various computer-readable storage media described in the present disclosure may represent one or more devices and/or other machine-readable storage media for storing information. The term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
Some embodiments of the present disclosure further provide a computer program product. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to perform one or more steps of the waterlogging depth detection method according to the above embodiments.
Some embodiments of the present disclosure further provide a computer program. When executed on a computer, the computer program causes the computer to perform one or more steps of the waterlogging depth detection method according to the above embodiments.
The beneficial effects of the above computer-readable storage medium, computer program product, and computer program are the same as those of the waterlogging depth detection method described in some of the above embodiments, and are not repeated here.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (17)

  1. A waterlogging depth detection method, characterized in that the method comprises:
    acquiring, by a photographing device, a first color image and a first depth image of an area to be detected at a first moment, the first depth image being used to record depth values of locations in the area to be detected at the first moment;
    performing waterlogging detection on the area to be detected according to the first color image, to acquire position information of a target sub-area in a waterlogged state within the area to be detected;
    determining a waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and a pre-stored second depth image, the second depth image being used to record depth values of locations in the area to be detected at a second moment, the second moment being a moment at which the area to be detected is in a non-waterlogged state.
  2. The method according to claim 1, characterized in that the determining the waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and the pre-stored second depth image comprises:
    determining a first depth value according to the position information of the target sub-area and the first depth image, the first depth value being a depth value of the water surface of the target sub-area at the first moment;
    determining a second depth value according to the position information of the target sub-area and the second depth image, the second depth value being a depth value of the lowest point of the target sub-area at the second moment;
    taking a difference between the second depth value and the first depth value as the waterlogging depth of the target sub-area.
  3. The method according to claim 2, characterized in that the determining the second depth value according to the position information of the target sub-area and the second depth image comprises:
    determining depth values of locations in the target sub-area at the second moment according to the position information of the target sub-area and the second depth image;
    selecting, from the depth values of the locations in the target sub-area at the second moment, the largest depth value as the second depth value.
  4. The method according to claim 2, characterized in that the determining the second depth value according to the position information of the target sub-area and the second depth image comprises:
    determining three-dimensional coordinates of locations in the target sub-area at the second moment according to the position information of the target sub-area and the second depth image;
    performing surface fitting according to the three-dimensional coordinates of the locations in the target sub-area at the second moment, to obtain a surface corresponding to the target sub-area;
    taking a depth value of the lowest point of the surface corresponding to the target sub-area as the second depth value.
  5. The method according to claim 1, characterized in that the determining the waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and the pre-stored second depth image comprises:
    determining a first average depth value according to the position information of the target sub-area and the first depth image, the first average depth value being an average of depth values of locations in the target sub-area at the first moment;
    determining a second average depth value according to the position information of the target sub-area and the second depth image, the second average depth value being an average of depth values of locations in the target sub-area at the second moment;
    taking a difference between the second average depth value and the first average depth value as the waterlogging depth of the target sub-area.
  6. The method according to any one of claims 1 to 5, characterized in that the performing waterlogging detection on the area to be detected according to the first color image, to acquire the position information of the target sub-area in the waterlogged state within the area to be detected, comprises:
    performing road segmentation on the area to be detected according to the first color image, to acquire position information of a vehicle driving area in the area to be detected;
    performing waterlogging detection on the vehicle driving area according to the position information of the vehicle driving area in the area to be detected and the first color image, to acquire position information of a target sub-area in a waterlogged state within the vehicle driving area.
  7. The method according to any one of claims 1 to 5, characterized in that, before the determining the waterlogging depth of the target sub-area, the method further comprises:
    performing road segmentation on the area to be detected according to the first color image, to acquire position information of a vehicle driving area in the area to be detected;
    determining, according to the position information of the vehicle driving area and the position information of the target sub-area, whether the target sub-area is located within the vehicle driving area;
    wherein the determining the waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and the pre-stored second depth image comprises:
    if the target sub-area is located within the vehicle driving area, determining the waterlogging depth of the target sub-area according to the position information of the target sub-area, the first depth image, and the pre-stored second depth image.
  8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    acquiring, by the photographing device, a third color image and a third depth image of the area to be detected at a third moment, the third moment being after the second moment;
    inputting the third color image into a weather category recognition model to determine a weather category of the third color image, the weather category comprising rainy or non-rainy;
    when the weather category of the third color image is non-rainy, determining a similarity between the third color image and a second color image, the second color image being a color image obtained by the photographing device capturing the area to be detected at the second moment;
    when the similarity between the third color image and the second color image is less than a preset threshold, updating the second depth image with the third depth image.
  9. The method according to any one of claims 1 to 8, characterized in that the method further comprises:
    sending the waterlogging depth of the target sub-area to a terminal device.
  10. The method according to any one of claims 1 to 8, characterized in that the method further comprises:
    comparing a maximum wading depth supported by a vehicle with the waterlogging depth of the target sub-area;
    if the maximum wading depth supported by the vehicle is greater than the waterlogging depth of the target sub-area, sending first prompt information to a vehicle terminal, the first prompt information indicating that the vehicle can safely pass through the target sub-area; or,
    if the maximum wading depth supported by the vehicle is less than or equal to the waterlogging depth of the target sub-area, sending second prompt information to the vehicle terminal, the second prompt information warning that the target sub-area is dangerous.
  11. The method according to any one of claims 1 to 8, characterized in that the method further comprises:
    performing lane recognition on the area to be detected, to determine position information of each lane in the area to be detected;
    determining, according to the position information of each lane in the area to be detected and the position information of the target sub-area, the lane(s) affected by the target sub-area;
    sending prompt information to a vehicle terminal according to the lane(s) affected by the target sub-area and the waterlogging depth of the target sub-area.
  12. The method according to claim 11, characterized in that the sending the prompt information to the vehicle terminal according to the lane(s) affected by the target sub-area and the waterlogging depth of the target sub-area comprises:
    comparing a maximum wading depth supported by a vehicle with the waterlogging depth of the target sub-area;
    if the maximum wading depth supported by the vehicle is greater than the waterlogging depth of the target sub-area, sending first prompt information to the vehicle terminal, the first prompt information indicating that the vehicle can safely pass through the target sub-area;
    if the maximum wading depth supported by the vehicle is less than or equal to the waterlogging depth of the target sub-area, determining, according to the lane(s) affected by the target sub-area, whether the area to be detected contains a lane through which the vehicle can pass;
    if the area to be detected contains a lane through which the vehicle can pass, sending third prompt information to the vehicle terminal, the third prompt information indicating the lane(s) through which the vehicle can pass;
    if the area to be detected contains no lane through which the vehicle can pass, sending fourth prompt information to the vehicle terminal, the fourth prompt information prompting a user to modify a driving route.
  13. The method according to any one of claims 1 to 12, characterized in that the acquiring, by the photographing device, the first color image and the first depth image of the area to be detected at the first moment comprises:
    acquiring, by the photographing device, the first color image and the first depth image of the area to be detected at the first moment when a preset condition is satisfied; wherein the preset condition comprises: the area to be detected is located on a driving route corresponding to a vehicle terminal.
  14. The method according to claim 13, characterized in that the preset condition further comprises: a distance between the vehicle terminal and the photographing device is less than a preset distance.
  15. A waterlogging depth detection apparatus, characterized by comprising functional modules configured to perform the waterlogging depth detection method according to any one of claims 1 to 14.
  16. A waterlogging depth detection apparatus, characterized in that the apparatus comprises a memory and a processor;
    the memory is coupled to the processor; the memory is configured to store computer program code, the computer program code comprising computer instructions;
    wherein, when the processor executes the computer instructions, the apparatus is caused to perform the waterlogging depth detection method according to any one of claims 1 to 14.
  17. A non-transitory computer-readable storage medium, storing a computer program; wherein, when run on a waterlogging depth detection apparatus, the computer program causes the waterlogging depth detection apparatus to implement the waterlogging depth detection method according to any one of claims 1 to 14.
PCT/CN2022/126492 2021-12-29 2022-10-20 Waterlogging depth detection method and apparatus WO2023124442A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111644105.7A CN114299457A (zh) 2021-12-29 2021-12-29 Waterlogging depth detection method and apparatus
CN202111644105.7 2021-12-29

Publications (1)

Publication Number Publication Date
WO2023124442A1 true WO2023124442A1 (zh) 2023-07-06

Family

ID=80972484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126492 WO2023124442A1 (zh) 2021-12-29 2022-10-20 Waterlogging depth detection method and apparatus

Country Status (2)

Country Link
CN (1) CN114299457A (zh)
WO (1) WO2023124442A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541584A (zh) * 2024-01-09 2024-02-09 中国飞机强度研究所 Full-aircraft test crack feature enhancement and marking method based on mask rotation and superposition

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299457A (zh) * 2021-12-29 2022-04-08 京东方科技集团股份有限公司 Waterlogging depth detection method and apparatus
CN116071656B (zh) * 2023-03-06 2023-06-06 河北工业大学 Intelligent alarm method and system for infrared-image waterlogging detection in underground substations

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090245582A1 (en) * 2008-03-26 2009-10-01 Honda Motor Co., Ltd. Lane recognition apparatus for vehicle, vehicle thereof, and lane recognition program for vehicle
CN110378952A (zh) * 2019-07-10 2019-10-25 深圳前海微众银行股份有限公司 Image processing method and apparatus
CN110411366A (zh) * 2019-07-31 2019-11-05 北京领骏科技有限公司 Road waterlogging depth detection method and electronic device
CN113744256A (zh) * 2021-09-09 2021-12-03 中德(珠海)人工智能研究院有限公司 Depth map hole filling method, apparatus, server, and readable storage medium
CN114299457A (zh) * 2021-12-29 2022-04-08 京东方科技集团股份有限公司 Waterlogging depth detection method and apparatus

Also Published As

Publication number Publication date
CN114299457A (zh) 2022-04-08

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913705

Country of ref document: EP

Kind code of ref document: A1