WO2022204905A1 - Obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
WO2022204905A1
WO2022204905A1 · PCT/CN2021/083741 · CN2021083741W
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
boundary
image
boundary information
obstacles
Prior art date
Application number
PCT/CN2021/083741
Other languages
English (en)
French (fr)
Inventor
云一宵
苏惠荞
郑迪威
马志贤
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN202180003376.6A (CN113841154A)
Priority to PCT/CN2021/083741
Publication of WO2022204905A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an obstacle detection method and device.
  • the visible light image of the obstacle is collected by the camera, and then the attribute information of the obstacle is extracted from the visible light image and input to the deep neural network for training; in the detection process, the attribute information of the obstacle to be detected is input into the deep neural network, and the detection result of the obstacle to be detected can be output.
  • the attribute information of obstacles currently used is mainly the shape, size, color, texture, material, motion state, etc. of the obstacles; such attribute information is varied, and there is no uniform rule to follow.
  • the above-mentioned attribute information differs considerably between obstacles of different categories, and also differs to some extent between different obstacles of the same category.
  • the embodiments of the present application provide an obstacle detection method and device, so as to improve the effectiveness of obstacle detection.
  • a first aspect of the embodiments of the present application provides an obstacle detection method, including: acquiring a first image, where the first image may be an image directly captured by a camera, or may be a frame of image in a video captured by the camera, and the first image contains at least one obstacle;
  • the boundary of the at least one obstacle is determined based on a boundary information network model, which may be a pre-trained deep neural network; the boundary of the at least one obstacle includes the boundary formed between the obstacle and the road surface, and this boundary can be used to determine the location of the obstacle.
  • compared with attribute information such as shape, size, color, texture, material, and motion state, the attribute information of the boundary of an obstacle is more stable and uniform, and has better generality and generalization;
  • for different obstacles of the same category, the similarity of their boundaries is high, and for obstacles of different categories, the boundaries also have a certain similarity; so for an obstacle not included in the training sample set, if the training sample set contains other obstacles of the same category as that obstacle, the boundary of the obstacle can still be determined based on the boundary information network model.
  • likewise, for an obstacle not included in the training sample set whose boundary is similar to the boundaries of obstacles in the training sample set, the boundary of the obstacle can also be determined based on the boundary information network model; it can be seen that detecting obstacles by determining their boundaries helps detect a larger number of obstacles and can improve the effectiveness of obstacle detection.
  • the boundary information network model is obtained by training based on empirical obstacle boundary information.
  • the empirical obstacle boundary information may be information related to the empirical obstacle boundary.
  • the empirical obstacle boundary information may include the occupied boundary of the empirical obstacle, and may also include the unique identification ID of the occupied boundary of the empirical obstacle; classified by source, the empirical obstacle boundary information may include historical obstacle boundary information and/or sample obstacle boundary information, where the sample obstacle boundary information can be understood as boundary information obtained by manually labeling the obstacles in sample images, and the historical obstacle boundary information can be understood as a priori obstacle boundary information, that is, boundary information that can be obtained without manual labeling; for example, the historical obstacle boundary information can be the boundary information of existing obstacles in a map.
  • training the boundary information network model based on the historical obstacle boundary information can reduce the labeling cost;
  • a variety of obstacles can be selected for labeling, so the boundary information of sample obstacles can increase the diversity of boundary information.
  • training the boundary information network model based on the sample obstacle boundary information can improve the performance of the boundary information network model, thereby improving the effectiveness of obstacle detection.
  • the sample obstacle boundary information is obtained by taking an ordered set of points along the boundary line segment between the lower edge of the obstacle in the image and the drivable road surface, where the lower edge refers to the edge close to the drivable road surface;
  • the sample obstacle boundary information is obtained from the boundary line segment between the lower edge of the mask of the obstacle and the drivable road surface in the image, the mask can be understood as the image used for covering, and the mask of the obstacle can be understood as An image used to cover obstacles; or, sample obstacle boundary information is generated by a simulation engine, and the scene image simulated by the simulation engine is an image containing obstacles.
  • this implementation provides various feasible solutions for obtaining the sample obstacle boundary information, which makes the way of obtaining the sample obstacle boundary information more flexible; obtaining the sample obstacle boundary information by taking an ordered point set is simple and easy to implement.
  • when the sample obstacle boundary information is obtained through the boundary line segment between the lower edge of the mask of the obstacle in the image and the drivable road surface, the existing masks are reused, so it is only necessary to mark the start and end points of the boundary line segment rather than taking points one by one, which can improve labeling efficiency; the sample obstacle boundary information generated by the simulation engine requires no manual labeling, which can reduce the labeling cost.
  • determining the boundary of at least one obstacle based on the boundary information network model includes: inputting the first image into the boundary information network model, and classifying each pixel in the first image into a category based on the empirical obstacle boundary information, where the classification result can be pedestrian, vehicle, lane, lane line, sidewalk, etc.; the classification result is processed to obtain the boundary of at least one obstacle.
  • in this implementation, each pixel in the first image is classified into a category based on the empirical obstacle boundary information, and the classification result is processed to obtain the boundary of at least one obstacle, thereby obtaining the obstacle boundary through semantic segmentation, as the sketch below illustrates.
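  • To make the semantic-segmentation step concrete, the following is a minimal sketch, not the patent's implementation: it assumes the model outputs a per-class score map of shape (num_classes, H, W), and the class index BOUNDARY marking the occupied-boundary category is a hypothetical name introduced here for illustration.

```python
# Minimal sketch: per-pixel classification of a score map, keeping the
# pixels classified as the (hypothetical) occupied-boundary category.
import numpy as np

BOUNDARY = 1  # assumed index of the boundary class; not specified by the patent

def boundary_mask_from_scores(scores: np.ndarray) -> np.ndarray:
    labels = scores.argmax(axis=0)  # classify each pixel as one category (H, W)
    return labels == BOUNDARY       # boolean mask of boundary pixels (H, W)
```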
  • the pixels occupied by the boundary of at least one obstacle are continuous in a first direction, where the first direction may be the pixel width direction of the image, and the pixel width direction corresponds to the horizontal direction of the image.
  • if the pixels occupied by the boundary of the obstacle are discontinuous in the first direction, the boundary not only fails to reflect the size of the obstacle in the first direction well, but may also cause the user to mistake the discontinuity for a drivable area; in contrast, in this implementation, the pixels occupied by the boundary of the obstacle are continuous in the first direction, which not only reflects the size of the obstacle in the first direction well, but also helps the user accurately identify the drivable area.
  • the at least one obstacle includes a first obstacle and a second obstacle; and determining the boundary of the at least one obstacle based on the boundary information network model includes: determining the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle is an empty set.
  • this implementation provides a feasible solution for determining obstacle boundaries in a multi-obstacle scenario: the boundary of the first obstacle and the boundary of the second obstacle are determined respectively, and the intersection of the pixels occupied by the two boundaries is an empty set.
  • the method further includes: determining the size of the area occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of the obstacle in the image, where the pixel height can be understood as a size in the vertical direction of the first image; however, the pixel height is preset and has no direct relationship with the actual height of the obstacle, so the pixel height can be greater than or less than the height of the obstacle in the first image.
  • determining the boundary of the obstacle is equivalent to determining the position of the obstacle; since an actual obstacle has a certain volume, representing the obstacle only by its position is not intuitive or three-dimensional enough. Determining the size of the area occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and the preset pixel height allows the obstacle to be represented in a more intuitive and three-dimensional manner.
  • a second aspect of the embodiments of the present application provides an obstacle detection device, including: an acquisition unit, configured to acquire a first image, where the first image includes at least one obstacle; and a determination unit, configured to determine based on a boundary information network model The boundary of at least one obstacle; wherein, the boundary of at least one obstacle includes the boundary formed by the obstacle and the road surface.
  • the boundary information network model is obtained by training based on empirical obstacle boundary information, and the empirical obstacle boundary information includes historical obstacle boundary information and/or sample obstacle boundary information.
  • the sample obstacle boundary information is obtained by taking an ordered set of points along the boundary line segment between the lower edge of the obstacle in the image and the drivable road surface; or, the sample obstacle boundary information is obtained from the boundary line segment between the lower edge of the mask of the obstacle in the image and the drivable road surface; or, the sample obstacle boundary information is generated by a simulation engine, and the scene image simulated by the simulation engine is an image containing obstacles.
  • the determining unit is specifically configured to: input the first image into the boundary information network model, and classify each pixel in the first image into a category based on the empirical obstacle boundary information; and process the classification result to obtain the boundary of at least one obstacle.
  • the pixels occupied by the boundary of at least one obstacle are continuous in the first direction.
  • the at least one obstacle includes a first obstacle and a second obstacle; the determining unit is specifically configured to: determine the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle is an empty set.
  • the determining unit is further configured to: determine the size of the area occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and the pixel height of the obstacle in the preset image.
  • a third aspect of the embodiments of the present application provides an obstacle detection device, including: one or more processors and a memory, where the memory stores computer-readable instructions; the one or more processors read the computer-readable instructions in the memory to cause the obstacle detection device to implement the method according to any one of the above first aspect and its various possible implementations.
  • a fourth aspect of the embodiments of the present application provides a computer program product containing instructions, which, when run on a computer, causes the computer to execute the method according to any one of the above first aspect and its various possible implementations.
  • a fifth aspect of the embodiments of the present application provides a computer-readable storage medium, including instructions, which, when executed on a computer, cause the computer to execute the method according to any one of the above first aspect and its various possible implementations.
  • a sixth aspect of the embodiments of the present application provides a chip, including one or more processors; some or all of the processors are used to read and execute the computer program stored in the memory, so as to execute the method in any possible implementation of the first aspect.
  • optionally, the chip includes a memory, and the processor is connected to the memory through a circuit or a wire; further optionally, the chip further includes a communication interface, and the processor is connected to the communication interface.
  • the communication interface is used for receiving data and/or information to be processed, the processor obtains the data and/or information from the communication interface, processes the data and/or information, and outputs the processing result through the communication interface.
  • the communication interface may be an input-output interface.
  • some of the one or more processors may also implement some steps in the above method by means of dedicated hardware, for example, the processing involving the neural network model may be performed by a dedicated neural network processor or graphics processor.
  • the methods provided in the embodiments of the present application may be implemented by one chip, or may be implemented collaboratively by multiple chips.
  • a seventh aspect of the embodiments of the present application provides a vehicle, where the vehicle includes the device in any possible implementation manner of the foregoing second aspect.
  • the embodiments of the present application have the following advantages:
  • compared with attribute information such as shape, size, color, texture, material, and motion state, the attribute information of the boundary of an obstacle is more stable and uniform, and has better generality and generalization;
  • for different obstacles of the same category, the similarity of the boundaries is relatively high, and for obstacles of different categories, the boundaries also have a certain similarity;
  • by determining the boundary of at least one obstacle in the image, not only the obstacles included in the training sample set but also obstacles not included in the training sample set can be detected; for an obstacle not included in the training sample set, if the training sample set contains other obstacles of the same category as that obstacle, then based on the similarity of boundaries the embodiment of the present application can detect the obstacle; if the training sample set contains other obstacles whose boundaries are similar to the boundary of that obstacle, then based on the similarity of boundaries the embodiment of the present application can also detect the obstacle; therefore, by determining the boundaries of obstacles, a larger number of obstacles can be detected, which can improve the effectiveness of obstacle detection.
  • FIG. 1 is a schematic diagram of the framework of the detection system in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of various obstacles in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an embodiment of an occupied boundary in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another embodiment of an occupied boundary in an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a training process in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a first embodiment of labeling an occupied boundary in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a second embodiment of labeling an occupied boundary in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a third embodiment of labeling an occupied boundary in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a fourth embodiment of labeling an occupied boundary in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an ENet network processing an image.
  • FIG. 11 is a schematic diagram of an embodiment of an obstacle detection method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an embodiment of an input image of a boundary information network model in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a heat map output by a boundary information network model in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a sunken groove corresponding to a boundary of an obstacle in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a columnar pixel corresponding to an obstacle in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of an embodiment of an obstacle detection device in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of another embodiment of the obstacle detection device in the embodiment of the present application.
  • the embodiments of the present application can be applied to the detection system shown in FIG. 1 , where the detection system includes a sensor, a perception algorithm module, and a planning and control module.
  • the number of sensors can be one or more, and the sensors may specifically include a monocular camera, a binocular camera, a multi-view camera and a surround-view camera, which are used to capture images or videos of the surrounding environment; the perception algorithm module is used to detect the obstacles in the image or video captured by each sensor.
  • when there are multiple sensors, the perception algorithm module is also used to fuse the obstacle detection results corresponding to the sensors; the planning and control module is used to receive the obstacle detection results from the perception algorithm module and to plan and control the mobile platform's own behavior according to the detection results, for example, its next moving path and manner, as sketched below.
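  • As a structural illustration only (the function names below are hypothetical, not from the patent), the cooperation of the sensors and the perception algorithm module in FIG. 1 can be sketched as follows: detection runs per sensor, and fusion is applied only when there are multiple sensors.

```python
# Hypothetical sketch of the FIG. 1 perception stage: per-sensor detection,
# with fusion of the per-sensor results only when several sensors exist.
def perceive(images, detect, fuse):
    results = [detect(img) for img in images]               # detect obstacles per sensor image
    return fuse(results) if len(results) > 1 else results[0]  # fuse only for multiple sensors
```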
  • the perception algorithm module can be a separate device, can be arranged inside the sensor, or can be arranged in a device together with the planning and control module.
  • the embodiments of the present application can be applied to the fields of traffic safety, advanced driver assistance systems (ADAS), and autonomous driving (AD).
  • the embodiments of the present application can also be applied to fields such as smart intersections and smart cities.
  • the detection system shown in FIG. 1 can be deployed in a distributed sensor network or a non-movable platform, where the non-movable platform can be a street lamp or traffic lights for obstacle detection in critical traffic areas.
  • in related technologies, obstacles are mainly detected by deep neural networks: first, the attribute information of obstacles is used to train a deep neural network, which is then deployed as part of the detection system on the corresponding equipment; when an obstacle needs to be detected, the attribute information of the obstacle to be detected is first obtained and then input into the deep neural network, which outputs the detection result of the obstacle to be detected.
  • the attribute information of obstacles currently used is mainly the shape, size, color, texture, material, motion state, etc. of the obstacles; such attribute information is varied, and there is no uniform rule to follow.
  • the above-mentioned attribute information differs considerably between obstacles of different categories, and also differs to some extent between different obstacles of the same category.
  • in view of this, an embodiment of the present application provides an obstacle detection method, which uses the attribute information of the boundary formed between the obstacle and the road surface to detect the obstacle. Since any obstacle forms a boundary with the road surface, this embodiment of the present application is suitable for the detection of any obstacle; moreover, compared with attribute information such as shape, size, color, texture, material, and motion state, the attribute information of the boundary formed between the obstacle and the road surface is more stable, uniform, and universal. Therefore, using the obstacle detection method provided in the embodiments of the present application to detect obstacles can improve the effectiveness of obstacle detection.
  • an obstacle refers to an object that occupies the drivable road surface and affects the ego vehicle; since any type of object (rather than only some specific or common types of objects) can become an obstacle, the obstacle may also be called a general obstacle, and the method provided by the embodiments of the present application is described below using the term obstacle.
  • FIG. 2 shows various examples of obstacles, including not only regular traffic participants such as pedestrians (101), cars (102), motorcycles (103) and bicycles (104), but also traffic scene markers such as traffic cones (105) and warning triangles (106), as well as animals (107), boxes (108), flat tires (109), stones (110) and other objects that do not often appear in traffic scenes.
  • Semantic segmentation: a computer vision task of pixel-level classification of an input image, that is, classifying each pixel in the image and determining the semantic category of each point (e.g. pedestrian, vehicle, lane, lane line, sidewalk, etc.), so as to divide the input image at the semantic level.
  • Instance segmentation: on the basis of semantic segmentation, it additionally distinguishes each single individual within each semantic category.
  • Ground truth: the standard answer, which refers to the expected result or correct output corresponding to each given input in a specific estimation or measurement task.
  • the ground-truth of semantic segmentation refers to the category to which each pixel in the image belongs.
  • the common representation is the category label mask of the same size as the image. Ground-truth values can be used for model training in supervised learning, as well as for validation and evaluation of model performance.
  • Heat map: a visualization method that displays data in shades of color. Given an input image, the semantic segmentation network outputs a corresponding heat map for each category, and the depth of the color represents the probability that the category appears in the corresponding image area; generally speaking, the warmer the color (or the higher the brightness), the greater the probability.
  • Occupied boundary: the boundary formed between an object and the road surface after the drivable road surface is occupied by the object. FIG. 3 and FIG. 4 show multiple examples of occupied boundaries: FIG. 3 shows the occupied boundary formed between a carton and the road surface as well as the occupied boundary formed between a roadblock and the road surface, and FIG. 4 shows the occupied boundaries formed between various types of vehicles and the road surface.
  • in related technologies, a deep neural network is mainly used to detect obstacles, whereas this embodiment of the present application uses a boundary information network model to detect obstacles; the following first describes the training process through which the boundary information network model is obtained.
  • the training process of the boundary information network model may include:
  • a training data set is obtained.
  • the training data set can contain multiple images and the boundary information of obstacles in the multiple images, and the multiple images containing obstacles can be directly captured by the camera or extracted from the video captured by the camera.
  • the boundary information of obstacles can also be called empirical obstacle boundary information, and the empirical obstacle boundary information can be any information related to the empirical obstacle boundary; for example, the empirical obstacle boundary information can include the occupied boundary of the empirical obstacle, where the occupied boundary refers to the boundary line segment formed between the object and the road surface after the drivable road surface is occupied by the object; in addition, the empirical obstacle boundary information may also include the information of the boundary instance of the occupied boundary of the empirical obstacle.
  • An instance can be understood as an individual, and each individual can be called an instance; based on this, each occupied boundary can be called an occupied boundary instance.
  • the information of the occupied boundary instance may be the unique ID of the occupied boundary.
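  • A minimal sketch, with hypothetical field names introduced here for illustration, of how one piece of empirical obstacle boundary information could be stored: the occupied boundary as an ordered point set, together with the unique ID of its occupied-boundary instance.

```python
# Hypothetical container for one piece of empirical obstacle boundary information.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class OccupiedBoundary:
    instance_id: int               # unique ID of the occupied-boundary instance
    points: List[Tuple[int, int]]  # ordered (column, row) points along the boundary
```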
  • the empirical obstacle boundary information is described above from the perspective of information content, and the following describes the empirical obstacle boundary information from the source of the empirical obstacle boundary information.
  • the empirical obstacle boundary information is classified according to its source, and may include historical obstacle boundary information and/or sample obstacle boundary information; the sample obstacle boundary information can be understood as boundary information obtained by manually annotating the obstacles in sample images, while the historical obstacle boundary information can be understood as a priori obstacle boundary information, that is, boundary information that can be obtained without manual annotation.
  • for example, the historical obstacle boundary information may be the boundary information of existing obstacles in a map; specifically, when a road section is under repair, the roadblocks set on that section and the boundary information of the roadblocks are updated in the map, and this boundary information can be used as historical obstacle boundary information.
  • as for the sample obstacle boundary information, it needs to be obtained by manual annotation.
  • the following takes the occupied boundary as an example to introduce the labeling process of the sample obstacle boundary information.
  • the occupied boundary of a sample obstacle may be obtained in any of the following three manners.
  • in the first manner, the occupied boundary is obtained by taking an ordered set of points along the boundary line segment between the lower edge of the obstacle and the drivable road surface in the image.
  • the ordered point set may be composed of points from left to right along the image, or may be composed of points from right to left in the image.
  • for example, an ordered point set is taken along the boundary line segment between the lower edge of the bicycle and the ground; the ordered point set constitutes the occupied boundary, which is also the ground-truth value of the occupied boundary when the bicycle acts as an obstacle.
  • similarly, an ordered point set is taken along the boundary line segment between the lower edge of the roadblock and the ground to obtain an occupied boundary, which is also the ground-truth value of the occupied boundary when the roadblock acts as an obstacle; a sketch of turning such a point set into a dense label is given below.
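  • The following is a minimal sketch, under assumed data formats, of how such an ordered point set could be densified into a per-column ground-truth boundary; the (column, row) point format and the function name are illustrative, not from the patent.

```python
# Densify an ordered point set taken along an obstacle's lower edge into
# one boundary row per image column (NaN where no boundary exists).
import numpy as np

def points_to_boundary(points, width):
    cols = np.array([p[0] for p in points], dtype=float)
    rows = np.array([p[1] for p in points], dtype=float)
    out = np.full(width, np.nan)
    span = np.arange(int(cols.min()), int(cols.max()) + 1)
    out[span] = np.interp(span, cols, rows)  # interpolate between the taken points
    return out

# e.g. three points taken along a bicycle's lower edge in a 640-pixel-wide image:
gt = points_to_boundary([(120, 300), (160, 310), (210, 305)], width=640)
```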
  • in the second manner, the occupied boundary is obtained from the boundary line segment between the lower edge of the mask of the obstacle in the image and the drivable road surface.
  • the mask can be understood as an image used for covering, and the mask of an obstacle can be understood as an image used to cover the obstacle.
  • for example, FIG. 8 shows a mask 1501 of a car and a mask 1500 of a drivable road; point 1502 is marked as the starting point and point 1503 is marked as the end point; in this way, all points on the boundary line between the mask 1501 of the car and the mask 1500 of the drivable road located between points 1502 and 1503 (including points 1502 and 1503) constitute an ordered point set, and the ordered point set constitutes the occupied boundary, which is the ground-truth value of the occupied boundary when the car acts as an obstacle.
  • in this manner, the occupied boundary is obtained from the boundary line segment between the lower edge of the mask of the obstacle and the drivable road surface in the image; it is only necessary to mark the start and end points of that boundary line segment, and the occupied boundary can be obtained without taking points one by one, which can improve the labeling efficiency.
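  • A minimal sketch of this second manner, assuming a binary obstacle mask and the marked start and end columns of the boundary line segment (points 1502 and 1503 in FIG. 8); per column, the lower edge is simply the bottom-most obstacle pixel, so no per-point clicking is needed.

```python
# Derive the occupied boundary from an obstacle mask between two marked columns.
import numpy as np

def boundary_from_mask(obstacle_mask: np.ndarray, start_col: int, end_col: int):
    rows = []
    for c in range(start_col, end_col + 1):
        ys = np.flatnonzero(obstacle_mask[:, c])       # obstacle pixels in column c
        rows.append(int(ys.max()) if ys.size else -1)  # bottom-most pixel = lower edge
    return rows  # ordered point set forming the occupied boundary (-1 = none)
```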
  • in the third manner, the occupied boundary is generated by a simulation engine, and the scene image simulated by the simulation engine is an image containing obstacles.
  • an image containing obstacles is used as the scene image simulated by the simulation engine, and virtual data together with the corresponding occupied boundaries can be generated by the simulation engine simulating the traffic scene.
  • the occupied boundary of the car generated by the simulation engine is shown as a white line segment, and the occupied boundary is the true value of the occupied boundary when the car acts as an obstacle.
  • the occupied boundaries can be automatically generated by the simulation engine, and there is no need to manually label the occupied boundaries containing the obstacles in the image one by one, which can greatly improve the efficiency of obtaining the occupied boundaries of obstacles and reduce the cost of labeling.
  • it should be noted that multiple overlapping obstacles can be labeled as one obstacle or one obstacle cluster, and thus correspond to one occupied boundary; here, multiple overlapping obstacles means that, for any one of the multiple overlapping obstacles, there is another obstacle that overlaps with it.
  • for example, the image contains two bicycles with overlapping parts, and the two overlapping bicycles are labeled together to obtain one occupied boundary, as shown in FIG. 6 (indicated by white line segments in FIG. 6).
  • the boundary information network model is trained based on the training data set to obtain a trained boundary information network model.
  • there may be various types of boundary information network models, which is not specifically limited in this embodiment of the present application.
  • an ENet network may be used as the boundary information network model, and the image processing process of the ENet network is shown in FIG. 10 .
  • the numbers in Figure 10 represent the number of channels in the image.
  • the process of training the boundary information network model generally includes: selecting the boundary information network model, configuring initial weights for the boundary information network model, inputting the training data in the training data set into the boundary information network model, calculating a loss function based on the model output and the labeled information, and finally back-propagating according to the loss function to update the weights in the boundary information network model; a generic sketch of this loop follows.
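  • The following is a generic supervised-training sketch of that loop, written with PyTorch purely for illustration (the patent does not prescribe a framework); `model` stands in for the boundary information network model, e.g. an ENet-style segmentation network, and `loader` is assumed to yield images with per-pixel boundary labels.

```python
# Generic training loop: forward pass, loss against the labeled boundary
# information, back-propagation, weight update.
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                # loss between output and labels
    for _ in range(epochs):
        for images, labels in loader:              # labels: (N, H, W) class indices
            opt.zero_grad()
            loss = loss_fn(model(images), labels)  # forward pass and loss
            loss.backward()                        # back-propagate the loss
            opt.step()                             # update the model weights
```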
  • after training, the trained boundary information network model can output the occupied boundaries of the obstacles in an image; in addition, if the information of the occupied boundary instances of the obstacles in the image is also labeled, the trained boundary information network model can also output the occupied boundary instances of the obstacles in the image.
  • for example, the trained boundary information network model can output the unique ID of an obstacle's occupied boundary; based on the occupied boundary instance of an obstacle, the boundary information network model can also output the obstacle instance corresponding to the occupied boundary instance, where each obstacle in the image can be called an obstacle instance.
  • the training process of the boundary information network model is described above, and the process of detecting obstacles in an image based on the boundary information network model is described below.
  • an embodiment of the present application provides an embodiment of an obstacle detection method, including:
  • a first image is acquired, where the first image includes at least one obstacle.
  • the first image may be directly captured by a camera, or a video may be captured by the camera, and then a frame of image including obstacles is extracted from the video as the first image.
  • the types of cameras include, but are not limited to, monocular cameras, binocular cameras, multi-view cameras, and surround-view cameras.
  • the first image may be collected by a vehicle-mounted forward-looking camera.
  • the number of obstacles in the first image may be one or more; when there are multiple obstacles in the first image, the multiple obstacles may include two obstacles that are independent of each other (i.e. non-overlapping), and may also include two obstacles with overlapping parts.
  • the first image is the image shown in FIG. 6 , and the first image includes two obstacles, a car and a bicycle, which are independent of each other.
  • the first image also includes two bicycles with overlapping parts.
  • the obstacle in the first image may be any kind of obstacle shown in FIG. 2.
  • a boundary of at least one obstacle is determined based on the boundary information network model.
  • the boundary of at least one obstacle includes the boundary formed by the obstacle and the road surface, and the boundary formed by the obstacle and the road surface may also be referred to as an occupation boundary.
  • the boundary information network model needs to be trained based on the training data set.
  • the training data set may include multiple training images and the boundary information of the obstacles in the multiple training images; the boundary information of the obstacles in the training images can also be called empirical obstacle boundary information.
  • the boundary information network model is obtained by training based on empirical obstacle boundary information, and the empirical obstacle boundary information includes historical obstacle boundary information and/or sample obstacle boundary information.
  • the experience obstacle boundary information can be understood by referring to the relevant description of operation 201 above.
  • training the boundary information network model based on the historical obstacle boundary information can reduce the labeling cost;
  • a variety of obstacles can be selected for labeling, so the boundary information of sample obstacles can increase the diversity of boundary information.
  • training the boundary information network model based on the sample obstacle boundary information can improve the performance of the boundary information network model, thereby improving the effectiveness of obstacle detection.
  • the sample obstacle boundary information is obtained by taking an ordered set of points along the boundary line segment between the lower edge of the obstacle in the image and the drivable road surface; or, the sample obstacle boundary information is obtained from the boundary line segment between the lower edge of the mask of the obstacle in the image and the drivable road surface; or, the sample obstacle boundary information is generated by a simulation engine, and the scene image simulated by the simulation engine is an image containing obstacles.
  • the sample obstacle boundary information can be the occupied boundary of the sample obstacle, so the process of obtaining the sample obstacle boundary information in this embodiment can be understood with reference to the above descriptions of FIG. 6 to FIG. 9 (the three manual labeling manners for obtaining the occupied boundary of a sample obstacle).
  • this implementation provides various feasible solutions for obtaining the sample obstacle boundary information, which makes the way of obtaining the sample obstacle boundary information more flexible; obtaining the sample obstacle boundary information by taking an ordered point set is simple and easy to implement.
  • when the sample obstacle boundary information is obtained through the boundary line segment between the lower edge of the mask of the obstacle in the image and the drivable road surface, the existing masks are reused, so it is only necessary to mark the start and end points of the boundary line segment rather than taking points one by one, which can improve labeling efficiency; the sample obstacle boundary information generated by the simulation engine requires no manual labeling, which can reduce the labeling cost.
  • the pixels occupied by the boundary of at least one obstacle are continuous in the first direction.
  • the first direction may be the pixel width direction of the image, and the pixel width direction corresponds to the horizontal direction of the image; for example, the first direction may be the horizontal direction from point 1502 to point 1503 in FIG. 8 .
  • a discontinuous multi-segment boundary may lead a user (such as a driver) to mistake it for the boundaries of multiple obstacles, and then to mistake the area between two boundary segments for a drivable area, whereas the area between the two boundary segments is actually an obstacle, that is, a non-drivable area.
  • in addition, an obstacle usually has a certain volume, and a discontinuous multi-segment boundary is not conducive to the user judging the size of the obstacle in the first direction.
  • the above problems are caused by the discontinuity of the obstacle's boundary; therefore, in this embodiment of the present application, the pixels occupied by the boundary of at least one obstacle are continuous in the first direction, which not only reflects the size of the obstacle in the first direction well, but also helps the user accurately identify the drivable area.
  • for example, in FIG. 8, the continuous boundary line from point 1502 to point 1503 is used as the boundary of the car acting as an obstacle; in this way, the user can estimate the size of the obstacle in the horizontal direction and will treat the entire boundary area as a non-drivable area; a sketch of enforcing such continuity is given below.
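  • A minimal sketch, under an assumed per-column representation (one boundary row per image column, NaN where undetected), of enforcing this continuity: gaps inside the obstacle's horizontal extent are bridged by interpolation so they cannot be mistaken for drivable space.

```python
# Fill gaps so the boundary is continuous in the pixel-width direction.
import numpy as np

def make_continuous(boundary: np.ndarray) -> np.ndarray:
    valid = ~np.isnan(boundary)
    if valid.sum() < 2:
        return boundary
    cols = np.flatnonzero(valid)
    out = boundary.copy()
    span = np.arange(cols[0], cols[-1] + 1)             # obstacle's extent in width
    out[span] = np.interp(span, cols, boundary[valid])  # bridge the gaps
    return out
```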
  • the number of obstacles may be one or more; when the number of obstacles is one, the number of determined obstacle boundaries is one; when the number of obstacles is multiple, the determined obstacle boundaries fall into two cases.
  • in the first case, there is overlap between the multiple obstacles; based on the foregoing description of the training process, multiple overlapping obstacles are labeled as one obstacle or one obstacle cluster, and correspondingly, the number of boundaries determined based on the boundary information network model can be considered as one, and this boundary can be considered to be formed by connecting the respective boundaries of the multiple obstacles.
  • in the second case, the multiple obstacles do not overlap; this case is described below by taking two obstacles as an example.
  • specifically, the at least one obstacle includes a first obstacle and a second obstacle, and operation 302 includes: determining the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle is an empty set.
  • the first obstacle and the second obstacle may be of the same category or of different categories, which is not specifically limited in this embodiment of the present application.
  • since the two obstacles are independent of each other, the determined boundaries of the two obstacles are also independent of each other, so the intersection of the pixels occupied by the boundaries of the two obstacles is an empty set.
  • for example, FIG. 7 includes three roadblocks; taking two of the roadblocks as the first obstacle and the second obstacle, the intersection of the pixels occupied by their determined boundaries is an empty set.
  • in this embodiment, the boundary information network model determines the boundary of the obstacle through semantic segmentation; accordingly, operation 302 includes: inputting the first image into the boundary information network model, classifying each pixel in the first image into a category based on the empirical obstacle boundary information, and processing the classification result to obtain the boundary of at least one obstacle.
  • the classification results may be pedestrians, vehicles, lanes, lane lines, sidewalks, and the like.
  • in this implementation, each pixel in the first image is classified, and the classification result is processed to obtain the boundary of at least one obstacle, thereby obtaining the obstacle boundary through semantic segmentation.
  • the boundary information network model outputs a heatmap containing the boundary of the obstacle; the boundary of the obstacle can be determined based on the heatmap.
  • for example, the boundary information network model outputs the heat map shown in FIG. 13, where the white line segments in FIG. 13 represent the boundaries of the obstacles; the boundaries of the obstacles can be determined based on the heat map.
  • optionally, the heat map shown in FIG. 13 can also be post-processed to obtain the boundary (i.e., the occupied boundary) corresponding to each obstacle instance.
  • specifically, the heat map can be converted into a one-dimensional signal, and each sunken groove of the one-dimensional signal, that is, the boundary corresponding to each obstacle instance, is obtained through inflection point detection; as shown in FIG. 14, each sunken groove in FIG. 14 represents the boundary corresponding to one obstacle instance; a sketch of such valley detection follows.
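  • The following is a minimal sketch of such valley detection, assuming the heat map has already been reduced to a one-dimensional signal (e.g. the boundary's row index per image column); SciPy's peak finding on the negated signal stands in for the inflection-point detection described above, and the threshold name is illustrative.

```python
# Split a 1-D boundary signal into "sunken grooves", one per obstacle instance.
import numpy as np
from scipy.signal import find_peaks

def grooves(signal: np.ndarray, min_depth: float = 5.0):
    centers, props = find_peaks(-signal, prominence=min_depth)  # valleys = peaks of -signal
    # each valley's left/right bases bound one obstacle instance's boundary
    return [(int(l), int(r)) for l, r in zip(props["left_bases"], props["right_bases"])]
```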
  • the boundary of the obstacle can be used to determine the position of the obstacle, so that the detection of the obstacle can be realized.
  • compared with attribute information such as shape, size, color, texture, material, and motion state, the attribute information of the boundary of an obstacle is more stable and uniform, and has better generality and generalization; for different obstacles of the same category, the similarity of the boundaries of the obstacles is high, and for obstacles of different categories, the boundaries of the obstacles also have a certain similarity.
  • take FIG. 4 as an example, which includes a variety of vehicles such as trucks, vans, and SUVs, all belonging to the category of cars; although the shapes, sizes, colors, materials, etc. of these objects vary widely, as long as an object is in the category of cars, the boundary it forms with the road surface roughly falls into three types: straight lines, polylines bent to the left, and polylines bent to the right; most cars acting as obstacles can therefore be detected using these three types of boundaries.
  • it can be seen that the attribute information of the boundary of an obstacle is indeed relatively stable and uniform, and for different obstacles of the same category the similarity of the boundaries is high, so detecting obstacles by determining their boundaries is beneficial to detecting a larger number of obstacles.
  • for another example, FIG. 3 contains cartons and FIG. 4 contains cars; although cartons and cars belong to different categories, the boundary between the carton and the road is similar to the boundary between the car and the road.
  • it can be seen that even for obstacles of different categories, the boundaries of the obstacles have a certain similarity, so detecting obstacles by determining their boundaries is beneficial to detecting a larger number of obstacles.
  • the size of the area occupied by the at least one obstacle in the first image is determined according to the boundary of the at least one obstacle and the pixel height of the obstacle in the preset image.
  • the position of the obstacle can be determined based on the boundary of the obstacle, so that the detection of the obstacle can be realized; but an actual obstacle has a certain volume, so in order to represent the detected obstacle more intuitively and three-dimensionally, the size of the area occupied by the obstacle in the first image is determined according to the boundary of the obstacle and the preset pixel height of the obstacle in the image; accordingly, operation 303 is optional.
  • the pixel height can be understood as a size in the vertical direction of the first image, but the pixel height is preset and has no direct relationship with the actual height of the obstacle; the pixel height can be greater than or less than the height of the obstacle in the first image.
  • specifically, the obstacle can be represented as a columnar pixel (stixel) with the boundary of the obstacle as the bottom edge; the representation effect is shown in FIG. 15.
  • in FIG. 15, obstacles such as cartons and roadblocks are represented by columnar pixels, and the actual heights of these obstacles have nothing to do with the heights of the columnar pixels: the height of some cartons is smaller than the height of the columnar pixels, while the height of some roadblocks is greater than the height of the columnar pixels.
  • in this implementation, the size of the area occupied by the at least one obstacle in the first image is determined according to the boundary of the at least one obstacle and the preset pixel height of the obstacle in the image, so that the obstacle can be represented in a more intuitive and three-dimensional manner; a rendering sketch follows.
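  • A minimal sketch, under assumed conventions, of rendering an obstacle as a columnar pixel: the occupied boundary serves as the bottom edge, and a preset pixel height (deliberately unrelated to the obstacle's real height) sets the top; PRESET_HEIGHT and the function name are illustrative, not from the patent.

```python
# Mark a stixel region in a (H, W) image mask, one column at a time.
import numpy as np

PRESET_HEIGHT = 60  # preset pixel height, chosen for display only

def draw_stixel(mask: np.ndarray, boundary_rows, start_col: int) -> np.ndarray:
    for i, row in enumerate(boundary_rows):   # one boundary row per column
        top = max(0, row - PRESET_HEIGHT)
        mask[top:row + 1, start_col + i] = 1  # fill the column up to the preset height
    return mask
```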
  • FIG. 16 is a schematic diagram of an embodiment of the obstacle detection device in the embodiment of the present application.
  • One or more of the respective unit modules in FIG. 16 may be implemented by software, hardware, firmware or a combination thereof.
  • the software or firmware includes, but is not limited to, computer program instructions or code, and can be executed by a hardware processor.
  • the hardware includes, but is not limited to, various types of integrated circuits, such as central processing units (CPUs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), or application specific integrated circuits (ASICs).
  • the obstacle detection device includes:
  • an acquisition unit 1201 configured to acquire a first image, where the first image contains at least one obstacle
  • the determining unit 1202 is configured to determine the boundary of at least one obstacle based on the boundary information network model; wherein, the boundary of the at least one obstacle includes the boundary formed by the obstacle and the road surface.
  • the boundary information network model is obtained by training based on empirical obstacle boundary information, and the empirical obstacle boundary information includes historical obstacle boundary information and/or sample obstacle boundary information.
  • the sample obstacle boundary information is obtained by taking an ordered point set along the boundary line segment between the lower edge of the obstacle and the drivable road surface in the image; or, the sample obstacle boundary information is obtained from the boundary line segment between the lower edge of the mask of the obstacle in the image and the drivable road surface; or, the sample obstacle boundary information is generated by a simulation engine, and the image is a scene image simulated by the simulation engine.
  • the determining unit 1202 is specifically configured to: input the first image into the boundary information network model, and classify each pixel in the first image into a category based on the empirical obstacle boundary information; and process the classification result to obtain the boundary of at least one obstacle.
  • the pixel points occupied by the boundary of at least one obstacle are continuous in the first direction.
  • the at least one obstacle includes a first obstacle and a second obstacle; the determining unit 1202 is specifically configured to: determine the boundary of the first obstacle and the boundary of the second obstacle, and the pixels occupied by the boundary of the first obstacle The intersection of the point and the pixel points occupied by the boundary of the second obstacle is an empty set.
  • the determining unit 1202 is further configured to: determine the size of the area occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and the pixel height of the obstacle in the preset image.
  • FIG. 17 is a schematic diagram of an embodiment of the obstacle detection apparatus in the embodiment of the present application.
  • the obstacle detection device in this embodiment of the present application may be a device configured on a movable platform (such as a car or a robot); the obstacle detection device 1300 may vary greatly due to different configurations or performance, and may include one or more processors 1301 and a memory 1302 in which programs or data are stored.
  • the memory 1302 may be volatile storage or non-volatile storage.
  • the processor 1301 is one or more central processing units (CPUs), and a CPU can be a single-core CPU or a multi-core CPU; the processor 1301 can communicate with the memory 1302 and execute a series of instructions in the memory 1302 on the obstacle detection device 1300.
  • the obstacle detection apparatus 1300 also includes one or more wired or wireless network interfaces 1303, such as an Ethernet interface.
  • the obstacle detection device 1300 may also include one or more power supplies and one or more input/output interfaces, which may be used to connect devices such as cameras, monitors, mice, keyboards, touch-screen devices or sensing devices; the input/output interfaces are optional components, which may or may not exist, and are not limited here.
  • the obstacle detection device can be a vehicle with an obstacle detection function, or other components with an obstacle detection function.
  • the obstacle detection device includes but is not limited to: a vehicle-mounted terminal, a vehicle-mounted controller, a vehicle-mounted module, vehicle-mounted components, a vehicle-mounted chip, a vehicle-mounted unit, a vehicle-mounted radar, or a vehicle-mounted camera or other sensor.
  • the obstacle detection device can also be an intelligent terminal with an obstacle detection function other than a vehicle, or be set in such an intelligent terminal, or be a component set in such an intelligent terminal.
  • the intelligent terminal may be other terminal equipment such as intelligent transportation equipment, smart home equipment, and robots.
  • the obstacle detection device includes, but is not limited to, a smart terminal or a controller, a chip, other sensors such as radar or a camera, and other components in the smart terminal.
  • the obstacle detection device may also be a general-purpose device or a special-purpose device.
  • the apparatus may also be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or other devices with processing functions.
  • the embodiment of the present application does not limit the type of the obstacle detection device.
  • the obstacle detection device may also be a chip or processor with a processing function, and the obstacle detection device may include a plurality of processors.
  • the processor can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the chip or processor with processing function may be arranged in the sensor, or may not be arranged in the sensor, but arranged at the receiving end of the output signal of the sensor.
  • the embodiments of the present application further provide a system, which is applied to unmanned driving or intelligent driving and includes at least one of the obstacle detection devices, cameras, radars and other sensors mentioned in the above embodiments of the present application.
  • At least one device in the system can be integrated into a whole machine or equipment, or at least one device in the system can also be independently set as a component or device.
  • any of the above systems may interact with the vehicle's central controller to provide detection and/or fusion information for decision-making or control of the vehicle's driving.
  • An embodiment of the present application further provides a vehicle, where the vehicle includes at least one of the obstacle detection devices mentioned in the above-mentioned embodiments of the present application, or any of the above-mentioned systems.
  • Embodiments of the present application further provide a chip including one or more processors. Some or all of the processors are used to read and execute the computer program stored in the memory, so as to execute the methods of the foregoing embodiments.
  • optionally, the chip includes a memory, and the processor is connected to the memory through a circuit or a wire. Further optionally, the chip further includes a communication interface, and the processor is connected to the communication interface.
  • the communication interface is used for receiving data and/or information to be processed, the processor obtains the data and/or information from the communication interface, processes the data and/or information, and outputs the processing result through the communication interface.
  • the communication interface may be an input-output interface.
  • some of the one or more processors may also implement some steps in the above method by means of dedicated hardware, for example, the processing involving the neural network model may be performed by a dedicated neural network processor or graphics processor.
  • the methods provided in the embodiments of the present application may be implemented by one chip, or may be implemented collaboratively by multiple chips.
  • Embodiments of the present application also provide a computer storage medium, where the computer storage medium is used for storing the computer software instructions used by the above-mentioned computer device, which include a program designed for execution by the computer device.
  • the computer device may be the obstacle detection device described in the aforementioned FIG. 16 .
  • Embodiments of the present application also provide a computer program product, where the computer program product includes computer software instructions, and the computer software instructions can be loaded by a processor to implement the processes in the methods shown in the foregoing embodiments.
  • An embodiment of the present application also provides a vehicle, which includes the obstacle detection device as described in the aforementioned FIG. 16 .

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

An obstacle detection method and apparatus, relating to the technical field of image processing. The method can be applied to autonomous driving or assisted driving and specifically includes: acquiring a first image (301), and using a boundary information network model to obtain the boundary formed between at least one obstacle in the first image and the road surface; the position of the obstacle can be determined from the boundary, thereby detecting the obstacle. Because the boundary of an obstacle is an attribute with generality and generalization, the method helps detect a larger number of obstacles and can improve the effectiveness of obstacle detection. Moreover, the method enhances the advanced driving assistance system (ADAS) capability of a terminal in autonomous or assisted driving and can be applied to the Internet of Vehicles, for example vehicle-to-everything (V2X), long term evolution for vehicles (LTE-V), and vehicle-to-vehicle (V2V).

Description

Obstacle detection method and apparatus
Technical Field
The present application relates to the technical field of image processing, and in particular to an obstacle detection method and apparatus.
Background
In traffic scenarios, detecting obstacles in the surrounding environment is one of the fundamental perception tasks.
With the development of deep learning, deep learning has become an important means of detecting obstacles in the surrounding environment. Specifically, a visible-light image of an obstacle is captured by a camera, and attribute information of the obstacle is then extracted from the visible-light image and input into a deep neural network for training; during detection, the attribute information of the obstacle to be detected is input into the deep neural network, which outputs the detection result of the obstacle to be detected.
However, the obstacle attribute information currently used mainly comprises the shape, size, color, texture, material, and motion state of obstacles. Such attribute information is highly varied and follows no uniform rule; it differs considerably between obstacles of different categories and, to some extent, between different obstacles of the same category.
Therefore, if such attribute information is used to detect obstacles, obstacles not included in the training sample set are difficult to detect effectively.
Summary
Embodiments of the present application provide an obstacle detection method and apparatus to improve the effectiveness of obstacle detection.
A first aspect of the embodiments of the present application provides an obstacle detection method, including: acquiring a first image, where the first image may be an image directly captured by a camera or a frame extracted from a video captured by a camera, and the first image contains at least one obstacle; and determining the boundary of the at least one obstacle based on a boundary information network model, where the boundary information network model may be a pre-trained deep neural network, the boundary of the at least one obstacle includes the boundary formed between the obstacle and the road surface, and the boundary can be used to determine the position of the obstacle.
Compared with attribute information such as the shape, size, color, texture, material, and motion state of an obstacle, the boundary of an obstacle is a more stable and uniform attribute with better generality and generalization. Specifically, the boundaries of different obstacles of the same category are highly similar, and the boundaries of obstacles of different categories also share a certain similarity. Therefore, for an obstacle not included in the training sample set: if the training sample set contains other obstacles of the same category, the boundary of the obstacle can be determined based on the boundary information network model; if the training sample set contains no obstacle of the same category but contains other obstacles whose boundaries are similar to that of the obstacle, the boundary can likewise be determined based on the boundary information network model. It follows that detecting obstacles by determining their boundaries helps detect a larger number of obstacles and can improve the effectiveness of obstacle detection.
In one implementation, the boundary information network model is trained on empirical obstacle boundary information, which may be any information related to empirical obstacle boundaries; for example, it may include the occupied boundary of an empirical obstacle, or the unique identifier (ID) of that occupied boundary. Classified by source, the empirical obstacle boundary information may include historical obstacle boundary information and/or sample obstacle boundary information, where sample obstacle boundary information can be understood as boundary information obtained by manually annotating obstacles in sample images, and historical obstacle boundary information can be understood as prior obstacle boundary information, i.e., boundary information obtainable without manual annotation; for example, it may be the boundary information of obstacles already present in a map.
Because historical obstacle boundary information can be obtained without manual annotation, training the boundary information network model on it reduces annotation costs. Because sample obstacle boundary information is obtained through manual annotation, in which a wide variety of obstacles can be chosen for annotation, it increases the diversity of boundary information; training the boundary information network model on it improves the model's performance and thus the effectiveness of obstacle detection.
In one implementation, the sample obstacle boundary information is obtained by taking an ordered point set along the junction segment between the lower edge of an obstacle in an image and the drivable road surface, where the lower edge is the edge close to the drivable road surface; or the sample obstacle boundary information is obtained from the junction segment between the lower edge of the obstacle's mask in the image and the drivable road surface, where a mask can be understood as an image used for covering, and an obstacle's mask can be understood as an image used to cover the obstacle; or the sample obstacle boundary information is generated by a simulation engine, where the scene image simulated by the engine is an image containing obstacles.
This implementation provides multiple feasible schemes for obtaining sample obstacle boundary information, making acquisition more flexible. Taking an ordered point set along the junction segment between the obstacle's lower edge and the drivable road surface is simple and easy. Obtaining the information from the junction segment between the lower edge of the obstacle's mask and the drivable road surface reuses the information in existing obstacle masks, so only the start and end points of the junction segment need to be annotated rather than every point, which improves annotation efficiency. Generating the information with a simulation engine requires no manual annotation and reduces annotation costs.
In one implementation, determining the boundary of the at least one obstacle based on the boundary information network model includes: inputting the first image into the boundary information network model, and classifying each pixel of the first image using the empirical obstacle boundary information as categories, where a classification result may be pedestrian, vehicle, lane, lane line, sidewalk, and so on; and processing the classification results to obtain the boundary of the at least one obstacle.
In this implementation, each pixel of the first image is classified with the empirical obstacle boundary information as categories, and the classification results are processed to obtain the boundary of the at least one obstacle, thereby obtaining obstacle boundaries through semantic segmentation.
In one implementation, the pixels occupied by the boundary of the at least one obstacle are continuous in a first direction, where the first direction may be the pixel-width direction of the image, corresponding to the horizontal direction of the image.
If the pixels occupied by an obstacle's boundary were discontinuous in the first direction, the boundary would not only fail to reflect the obstacle's size in that direction, but could also lead a user to mistake the gaps for drivable areas. In this implementation, by contrast, the pixels occupied by the boundary are continuous in the first direction, which both reflects the obstacle's size in that direction well and helps the user correctly identify the drivable area.
In one implementation, the at least one obstacle includes a first obstacle and a second obstacle; determining the boundary of the at least one obstacle based on the boundary information network model includes: determining the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle is the empty set.
This implementation provides a feasible scheme for determining obstacle boundaries in multi-obstacle scenarios: specifically, if the intersection of the pixels occupied by the boundaries of two obstacles is the empty set, the boundary of the first obstacle and the boundary of the second obstacle are determined separately.
In one implementation, the method further includes: determining the size of the region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of obstacles in the image. The pixel height can be understood as a size along the vertical direction of the first image, but it is preset and has no direct relationship with the obstacle's actual height; it may be greater than the obstacle's height in the first image or smaller than it.
Determining an obstacle's boundary amounts to determining its position. Since real obstacles have a certain volume, representing an obstacle only by its position is neither intuitive nor three-dimensional. In this implementation, determining the region occupied in the first image from the boundary of the at least one obstacle and the preset pixel height of obstacles in the image represents the obstacle more intuitively and three-dimensionally.
A second aspect of the embodiments of the present application provides an obstacle detection apparatus, including: an acquiring unit configured to acquire a first image containing at least one obstacle; and a determining unit configured to determine the boundary of the at least one obstacle based on a boundary information network model, where the boundary of the at least one obstacle includes the boundary formed between the obstacle and the road surface.
In one implementation, the boundary information network model is trained on empirical obstacle boundary information, which includes historical obstacle boundary information and/or sample obstacle boundary information.
In one implementation, the sample obstacle boundary information is obtained by taking an ordered point set along the junction segment between the obstacle's lower edge in an image and the drivable road surface; or it is obtained by taking the junction segment between the lower edge of the obstacle's mask in an image and the drivable road surface; or it is generated by a simulation engine, where the scene image simulated by the engine is an image containing obstacles.
In one implementation, the determining unit is specifically configured to: input the first image into the boundary information network model and classify each pixel of the first image using the empirical obstacle boundary information as categories; and process the classification results to obtain the boundary of the at least one obstacle.
In one implementation, the pixels occupied by the boundary of the at least one obstacle are continuous in a first direction.
In one implementation, the at least one obstacle includes a first obstacle and a second obstacle; the determining unit is specifically configured to determine the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the two boundaries is the empty set.
In one implementation, the determining unit is further configured to determine the size of the region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of obstacles in the image.
For the specific implementation, related description, and technical effects of the above units, refer to the description of the first aspect of the embodiments of the present application.
A third aspect of the embodiments of the present application provides an obstacle detection apparatus, including one or more processors and a memory, where the memory stores computer-readable instructions, and the one or more processors read the computer-readable instructions in the memory so that the obstacle detection apparatus implements the method according to any one of the first aspect and its various possible implementations.
A fourth aspect of the embodiments of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method according to any one of the first aspect and its various possible implementations.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform the method according to any one of the first aspect and its various possible implementations.
A sixth aspect of the embodiments of the present application provides a chip including one or more processors, some or all of which are configured to read and execute a computer program stored in a memory to perform the method in any possible implementation of the first aspect.
Optionally, the chip includes a memory, and the processor is connected to the memory through a circuit or a wire. Further optionally, the chip also includes a communication interface, and the processor is connected to the communication interface. The communication interface is used to receive data and/or information to be processed; the processor obtains the data and/or information from the communication interface, processes the data and/or information, and outputs the processing result through the communication interface. The communication interface may be an input/output interface.
In some implementations, some of the one or more processors may also implement some steps of the above method by means of dedicated hardware; for example, processing involving a neural network model may be performed by a dedicated neural network processor or a graphics processor.
The method provided by the embodiments of the present application may be implemented by one chip or implemented collaboratively by multiple chips.
A seventh aspect of the embodiments of the present application provides a vehicle that includes the apparatus in any possible implementation of the second aspect.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
Compared with attribute information such as the shape, size, color, texture, material, and motion state of an obstacle, the boundary of an obstacle is a more stable and uniform attribute with better generality and generalization. Specifically, the boundaries of different obstacles of the same category are highly similar, and the boundaries of obstacles of different categories also share a certain similarity. Therefore, by determining the boundary of at least one obstacle in the first image based on the boundary information network model, the embodiments of the present application can detect not only obstacles included in the training sample set but also obstacles not included in it. Specifically, for an obstacle not included in the training sample set, if the set contains other obstacles of the same category, the obstacle can be detected based on boundary similarity; and if the set contains other obstacles whose boundaries are similar to that of the obstacle, it can likewise be detected based on boundary similarity. Determining obstacle boundaries therefore allows a larger number of obstacles to be detected and improves the effectiveness of obstacle detection.
Brief Description of the Drawings
Fig. 1 is a schematic framework diagram of the detection system in an embodiment of the present application;
Fig. 2 is a schematic diagram of various obstacles in an embodiment of the present application;
Fig. 3 is a schematic diagram of one embodiment of the occupied boundary in an embodiment of the present application;
Fig. 4 is a schematic diagram of another embodiment of the occupied boundary in an embodiment of the present application;
Fig. 5 is a schematic flowchart of the training process in an embodiment of the present application;
Fig. 6 is a schematic diagram of a first embodiment of occupied-boundary annotation in an embodiment of the present application;
Fig. 7 is a schematic diagram of a second embodiment of occupied-boundary annotation in an embodiment of the present application;
Fig. 8 is a schematic diagram of a third embodiment of occupied-boundary annotation in an embodiment of the present application;
Fig. 9 is a schematic diagram of a fourth embodiment of occupied-boundary annotation in an embodiment of the present application;
Fig. 10 is a schematic diagram of image processing by the ENet network;
Fig. 11 is a schematic diagram of an embodiment of an obstacle detection method provided by an embodiment of the present application;
Fig. 12 is a schematic diagram of an embodiment of an input image of the boundary information network model in an embodiment of the present application;
Fig. 13 is a schematic diagram of a heat map output by the boundary information network model in an embodiment of the present application;
Fig. 14 is a schematic diagram of the dips corresponding to obstacle boundaries in an embodiment of the present application;
Fig. 15 is a schematic diagram of the stixels corresponding to obstacles in an embodiment of the present application;
Fig. 16 is a schematic diagram of one embodiment of the obstacle detection apparatus in an embodiment of the present application;
Fig. 17 is a schematic diagram of another embodiment of the obstacle detection apparatus in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiments of the present application can be applied to the detection system shown in Fig. 1, which includes sensors, a perception algorithm module, and a planning and control module.
There may be one or more sensors, which may specifically include monocular cameras, binocular cameras, multi-camera rigs, and surround-view cameras, used to capture images or video of the surrounding environment. The perception algorithm module is used to detect obstacles in the images or video captured by each sensor; when there are multiple sensors, it also fuses the obstacle detection results corresponding to the individual sensors. The planning and control module is used to receive the obstacle detection results from the perception algorithm module and to plan and control the movable platform's own behavior accordingly, for example its next movement path and manner.
The perception algorithm module may be a separate device, may be set inside a sensor, or may be set in one device together with the planning and control module.
The embodiments of the present application can be applied in fields such as traffic safety, advanced driver assistance (ADAS), and autonomous driving (AD). In these cases, the detection system shown in Fig. 1 can be deployed on a movable platform, which includes automobiles, robots, and the like; when the movable platform is an automobile, the detection system shown in Fig. 1 may also be called an in-vehicle system.
The embodiments of the present application can also be applied in fields such as smart intersections and smart cities. In these cases, the detection system shown in Fig. 1 can be deployed in a distributed sensor network or on a non-movable platform, where the non-movable platform may be a street lamp or a traffic light, used to detect obstacles in key traffic areas.
At present, obstacles are mainly detected through deep neural networks. Specifically, a deep neural network is first trained with obstacle attribute information and then deployed on the corresponding device as part of a detection system; when an obstacle needs to be detected, the attribute information of the obstacle to be detected is first obtained and then input into the deep neural network, which outputs the detection result.
However, the obstacle attribute information currently used mainly comprises the shape, size, color, texture, material, and motion state of obstacles. Such attribute information is highly varied and follows no uniform rule; it differs considerably between obstacles of different categories and, to some extent, between different obstacles of the same category.
For this reason, the embodiments of the present application provide an obstacle detection method that detects obstacles using the boundary formed between an obstacle and the road surface. Since any obstacle forms a boundary with the road surface, the embodiments of the present application are applicable to the detection of any obstacle. Moreover, compared with attribute information such as shape, size, color, texture, material, and motion state, the boundary formed between an obstacle and the road surface is a more stable and uniform attribute with better generality and generalization, so detecting obstacles with the obstacle detection method provided by the embodiments of the present application can improve the effectiveness of obstacle detection.
For ease of understanding, the terms used in the embodiments of the present application are explained first.
Obstacle: an object that occupies the drivable road surface and hinders the forward movement of the ego vehicle. Since an object of any category (rather than only objects of certain specific or common categories) can become an obstacle, an obstacle may also be called a general obstacle; the method provided by the embodiments of the present application is described below in terms of obstacles.
Referring to Fig. 2, which shows various examples of obstacles, including not only conventional traffic participants such as pedestrians (101), cars (102), motorcycles (103), and bicycles (104), but also traffic-scene markers such as traffic cones (105) and warning triangles (106), as well as objects that rarely appear in traffic scenes, such as animals (107), boxes (108), flat-lying tires (109), and stones (110).
Semantic segmentation: a computer vision task that classifies an input image at the pixel level, i.e., classifies every pixel in the image and determines the semantic category of each point (for example, pedestrian, vehicle, lane, lane line, sidewalk), thereby dividing the input image at the semantic level.
Instance segmentation: on the basis of semantic segmentation, additionally distinguishes the individual entities within each semantic category.
Ground truth: the standard answer; in a particular estimation or measurement task, the expected result or correct output corresponding to each given input signal. For example, the ground truth of semantic segmentation is the category to which each pixel of the image belongs, commonly represented as a category label mask of the same size as the image. Ground truth can be used to supervise model training and to verify and evaluate model performance.
Heat map: a visualization method that displays data through variations in color depth. Given an input image, a semantic segmentation network outputs one heat map for each category, in which the depth of color represents the likelihood that the category appears in the corresponding image region; generally, the warmer (or brighter) the color, the higher the likelihood.
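To make the heat-map notion concrete, the following is a minimal Python sketch (the language and all names are illustrative assumptions of this description, not something the embodiments prescribe) of turning a segmentation network's per-class scores into a per-class probability heat map:

    import numpy as np

    def class_heatmap(logits: np.ndarray, class_index: int) -> np.ndarray:
        # Convert per-class logits of shape (C, H, W) into a probability
        # heat map for one class via a per-pixel softmax.
        # Subtract the per-pixel maximum for numerical stability.
        shifted = logits - logits.max(axis=0, keepdims=True)
        probs = np.exp(shifted)
        probs /= probs.sum(axis=0, keepdims=True)
        # Higher values (rendered as warmer or brighter colors) mean the
        # class is more likely to appear at that pixel.
        return probs[class_index]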
Occupied boundary: the boundary formed between an object and the road surface after the object occupies the drivable road surface. Referring to Figs. 3 and 4, which show several examples of occupied boundaries: specifically, Fig. 3 shows the occupied boundary formed between a carton and the road surface as well as the occupied boundary formed between a roadblock and the road surface, and Fig. 4 shows the occupied boundaries formed between various types of cars and the road surface.
As explained above, obstacles are currently detected mainly with deep neural networks, whereas the embodiments of the present application detect obstacles with a boundary information network model; therefore, before obstacles are detected with the method provided by the embodiments of the present application, the boundary information network model must first be trained.
The training process of the boundary information network model is described below with reference to Fig. 5.
As shown in Fig. 5, the training process of the boundary information network model may include:
Operation 201: obtain a training data set.
The training data set may contain multiple images and the boundary information of the obstacles in those images; the obstacle-containing images may be captured directly by a camera or extracted from video captured by a camera.
The boundary information of obstacles may also be called empirical obstacle boundary information, which may be any information related to empirical obstacle boundaries. For example, it may include the occupied boundary of an empirical obstacle, where the occupied boundary is the junction segment formed between an object and the road surface after the object occupies the drivable road surface; in addition, it may include information about the occupied-boundary instances of empirical obstacles.
An instance can be understood as an individual, and every individual may be called an instance; on this basis, every occupied boundary may be called an occupied-boundary instance.
The information of an occupied-boundary instance can take many forms, which the embodiments of the present application do not specifically limit; for example, it may be the unique identifier (ID) of the occupied boundary.
Empirical obstacle boundary information has been described above in terms of its content; it is described below in terms of its source.
Classified by source, empirical obstacle boundary information may include historical obstacle boundary information and/or sample obstacle boundary information, where sample obstacle boundary information can be understood as boundary information obtained by manually annotating obstacles in sample images, and historical obstacle boundary information can be understood as prior obstacle boundary information, i.e., boundary information obtainable without manual annotation.
For example, historical obstacle boundary information may be the boundary information of obstacles already present in a map. Specifically, when a road section is under construction, the roadblocks placed on that section and their boundary information are updated in the map, and this boundary information can serve as historical obstacle boundary information.
Sample obstacle boundary information, on the other hand, must be obtained through manual annotation; the annotation process of sample obstacle boundary information is introduced below, taking the occupied boundary as an example.
It should be noted that many manual annotation methods can be used to obtain sample obstacle boundary information, which the embodiments of the present application do not specifically limit; three annotation methods for obtaining sample obstacle boundary information are introduced below, taking the occupied boundary as an example.
In one implementation, the occupied boundary is obtained by taking an ordered point set along the junction segment between the lower edge of the obstacle in the image and the drivable road surface.
The ordered point set may consist of points running from left to right along the image, or of points running from right to left.
For example, as shown in Fig. 6, an ordered point set is taken along the junction segment between the bicycle's lower edge and the ground; this ordered point set constitutes an occupied boundary, which is also the ground truth of the occupied boundary when the bicycle serves as an obstacle.
As another example, as shown in Fig. 7, an ordered point set is taken along the junction segment between the roadblock's lower edge and the ground to obtain an occupied boundary, which is also the ground truth of the occupied boundary when the roadblock serves as an obstacle.
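To make the ordered-point-set annotation concrete, the following is a minimal Python sketch (illustrative only; the names are assumptions of this description) that rasterizes an annotated ordered point set into a ground-truth boundary mask, interpolating between consecutive points so that the boundary is continuous along the image-width direction:

    import numpy as np

    def boundary_mask(points, height, width):
        # points: ordered (col, row) pixel coordinates taken along the
        # junction segment, e.g. from left to right along the image.
        mask = np.zeros((height, width), dtype=np.uint8)
        for (c0, r0), (c1, r1) in zip(points[:-1], points[1:]):
            if c1 < c0:  # allow right-to-left annotation as well
                c0, r0, c1, r1 = c1, r1, c0, r0
            cols = np.arange(c0, c1 + 1)
            rows = np.round(np.interp(cols, [c0, c1], [r0, r1])).astype(int)
            mask[rows, cols] = 1  # 1 marks the occupied boundary
        return mask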
In another implementation, the occupied boundary is obtained from the junction segment between the lower edge of the obstacle's mask in the image and the drivable road surface.
A mask can be understood as an image used for covering; an obstacle's mask can accordingly be understood as an image used to cover the obstacle.
For example, Fig. 8 shows a car's mask 1501 and the drivable road surface's mask 1500. On the junction line between the car's mask 1501 and the road surface's mask 1500, the start and end points of the junction segment between the car's lower edge and the ground are annotated; for example, point 1502 is annotated as the start point and point 1503 as the end point. In this way, all the points on the junction line between mask 1501 and mask 1500 that lie between points 1502 and 1503 (including points 1502 and 1503) form an ordered point set, which constitutes an occupied boundary, the ground truth of the occupied boundary when the car serves as an obstacle.
It follows that obtaining the occupied boundary from the junction segment between the lower edge of the obstacle's mask and the drivable road surface requires annotating only the start and end points of that junction segment; the occupied boundary is obtained without taking points one by one, which improves annotation efficiency.
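A minimal sketch of this mask-based scheme is given below, assuming NumPy boolean masks for the obstacle and the drivable road; only the start and end columns of the junction segment are supplied, mirroring the two annotated points 1502 and 1503. All names are illustrative:

    import numpy as np

    def boundary_from_masks(obstacle_mask, road_mask, start_col, end_col):
        # Returns the ordered point set forming the occupied boundary,
        # read off the contact line between the two existing masks.
        points = []
        height = obstacle_mask.shape[0]
        for col in range(start_col, end_col + 1):
            rows = np.where(obstacle_mask[:, col])[0]
            if rows.size == 0:
                continue
            r = rows.max()  # lowest obstacle pixel in this column
            # Keep the point only if the pixel just below belongs to the road.
            if r + 1 < height and road_mask[r + 1, col]:
                points.append((col, r))
        return points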
In a further implementation, the occupied boundary is generated by a simulation engine, where the scene image simulated by the simulation engine is an image containing obstacles.
Specifically, with an image containing obstacles as the scene image simulated by the simulation engine, the simulation engine simulates a traffic scene and can generate virtual data and the corresponding occupied boundaries. For example, as shown in Fig. 9, the occupied boundary of a car generated by the simulation engine is shown as a white line segment; this occupied boundary is the ground truth of the occupied boundary when the car serves as an obstacle.
Since the simulation engine can generate occupied boundaries automatically, the occupied boundary of each obstacle in the images need not be annotated manually one by one, which greatly improves the efficiency of obtaining obstacle occupied boundaries and reduces annotation costs.
It should be noted that, whichever annotation method is used, if multiple overlapping obstacles exist in an image, they can all be annotated as one obstacle or one cluster of obstacles; correspondingly, these multiple overlapping obstacles can correspond to one occupied boundary. Here, multiple overlapping obstacles means that, among them, for any one obstacle there exists another obstacle that overlaps with it.
For example, as shown in Fig. 6, the image contains two bicycles with an overlapping part; annotating these two overlapping bicycles yields the single occupied boundary shown in Fig. 6 (drawn as a white line segment).
Operation 202: train the boundary information network model on the training data set to obtain the trained boundary information network model.
The boundary information network model can be of many kinds, which the embodiments of the present application do not specifically limit; for example, the ENet network can be used as the boundary information network model. ENet's processing of an image is shown in Fig. 10, where the numbers in Fig. 10 indicate the number of channels of the image.
The process of training the boundary information network model roughly includes: selecting a boundary information network model and configuring initial weights for it, inputting the training data of the training data set into the model, computing a loss function from the model's output and the annotated information, and finally back-propagating according to the loss function to update the weights of the model.
It can be understood that, since the occupied boundaries of the obstacles in the images of the training data set were annotated, the trained boundary information network model can take an image as input and output the occupied boundaries of the obstacles in that image. In addition, if the information of the obstacles' occupied-boundary instances was also annotated, the trained model can further output the occupied-boundary instances of the obstacles in the image; for example, it can output the unique ID of each obstacle's occupied boundary. Based on the occupied-boundary instances, the boundary information network can also output the obstacle instance corresponding to each occupied-boundary instance, where every obstacle in an image may be called an obstacle instance.
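For concreteness, a hedged sketch of such a training loop is given below in PyTorch (the embodiments do not mandate a framework; the ENet import and train_loader are assumed to be provided elsewhere and are illustrative):

    import torch
    import torch.nn as nn
    from enet import ENet  # hypothetical module providing an ENet implementation

    # The dataset is assumed to yield (image, label) pairs in which each
    # pixel of the label carries an empirical-boundary class index.
    NUM_CLASSES = 2  # illustrative: e.g. background vs. occupied boundary
    model = ENet(num_classes=NUM_CLASSES)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    for images, labels in train_loader:   # labels: (N, H, W) class indices
        optimizer.zero_grad()
        logits = model(images)            # (N, C, H, W) per-pixel scores
        loss = criterion(logits, labels)  # compare output with annotations
        loss.backward()                   # back-propagate the loss
        optimizer.step()                  # update the model weights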
The training process of the boundary information network model has been described above; the process of detecting obstacles in an image based on the boundary information network model is described below.
Referring to Fig. 11, an embodiment of the obstacle detection method provided by the embodiments of the present application includes:
Operation 301: acquire a first image, where the first image contains at least one obstacle.
The first image can be acquired in many ways, which the embodiments of the present application do not specifically limit. For example, the first image can be captured directly by a camera, or video can be captured by a camera and a frame containing an obstacle extracted from the video as the first image.
Camera types include but are not limited to monocular cameras, binocular cameras, multi-camera rigs, and surround-view cameras.
Specifically, in a traffic scenario, the first image can be captured by a vehicle-mounted front-view camera.
The first image may contain one obstacle or multiple obstacles; when the first image contains multiple obstacles, there may be two mutually independent (i.e., non-overlapping) obstacles among them, or two obstacles with an overlapping part.
For example, if the first image is the image shown in Fig. 6, it contains two mutually independent obstacles, a car and a bicycle, and additionally contains two bicycles with an overlapping part.
The first image may contain one or more kinds of obstacle; the embodiments of the present application do not specifically limit the kinds of obstacle in the first image, which may be, for example, any of the obstacles in Fig. 2.
Operation 302: determine the boundary of the at least one obstacle based on the boundary information network model.
The boundary of the at least one obstacle includes the boundary formed between the obstacle and the road surface, and the boundary formed between an obstacle and the road surface may also be called the occupied boundary.
As explained above, before operation 302 is performed, the boundary information network model must be trained on a training data set; the training data set may contain multiple training images and the boundary information of the obstacles in those images, which may also be called empirical obstacle boundary information.
Therefore, in one implementation, the boundary information network model is trained on empirical obstacle boundary information, which includes historical obstacle boundary information and/or sample obstacle boundary information.
Since empirical obstacle boundary information has already been described above, refer to the description of operation 201 for its understanding.
Because historical obstacle boundary information can be obtained without manual annotation, training the boundary information network model on historical obstacle boundary information can reduce annotation costs. Because sample obstacle boundary information is obtained through manual annotation, in which a wide variety of obstacles can be chosen for annotation, sample obstacle boundary information can increase the diversity of boundary information, and training the boundary information network model on it can improve the model's performance and thus the effectiveness of obstacle detection.
As explained above, sample obstacle boundary information must be obtained through manual annotation; three manual annotation methods for obtaining sample obstacle boundary information are introduced below.
In one implementation, the sample obstacle boundary information is obtained by taking an ordered point set along the junction segment between the obstacle's lower edge in the image and the drivable road surface; or the sample obstacle boundary information is obtained by taking the junction segment between the lower edge of the obstacle's mask in the image and the drivable road surface; or the sample obstacle boundary information is generated by a simulation engine, where the scene image simulated by the simulation engine is an image containing obstacles.
It can be understood that the sample obstacle boundary information may be the occupied boundary of a sample obstacle, so the process of obtaining sample obstacle boundary information in this embodiment can be understood with reference to the descriptions of Figs. 6 to 9 above (the three manual annotation methods for obtaining the occupied boundaries of sample obstacles).
This implementation provides multiple feasible schemes for obtaining sample obstacle boundary information, making acquisition more flexible. Taking an ordered point set along the junction segment between the obstacle's lower edge in the image and the drivable road surface is simple and easy; obtaining the information from the junction segment between the lower edge of the obstacle's mask and the drivable road surface reuses the information in existing obstacle masks, so only the start and end points of the junction segment need to be annotated rather than every point, which improves annotation efficiency; and generating the information with a simulation engine requires no manual annotation and reduces annotation costs.
The characteristics of obstacle boundaries are described below.
In one implementation, the pixels occupied by the boundary of the at least one obstacle are continuous in a first direction.
The first direction may be the pixel-width direction of the image, which corresponds to the horizontal direction of the image; for example, the first direction may be the horizontal direction from point 1502 to point 1503 in Fig. 8.
It can be understood that if the pixels occupied by an obstacle's boundary were discontinuous in the first direction, many problems could follow.
For example, discontinuous boundary segments could lead a user (for example, a driver) to mistake them for the boundaries of multiple obstacles, and further to mistake the region between two segments for a drivable area, when in fact the region between the two segments is also obstacle, i.e., a non-drivable area.
As another example, obstacles usually have a certain volume, and discontinuous boundary segments make it difficult for the user to judge the obstacle's size in the first direction.
Given the problems caused by discontinuous obstacle boundaries, in the embodiments of the present application the pixels occupied by the boundary of the at least one obstacle are continuous in the first direction, which not only reflects the obstacle's size in the first direction well but also helps the user correctly identify the drivable area.
Taking Fig. 8 as an example, the car in Fig. 8 actually contacts the road surface at its four wheels, which are clearly scattered; if the four wheel positions were taken as the car's boundary when it serves as an obstacle, the user could mistake the regions between the wheels for drivable areas and would be unable to judge the obstacle's size in the horizontal direction.
In the embodiments of the present application, by contrast, the continuous junction line from point 1502 to point 1503 is taken as the car's boundary when it serves as an obstacle; the user can then judge the obstacle's size in the horizontal direction, estimate the obstacle's size, and treat the entire region along the boundary as a non-drivable area.
From the description of operation 301, the number of obstacles may be one or multiple. When there is one obstacle, one obstacle boundary is determined; when there are multiple obstacles, the number of determined obstacle boundaries falls into two cases.
First case: there are overlapping parts among the multiple obstacles. In this case, as explained for the training process above, multiple overlapping obstacles are annotated as one obstacle or one cluster of obstacles; correspondingly, the number of obstacle boundaries determined by the boundary information network model can be regarded as one, and this single boundary can be regarded as formed by connecting the individual obstacles' boundaries.
Second case: there is no overlapping part among the multiple obstacles.
The second case is introduced below, taking two obstacles as an example.
In one implementation, the at least one obstacle includes a first obstacle and a second obstacle; correspondingly, operation 302 includes: determining the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle is the empty set.
The first obstacle and the second obstacle may be of the same category or of different categories; the embodiments of the present application do not specifically limit this.
In the embodiments of the present application, when two obstacles have no overlapping part, the determined boundaries of the two obstacles are mutually independent, so the intersection of the pixels occupied by the two obstacles' boundaries is the empty set.
For example, Fig. 7 contains three roadblocks; taking two of them as the first obstacle and the second obstacle, the intersection of the pixels occupied by the determined boundaries of the two obstacles is the empty set.
It can be understood that different boundary information network models correspond to different specific processes for operation 302.
In one implementation, the boundary information network model is used to determine obstacle boundaries through semantic segmentation; correspondingly, operation 302 includes:
inputting the first image into the boundary information network model, and classifying each pixel of the first image using the empirical obstacle boundary information as categories;
processing the results of the classification to obtain the boundary of the at least one obstacle.
A classification result may be pedestrian, vehicle, lane, lane line, sidewalk, and so on.
In the embodiments of the present application, each pixel of the first image is classified using the empirical obstacle boundary information as categories, and the classification results are processed to obtain the boundary of the at least one obstacle, thereby obtaining obstacle boundaries through semantic segmentation.
It should be noted that different boundary information network models produce different types of output; usually, the boundary information network model outputs a heat map containing the obstacles' boundaries, and the obstacle boundaries can be determined from this heat map.
The specific process of determining obstacle boundaries from the heat map is described below.
Specifically, the image shown in Fig. 12 is input into the boundary information network model, which then outputs the heat map shown in Fig. 13, where the white line segments in Fig. 13 indicate the obstacles' boundaries; the obstacle boundaries can be determined from the heat map shown in Fig. 13.
In addition, the heat map shown in Fig. 13 can be post-processed to obtain the boundary (i.e., the occupied boundary) corresponding to each obstacle instance.
Specifically, in each column of the heat map shown in Fig. 13, only the lowest pixel whose value exceeds a preset threshold is kept, and the pixels at the other positions are set to zero; the pixels of the processed heat map can be regarded as a one-dimensional signal. Then, each dip of this one-dimensional signal, i.e., the boundary corresponding to each obstacle instance, is obtained through knee-point detection; see Fig. 14, where each dip represents the boundary corresponding to one obstacle instance.
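The column-wise post-processing just described can be sketched as follows (illustrative Python; a simple run-splitting step stands in for the knee-point detection mentioned above):

    import numpy as np

    def boundaries_from_heatmap(heatmap, threshold=0.5):
        # For each column, keep only the lowest pixel whose score exceeds
        # the threshold; the kept rows form a one-dimensional signal over
        # the image width, and each contiguous dip of that signal is taken
        # as one obstacle instance's occupied boundary.
        h, w = heatmap.shape
        signal = np.full(w, -1, dtype=int)  # -1 marks "no boundary here"
        for col in range(w):
            rows = np.where(heatmap[:, col] > threshold)[0]
            if rows.size:
                signal[col] = rows.max()    # lowest qualifying pixel
        instances, run = [], []
        for col in range(w):
            if signal[col] >= 0:
                run.append((col, signal[col]))
            elif run:
                instances.append(run)
                run = []
        if run:
            instances.append(run)
        return instances  # one ordered point set per obstacle instance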
In the embodiments of the present application, the obstacle's boundary can be used to determine the obstacle's position, thereby detecting the obstacle.
Moreover, compared with attribute information such as the shape, size, color, texture, material, and motion state of an obstacle, the boundary of an obstacle is a more stable and uniform attribute with better generality and generalization; specifically, the boundaries of different obstacles of the same category are highly similar, and the boundaries of obstacles of different categories also share a certain similarity.
For example, Fig. 4 contains many kinds of cars, such as trucks, vans, and sport utility vehicles (SUVs), all belonging to the same category. Although these cars vary in shape, size, color, material, and so on, for any object of the car category, the boundary it forms with the road surface falls roughly into three kinds: a straight line, a polyline bending to the left, and a polyline bending to the right. These three kinds of boundary can then be used to detect most cars serving as obstacles.
It follows that the boundary of an obstacle is indeed a rather stable and uniform attribute, and that the boundaries of different obstacles of the same category are highly similar, so detecting obstacles by determining their boundaries helps detect a larger number of obstacles.
As another example, as shown in Figs. 3 and 4, Fig. 3 contains a carton and Fig. 4 contains cars. Although cartons and cars belong to different categories, the boundary between a carton and the road surface resembles the boundary between a car and the road surface; both comprise the same three kinds: a straight line, a polyline bending to the left, and a polyline bending to the right. These three kinds of boundary can therefore be used to detect not only cars serving as obstacles but also cartons serving as obstacles.
It follows that the boundary of an obstacle is indeed a rather stable and uniform attribute, and that the boundaries of obstacles of different categories also share a certain similarity, so detecting obstacles by determining their boundaries helps detect a larger number of obstacles.
In summary, in the embodiments of the present application, detecting obstacles by determining their boundaries helps detect a larger number of obstacles and can improve the effectiveness of obstacle detection.
Operation 303: determine the size of the region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of obstacles in the image.
It can be understood that the obstacle's position can be determined from its boundary, thereby detecting the obstacle; but real obstacles all have a certain volume, so to represent the detected obstacles more intuitively and three-dimensionally, the embodiments of the present application determine the size of the region an obstacle occupies in the first image from the obstacle's boundary and the pixel height of obstacles in the image; correspondingly, operation 303 is optional.
The pixel height can be understood as a size along the vertical direction of the first image, but the pixel height is preset and has no direct relationship with the obstacle's actual height; it may be greater than the obstacle's height in the first image or smaller than it.
Taking the image in Fig. 12 as an example, after the obstacle boundaries are determined from the heat map in Fig. 13, each obstacle can be represented as a stixel (a column of pixels) whose base is the obstacle's boundary, with the effect shown in Fig. 15. As can be seen from Fig. 15, obstacles such as cartons and roadblocks are all represented by stixels, and the actual heights of these obstacles are unrelated to the stixel height; specifically, the heights of obstacles such as the cartons are less than the stixel height, while the heights of some roadblocks are greater than it.
In the embodiments of the present application, determining the size of the region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and the preset pixel height of obstacles in the image represents the obstacle more intuitively and three-dimensionally.
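As a final illustrative sketch (assuming the conventions of the previous snippets), the stixel representation can be realized by standing a column of preset pixel height on each boundary point; the height is a preset value with no direct relationship to the obstacle's actual height:

    import numpy as np

    def stixel_region(boundary_points, pixel_height, height, width):
        # Mark the image region occupied by an obstacle as a stixel: a
        # column of pixels of preset height standing on the boundary.
        region = np.zeros((height, width), dtype=bool)
        for col, row in boundary_points:
            top = max(0, row - pixel_height)
            region[top:row + 1, col] = True
        return region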
The obstacle detection method provided by the present application has been introduced above; the apparatus implementing the method is introduced below. Refer to Fig. 16, a schematic diagram of one embodiment of the obstacle detection apparatus in an embodiment of the present application.
One or more of the unit modules in Fig. 16 may be implemented by software, hardware, firmware, or a combination thereof. The software or firmware includes but is not limited to computer program instructions or code, which may be executed by a hardware processor. The hardware includes but is not limited to various integrated circuits, such as a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC).
The obstacle detection apparatus includes:
an acquiring unit 1201, configured to acquire a first image containing at least one obstacle;
a determining unit 1202, configured to determine the boundary of the at least one obstacle based on a boundary information network model, where the boundary of the at least one obstacle includes the boundary formed between an obstacle and the road surface.
Further, the boundary information network model is trained on empirical obstacle boundary information, which includes historical obstacle boundary information and/or sample obstacle boundary information.
Further, the sample obstacle boundary information is obtained by taking an ordered point set along the junction segment between the obstacle's lower edge in an image and the drivable road surface; or it is obtained by taking the junction segment between the lower edge of the obstacle's mask in an image and the drivable road surface; or it is generated by a simulation engine, the image being a scene image simulated by the simulation engine.
Further, the determining unit 1202 is specifically configured to: input the first image into the boundary information network model and classify each pixel of the first image using the empirical obstacle boundary information as categories; and process the classification results to obtain the boundary of the at least one obstacle.
Further, the pixels occupied by the boundary of the at least one obstacle are continuous in a first direction.
Further, the at least one obstacle includes a first obstacle and a second obstacle; the determining unit 1202 is specifically configured to determine the boundary of the first obstacle and the boundary of the second obstacle, where the intersection of the pixels occupied by the two boundaries is the empty set.
Further, the determining unit 1202 is further configured to determine the size of the region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of obstacles in the image.
Referring to Fig. 17, a schematic diagram of another embodiment of the obstacle detection apparatus in an embodiment of the present application.
The obstacle detection apparatus in the embodiments of the present application may be a device configured on a movable platform (for example, an automobile or a robot). The obstacle detection apparatus 1300 may vary considerably with configuration or performance and may include one or more processors 1301 and a memory 1302, where the memory 1302 stores programs or data.
The memory 1302 may be volatile storage or non-volatile storage. Optionally, the processor 1301 is one or more central processing units (CPUs); a CPU may be a single-core CPU or a multi-core CPU. The processor 1301 may communicate with the memory 1302 and execute a series of instructions from the memory 1302 on the obstacle detection apparatus 1300.
The obstacle detection apparatus 1300 also includes one or more wired or wireless network interfaces 1303, for example an Ethernet interface.
Optionally, although not shown in Fig. 17, the obstacle detection apparatus 1300 may also include one or more power supplies and one or more input/output interfaces, which may be used to connect a camera, a display, a mouse, a keyboard, a touch-screen device, a sensing device, or the like; the input/output interfaces are optional components that may or may not be present, which is not limited here.
For the procedures executed by the processor 1301 in the obstacle detection apparatus 1300 of this embodiment, refer to the method procedures described in the foregoing method embodiments; they are not repeated here.
The obstacle detection apparatus may be a vehicle with an obstacle detection function, or another component with an obstacle detection function. The obstacle detection apparatus includes but is not limited to: a vehicle-mounted terminal, a vehicle-mounted controller, a vehicle-mounted module, a vehicle-mounted modular unit, a vehicle-mounted component, a vehicle-mounted chip, a vehicle-mounted unit, a vehicle-mounted radar, or a vehicle-mounted camera or other such sensor; the vehicle can implement the method provided by the present application through any of these.
The obstacle detection apparatus may also be another intelligent terminal with an obstacle detection function other than a vehicle, or be set in such an intelligent terminal, or be set in a component of that intelligent terminal. The intelligent terminal may be other terminal equipment such as intelligent transportation equipment, smart home equipment, or a robot. The obstacle detection apparatus includes but is not limited to the intelligent terminal itself, or a controller, a chip, another sensor such as a radar or a camera, and other components within the intelligent terminal.
The obstacle detection apparatus may also be a general-purpose device or a special-purpose device. In specific implementations, the apparatus may also be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or another device with processing functions. The embodiments of the present application do not limit the type of the obstacle detection apparatus.
The obstacle detection apparatus may also be a chip or processor with processing functions, and the obstacle detection apparatus may include multiple processors. A processor may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. The chip or processor with processing functions may be set in a sensor, or may not be set in the sensor but at the receiving end of the sensor's output signal.
The embodiments of the present application also provide a system applied in unmanned driving or intelligent driving, which includes at least one of the obstacle detection apparatuses mentioned in the above embodiments of the present application and sensors such as a camera or a radar; at least one device in the system can be integrated into a whole machine or piece of equipment, or at least one device in the system can be set independently as an element or device.
Further, any of the above systems may interact with the vehicle's central controller to provide detection and/or fusion information for decision-making or control of the vehicle's driving.
The embodiments of the present application also provide a vehicle, which includes at least one of the obstacle detection apparatuses mentioned in the above embodiments of the present application or any of the above systems.
The embodiments of the present application also provide a chip including one or more processors, some or all of which are used to read and execute a computer program stored in a memory to perform the methods of the foregoing embodiments.
Optionally, the chip includes a memory, and the processor is connected to the memory through a circuit or a wire. Further optionally, the chip also includes a communication interface, and the processor is connected to the communication interface. The communication interface is used to receive data and/or information to be processed; the processor obtains the data and/or information from the communication interface, processes the data and/or information, and outputs the processing result through the communication interface. The communication interface may be an input/output interface.
In some implementations, some of the one or more processors may also implement some steps of the above methods by means of dedicated hardware; for example, processing involving a neural network model may be performed by a dedicated neural network processor or a graphics processor.
The method provided by the embodiments of the present application may be implemented by one chip or implemented collaboratively by multiple chips.
The embodiments of the present application also provide a computer storage medium for storing the computer software instructions used by the above computer device, which include a program designed for execution by the computer device.
The computer device may be the obstacle detection apparatus described in Fig. 16 above.
The embodiments of the present application also provide a computer program product, which includes computer software instructions that can be loaded by a processor to implement the procedures in the methods shown in the foregoing embodiments.
The embodiments of the present application also provide a vehicle, which includes the obstacle detection apparatus described in Fig. 16 above.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

Claims (19)

  1. An obstacle detection method, characterized by comprising:
    acquiring a first image, the first image containing at least one obstacle;
    determining a boundary of the at least one obstacle based on a boundary information network model;
    wherein the boundary of the at least one obstacle comprises a boundary formed between an obstacle and a road surface.
  2. The method according to claim 1, characterized in that the boundary information network model is trained on empirical obstacle boundary information, the empirical obstacle boundary information comprising historical obstacle boundary information and/or sample obstacle boundary information.
  3. The method according to claim 2, characterized in that
    the sample obstacle boundary information is obtained by taking an ordered point set along a junction segment between a lower edge of an obstacle in an image and a drivable road surface; or
    the sample obstacle boundary information is obtained by taking a junction segment between a lower edge of a mask of an obstacle in an image and the drivable road surface; or
    the sample obstacle boundary information is generated by a simulation engine, a scene image simulated by the simulation engine being an image containing an obstacle.
  4. The method according to any one of claims 1 to 3, characterized in that determining the boundary of the at least one obstacle based on the boundary information network model comprises:
    inputting the first image into the boundary information network model, and classifying each pixel of the first image using empirical obstacle boundary information as categories;
    processing the results of the classification to obtain the boundary of the at least one obstacle.
  5. The method according to any one of claims 1 to 4, characterized in that
    the pixels occupied by the boundary of the at least one obstacle are continuous in a first direction.
  6. The method according to any one of claims 1 to 5, characterized in that
    the at least one obstacle comprises a first obstacle and a second obstacle;
    determining the boundary of the at least one obstacle based on the boundary information network model comprises:
    determining the boundary of the first obstacle and the boundary of the second obstacle, an intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle being the empty set.
  7. The method according to any one of claims 1 to 6, characterized in that
    the method further comprises:
    determining a size of a region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of an obstacle in the image.
  8. An obstacle detection apparatus, characterized by comprising:
    an acquiring unit configured to acquire a first image, the first image containing at least one obstacle;
    a determining unit configured to determine a boundary of the at least one obstacle based on a boundary information network model, wherein the boundary of the at least one obstacle comprises a boundary formed between an obstacle and a road surface.
  9. The apparatus according to claim 8, characterized in that
    the boundary information network model is trained on empirical obstacle boundary information, the empirical obstacle boundary information comprising historical obstacle boundary information and/or sample obstacle boundary information.
  10. The apparatus according to claim 9, characterized in that
    the sample obstacle boundary information is obtained by taking an ordered point set along a junction segment between a lower edge of an obstacle in an image and a drivable road surface; or
    the sample obstacle boundary information is obtained by taking a junction segment between a lower edge of a mask of an obstacle in an image and the drivable road surface; or
    the sample obstacle boundary information is generated by a simulation engine, a scene image simulated by the simulation engine being an image containing an obstacle.
  11. The apparatus according to any one of claims 8 to 10, characterized in that the determining unit is specifically configured to:
    input the first image into the boundary information network model, and classify each pixel of the first image using empirical obstacle boundary information as categories;
    process the results of the classification to obtain the boundary of the at least one obstacle.
  12. The apparatus according to any one of claims 8 to 11, characterized in that
    the pixels occupied by the boundary of the at least one obstacle are continuous in a first direction.
  13. The apparatus according to any one of claims 8 to 12, characterized in that
    the at least one obstacle comprises a first obstacle and a second obstacle;
    the determining unit is specifically configured to:
    determine the boundary of the first obstacle and the boundary of the second obstacle, an intersection of the pixels occupied by the boundary of the first obstacle and the pixels occupied by the boundary of the second obstacle being the empty set.
  14. The apparatus according to any one of claims 8 to 13, characterized in that
    the determining unit is further configured to: determine a size of a region occupied by the at least one obstacle in the first image according to the boundary of the at least one obstacle and a preset pixel height of an obstacle in the image.
  15. An obstacle detection apparatus, characterized by comprising one or more processors and a memory, wherein
    the memory stores computer-readable instructions;
    the one or more processors are configured to read the computer-readable instructions so that the apparatus implements the method according to any one of claims 1 to 7.
  16. A computer program product, characterized in that, when the computer program product runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 7.
  17. A computer-readable storage medium, characterized by comprising computer-readable instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 7.
  18. A vehicle, characterized in that the vehicle comprises the apparatus according to any one of claims 8 to 14.
  19. A chip, characterized by comprising one or more processors, some or all of which are configured to read and execute a computer program stored in a memory to perform the method according to any one of claims 1 to 14.
PCT/CN2021/083741 2021-03-30 2021-03-30 Obstacle detection method and apparatus WO2022204905A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180003376.6A CN113841154A (zh) 2021-03-30 2021-03-30 Obstacle detection method and apparatus
PCT/CN2021/083741 WO2022204905A1 (zh) 2021-03-30 2021-03-30 Obstacle detection method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/083741 WO2022204905A1 (zh) 2021-03-30 2021-03-30 Obstacle detection method and apparatus

Publications (1)

Publication Number Publication Date
WO2022204905A1 true WO2022204905A1 (zh) 2022-10-06

Family

ID=78971731

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083741 WO2022204905A1 (zh) 2021-03-30 2021-03-30 一种障碍物检测方法及装置

Country Status (2)

Country Link
CN (1) CN113841154A (zh)
WO (1) WO2022204905A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115808929B (zh) * 2023-01-19 2023-04-14 Heduo Technology (Beijing) Co., Ltd. Vehicle simulation obstacle-avoidance method and apparatus, electronic device, and computer-readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101881615A (zh) * 2010-05-28 2010-11-10 Tsinghua University Visual obstacle detection method for driving safety
CN103413135A (zh) * 2013-07-31 2013-11-27 Neusoft Corporation Method, apparatus and system for detecting the light-dark boundary line of a vehicle headlamp
CN108470469A (zh) * 2018-03-12 2018-08-31 Hisense Group Co., Ltd. Road obstacle early-warning method, apparatus and terminal
CN109740484A (zh) * 2018-12-27 2019-05-10 Banma Network Technology Co., Ltd. Method, apparatus and system for road obstacle recognition
CN111899299A (zh) * 2020-06-16 2020-11-06 Fulaiwei Intelligent Robot Technology (Shanghai) Co., Ltd. Ground obstacle map marking method, mobile robot, and storage medium
US10867402B2 (en) * 2019-03-01 2020-12-15 Here Global B.V. System and method for determining distance to object on road

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016076449A1 (en) * 2014-11-11 2016-05-19 Movon Corporation Method and system for detecting an approaching obstacle based on image recognition
KR101795270B1 (ko) * 2016-06-09 2017-11-07 Hyundai Motor Company Method and apparatus for detecting the side of an object using ground boundary information of an obstacle
US10438082B1 (en) * 2018-10-26 2019-10-08 StradVision, Inc. Learning method, learning device for detecting ROI on the basis of bottom lines of obstacles and testing method, testing device using the same
US10311324B1 (en) * 2018-10-26 2019-06-04 StradVision, Inc. Learning method, learning device for detecting objectness by detecting bottom lines and top lines of nearest obstacles and testing method, testing device using the same
CN111666921B (zh) * 2020-06-30 2022-05-20 Tencent Technology (Shenzhen) Co., Ltd. Vehicle control method and apparatus, computer device, and computer-readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101881615A (zh) * 2010-05-28 2010-11-10 Tsinghua University Visual obstacle detection method for driving safety
CN103413135A (zh) * 2013-07-31 2013-11-27 Neusoft Corporation Method, apparatus and system for detecting the light-dark boundary line of a vehicle headlamp
CN108470469A (zh) * 2018-03-12 2018-08-31 Hisense Group Co., Ltd. Road obstacle early-warning method, apparatus and terminal
CN109740484A (zh) * 2018-12-27 2019-05-10 Banma Network Technology Co., Ltd. Method, apparatus and system for road obstacle recognition
US10867402B2 (en) * 2019-03-01 2020-12-15 Here Global B.V. System and method for determining distance to object on road
CN111899299A (zh) * 2020-06-16 2020-11-06 Fulaiwei Intelligent Robot Technology (Shanghai) Co., Ltd. Ground obstacle map marking method, mobile robot, and storage medium

Also Published As

Publication number Publication date
CN113841154A (zh) 2021-12-24

Similar Documents

Publication Publication Date Title
EP3462377B1 (en) Method and apparatus for identifying driving lane
US11217012B2 (en) System and method for identifying travel way features for autonomous vehicle motion control
CN111874006B Route planning processing method and apparatus
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
CN106980813B Gaze generation for machine learning
US11682137B2 (en) Refining depth from an image
CN107450529A Improved object detection for autonomous vehicles
US11610078B2 (en) Low variance region detection for improved high variance region detection using machine learning
US11605236B2 (en) Training a machine-learned model to detect low variance regions
US20200126244A1 (en) Training method for detecting vanishing point and method and apparatus for detecting vanishing point
CA3160671A1 (en) Generating depth from camera images and known depth data using neural networks
GB2609060A (en) Machine learning-based framework for drivable surface annotation
CN116601667A System and method for 3D object detection and tracking with a monocular surveillance camera
CN111971725A Method for determining a lane-change instruction for a vehicle, computer-readable storage medium, and vehicle
Dwivedi et al. Bird's Eye View Segmentation Using Lifted 2D Semantic Features.
Yebes et al. Learning to automatically catch potholes in worldwide road scene images
Karkera et al. Autonomous bot using machine learning and computer vision
Kemsaram et al. An integrated framework for autonomous driving: object detection, lane detection, and free space detection
WO2022204905A1 (zh) 一种障碍物检测方法及装置
WO2022082571A1 Lane line detection method and apparatus
Martinek et al. Lidar-based deep neural network for reference lane generation
EP4113377A1 (en) Use of dbscan for lane detection
CN116311216A Three-dimensional object detection
CN114913329A Image processing method, and training method and apparatus for a semantic segmentation network
CN114972731A Traffic light detection and recognition method and apparatus, mobile tool, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21933590

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21933590

Country of ref document: EP

Kind code of ref document: A1