WO2020103892A1 - Lane line detection method and apparatus, electronic device, and readable storage medium - Google Patents

Lane line detection method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2020103892A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
probability
road surface
surface image
neural network
Prior art date
Application number
PCT/CN2019/119886
Other languages
English (en)
Chinese (zh)
Inventor
孙鹏
程光亮
石建萍
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司
Priority to JP2021525040A (published as JP2022506920A)
Priority to KR1020217015000A (published as KR20210080459A)
Publication of WO2020103892A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration

Definitions

  • Embodiments of the present disclosure relate to computer technology, and in particular, to a lane line detection method, device, electronic device, and readable storage medium.
  • Assisted driving and automatic driving are two important technologies in the field of intelligent driving.
  • by detecting lane lines, the distance between vehicles can be reduced, the occurrence of traffic accidents can be reduced, and the physical and mental burden on the driver can be eased; lane line detection therefore plays an important role in the field of intelligent driving.
  • An embodiment of the present disclosure provides a technical solution for lane line detection.
  • An aspect of an embodiment of the present disclosure provides a lane line detection method, including:
  • a road surface image collected by a vehicle-mounted device installed on a vehicle is obtained;
  • the road surface image is input into a neural network, and M probability maps corresponding to the road surface image are output via the neural network;
  • the M probability maps include N lane line probability maps and M-N non-lane-line probability maps;
  • the N lane line probability maps respectively correspond to N lane lines on the road surface and are used to represent the probability that each pixel in the road surface image belongs to the corresponding lane line;
  • the M-N non-lane-line probability maps correspond to the non-lane-line regions of the road surface and are used to represent the probability that each pixel in the road surface image belongs to a non-lane-line region, where N is a positive integer and M is an integer greater than N;
  • the lane lines in the road surface image are determined according to the lane line probability maps.
  • a lane line detection device, including:
  • a first obtaining module, used to obtain a road surface image collected by a vehicle-mounted device installed on a vehicle;
  • a second acquisition module, used to input the road surface image into a neural network and output, via the neural network, M probability maps corresponding to the road surface image;
  • the M probability maps include N lane line probability maps and M-N non-lane-line probability maps;
  • the N lane line probability maps respectively correspond to N lane lines on the road surface and are used to represent the probability that each pixel in the road surface image belongs to the corresponding lane line;
  • the M-N non-lane-line probability maps correspond to the non-lane-line regions of the road surface and are used to represent the probability that each pixel in the road surface image belongs to a non-lane-line region, where N is a positive integer and M is an integer greater than N;
  • a first determining module, configured to determine the lane lines in the road surface image according to the lane line probability maps.
  • a driving control method including:
  • the driving control device acquires the lane line detection result of a road surface image, the lane line detection result being obtained by the lane line detection method described in any one of the above embodiments;
  • the driving control device outputs prompt information and/or performs intelligent driving control on the vehicle according to the lane line detection result.
  • a driving control device, including:
  • an acquisition module, for acquiring the lane line detection result of a road surface image, the lane line detection result being obtained by the lane line detection method described in any of the above embodiments;
  • a driving control module, configured to output prompt information and/or perform intelligent driving control on the vehicle according to the lane line detection result.
  • an electronic device including:
  • a memory, used to store program instructions;
  • a processor, configured to call and execute the program instructions in the memory to perform the method steps described in any one of the foregoing embodiments.
  • an intelligent driving system, including: a camera, an electronic device according to any of the above embodiments, and a driving control device according to any of the above embodiments, connected in communication, where the camera is used to capture road surface images.
  • a readable storage medium stores a computer program, and the computer program is used to execute the method steps described in any one of the foregoing embodiments.
  • the lane line detection method and device, electronic device, and readable storage medium use a neural network, trained with road surface training images that include lane line and/or non-lane-line annotation information, to obtain probability maps representing the probability that each pixel in the road surface image belongs to the corresponding lane line;
  • the lane lines in the road surface image are then determined according to the lane line probability maps, so that accurate lane line detection results can be obtained even in scenes of high complexity;
  • moreover, the M probability maps in this embodiment include non-lane-line probability maps, that is, a non-lane-line category is added in addition to the lane line categories, which improves the accuracy of road image segmentation and thereby the accuracy of the lane line detection results.
  • FIG. 2 is a schematic flowchart of an embodiment of a lane line detection method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of another embodiment of a lane line detection method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of another embodiment of a lane line detection method according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of a convolutional neural network corresponding to this example.
  • FIG. 6 is a schematic flowchart of still another embodiment of a lane line detection method according to an embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart of another embodiment of a lane line detection method according to an embodiment of the present disclosure.
  • FIG. 8 is a module structure diagram of an embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 9 is a module structure diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 10 is a block diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 11 is a block diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 12 is a block diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 13 is a block diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 14 is a block diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 15 is a module structure diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 16 is a block diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • FIG. 17 is a physical block diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 18 is a schematic flowchart of a driving control method provided by an embodiment of the present disclosure.
  • FIG. 19 is a schematic structural diagram of a driving control device provided by an embodiment of the present disclosure.
  • FIG. 20 is a schematic diagram of an intelligent driving system provided by an embodiment of the present disclosure.
  • FIG. 21 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
  • "a plurality" may refer to two or more, and "at least one" may refer to one, two, or more than two.
  • the terms "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, or modules, and represent neither any specific technical meaning nor any necessary logical order between them.
  • the term "and/or" describes an association relationship between associated objects and indicates that three kinds of relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist at the same time, or B exists alone.
  • the character "/" in the present disclosure generally indicates that the associated objects before and after it are in an "or" relationship.
  • the embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, servers, vehicle-mounted devices, etc., which can operate together with many other general-purpose or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with terminal devices, computer systems, servers, vehicle-mounted devices, and other electronic devices include, but are not limited to: vehicle-mounted devices, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems.
  • Electronic devices such as terminal devices, computer systems, servers, in-vehicle devices, etc. may be described in the general context of computer system executable instructions (such as program modules) executed by the computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
  • the computer system / server can be implemented in a distributed cloud computing environment, where tasks are performed by remote processing devices linked through a communication network.
  • program modules may be located on local or remote computing system storage media including storage devices.
  • An embodiment of the present disclosure proposes a lane line detection method.
  • a neural network trained with a large amount of labeled data is used to obtain probability maps indicating, for each pixel in the road image, the probability of belonging to each lane line, and the lane lines in the road image are determined according to these probability maps;
  • this end-to-end method can obtain accurate lane line detection results not only in simple scenes, such as those with good weather and lighting conditions, but also in scenes of high complexity, such as rain, night, and tunnels.
  • the neural network in the embodiments of the present disclosure may be a multi-layer neural network (i.e., a deep neural network), for example a multi-layer convolutional neural network such as LeNet, AlexNet, GoogLeNet, VGG, ResNet, or any other neural network model.
  • each neural network may be of the same type and structure as the others, or of a different type and/or structure; the embodiments of the present disclosure do not limit this.
  • FIG. 1 is a schematic diagram of a scene of a lane line detection method provided by an embodiment of the present disclosure.
  • this method can be applied to vehicles equipped with in-vehicle devices.
  • the vehicle-mounted device may be a device with a shooting function such as a camera or a driving recorder installed on the vehicle.
  • the road surface image is collected by the vehicle-mounted device on the vehicle, and the lane line on the road surface where the vehicle is located is detected based on the method of the embodiment of the present disclosure, so that the detection result can be applied to assisted driving or automatic driving of the vehicle.
  • FIG. 2 is a schematic flowchart of an embodiment of a lane line detection method according to an embodiment of the present disclosure. As shown in FIG. 2, the method includes:
  • the vehicle-mounted device installed on the vehicle can collect the road surface image on the road surface of the vehicle in real time, and further, the road surface image collected by the vehicle-mounted device can be continuously input into the neural network to obtain continuously updated lane line detection results.
  • S201 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the first obtaining module 801 run by the processor.
  • N is a positive integer
  • M is an integer greater than N.
  • the above neural network may include, but is not limited to, a convolutional neural network.
  • the neural network is pre-trained using a road surface training image set including lane line and/or non-lane-line annotation information.
  • the road training image set includes a large number of training images.
  • each training image is obtained by collecting an actual road image and annotating it.
  • because the neural network is obtained by supervised training on images collected in rich scenes, the trained neural network can obtain accurate lane line detection results not only in simple scenes, such as daytime scenes with good weather and lighting conditions, but also in scenes of high complexity, such as rain, night, and tunnels.
  • the above non-lane line may refer to a portion of the road surface of the vehicle other than the lane line, and may also be referred to as a road surface background.
  • roads other than lane lines, cars on the road, plants on the side of the road, etc. all belong to the category of road background.
  • the aforementioned M may be equal to 5, and the aforementioned N may be equal to 4. That is, it can be considered that there are 4 lane lines on the road surface of the vehicle, and the neural network can output 5 probability maps. Among them, there are 4 lane line probability maps in the 5 probability maps, which respectively correspond to 4 lane lines on the road surface. That is, the four lane line probability maps correspond one-to-one to the four lane lines on the road surface. In addition, there is one non-lane line probability map in the five probability maps, which corresponds to the non-lane line on the road surface.
  • alternatively, the aforementioned M may be equal to 3 and the aforementioned N equal to 2; that is, there are 2 lane lines on the road surface of the vehicle.
  • in that case, the above neural network outputs 3 probability maps, 2 of which are lane line probability maps that correspond one-to-one to the 2 lane lines on the road surface, while the remaining probability map corresponds to the non-lane-line region.
  • for example, suppose the four lane lines on the road surface are lane line 1, lane line 2, lane line 3, and lane line 4, ordered from the left side to the right side of the vehicle, and that the four lane line probability maps among the above five probability maps are probability map 1, probability map 2, probability map 3, and probability map 4.
  • the correspondence between the lane line probability maps and the lane lines may then be as shown in Table 1 below:

    Table 1
    Lane line probability map | Corresponding lane line
    probability map 1 | lane line 1
    probability map 2 | lane line 2
    probability map 3 | lane line 3
    probability map 4 | lane line 4

  • that is, probability map 1 output by the above neural network corresponds to lane line 1, probability map 2 corresponds to lane line 2, and so on.
  • Table 1 is only an example of the correspondence between the lane line probability maps and the lane lines; in practice, this correspondence can be set flexibly as needed, and the embodiments place no specific restriction on it.
  • probability map 1 identifies the probability that each pixel in the road surface image belongs to lane line 1.
  • for example, a 200 × 200 matrix can be output, where the value of each element in the matrix is the probability that the corresponding pixel belongs to lane line 1.
  • for instance, if the value in the first row and first column is 0.4, the probability that the pixel in the first row and first column of the road image belongs to lane line 1 is 0.4.
  • the matrix output by the neural network can thus be expressed in the form of a lane line probability map.
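  • To make the output format concrete, the following is a minimal Python/NumPy sketch, assuming a 200 × 200 road image, M = 5 probability maps, and N = 4 lane lines as in the example above (the array layout is an assumption for illustration):

```python
import numpy as np

M, N, H, W = 5, 4, 200, 200
prob_maps = np.random.rand(M, H, W)   # stand-in for the M maps output by the network

lane_maps = prob_maps[:N]             # N lane line probability maps
background_maps = prob_maps[N:]       # M - N non-lane-line probability map(s)

# probability that the pixel in the first row and first column belongs to lane line 1:
p = lane_maps[0, 0, 0]
```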
  • S202 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the second acquisition module 802 run by the processor.
  • the probability that each pixel in the road image belongs to each lane line can be determined, and based on these probabilities, the lane line in the road image can be determined.
  • S203 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the first determining module 803 run by the processor.
  • the N lane line probability maps output by the neural network respectively correspond to the N lane lines on the road surface.
  • for each lane line probability map, some pixels can be selected according to preset conditions and fitted to the lane line corresponding to that probability map, so as to obtain the N lane lines.
  • a neural network trained with road training images including lane line and/or non-lane-line annotation information is used to obtain probability maps of each pixel in the road image belonging to the corresponding lane line, and the lane lines in the road surface image are determined according to the lane line probability maps, so that accurate lane line detection results can be obtained even in scenes of high complexity.
  • the M probability maps in this embodiment include non-lane-line probability maps, that is, a non-lane-line category is added in addition to the lane line categories, which improves the accuracy of road image segmentation and thereby the accuracy of the lane line detection results.
  • the N probability maps among the M probability maps correspond to the N lane lines on the road surface.
  • the Lth lane line probability map among the N lane line probability maps corresponds to the Lth lane line, where L is any integer greater than or equal to 1 and less than or equal to N; that is, the Lth lane line probability map is any one of the N lane line probability maps.
  • the Lth lane line may be fitted based on a plurality of pixels whose probability value in the Lth lane line probability map is greater than or equal to a preset threshold;
  • that is, a plurality of pixels with probability values greater than or equal to the preset threshold are fitted to obtain the Lth lane line.
  • in the Lth lane line probability map, each pixel has a probability value; if the probability value is greater than or equal to the preset threshold, the probability that the pixel belongs to the Lth lane line is relatively large.
  • the selected pixels can then be processed, for example by computing the maximum connected component, and the lane line can be fitted based on the maximum connected component, so that the lane line in the road image is obtained, as sketched below.
  • the preset threshold may be 0.5, for example.
  • suppose the Lth lane line probability map includes the probability values of three pixels, where pixel A has probability value 0.5, pixel B has 0.6, and pixel C has 0.2; the probability values of pixels A and B are greater than or equal to the preset threshold, so the Lth lane line can be fitted through pixels A and B.
  • if the Lth lane line probability map does not contain multiple pixels with probability values greater than or equal to the preset threshold, it means that the Lth lane line corresponding to that probability map does not exist in the current road surface image.
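  • The fitting step above can be sketched as follows; the thresholding and maximum connected component follow the description, while the quadratic polynomial fit is an assumption, since the embodiments do not fix a particular fitting model:

```python
import numpy as np
from scipy import ndimage

def fit_lane(prob_map, threshold=0.5):
    """Fit one lane line from its probability map; return None if it is absent."""
    mask = prob_map >= threshold                  # pixels likely on this lane line
    labels, n = ndimage.label(mask)               # connected components of the mask
    if n == 0:
        return None                               # lane line absent from this image
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)    # maximum connected component
    ys, xs = np.nonzero(largest)
    if len(ys) < 3:
        return None                               # too few pixels to fit a curve
    return np.polyfit(ys, xs, 2)                  # coefficients of x = f(y)
```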
  • when the probability values of a first pixel in multiple lane line probability maps are all greater than or equal to the preset threshold, the first pixel is used as a pixel for fitting the first lane line, where the first lane line is the lane line corresponding to the probability map with the maximum probability value among those multiple probability values.
  • for example, suppose the neural network outputs a total of 4 lane line probability maps, and the probability value of the first pixel is 0.5 in the first lane line probability map, 0.6 in the second, 0.7 in the third, and 0.2 in the fourth; the probabilities of the first pixel in the first, second, and third lane line probability maps are all greater than or equal to the preset threshold, so the first pixel is used as a pixel for fitting the third lane line, which has the maximum probability value of 0.7.
  • the M-N probability maps among the M probability maps correspond to the non-lane-line regions of the road surface.
  • optionally, the Sth non-lane-line probability map among the M-N non-lane-line probability maps corresponds to a non-lane-line region, where S is any integer greater than or equal to 1 and less than or equal to M-N; that is, the Sth non-lane-line probability map is any one of the M-N non-lane-line probability maps.
  • the non-lane-line region may be determined based on a plurality of pixels whose probability value in the Sth non-lane-line probability map is greater than or equal to a preset threshold;
  • that is, the non-lane-line region is determined according to a plurality of pixels with probability values greater than or equal to the preset threshold.
  • in the Sth non-lane-line probability map, each pixel has a probability value; if the probability value is greater than or equal to the preset threshold, the probability that the pixel belongs to a non-lane-line region is relatively large.
  • the selected pixels can then be processed, for example by computing the maximum connected component, to obtain the non-lane-line region of the road surface image.
  • the preset threshold may be 0.5, for example.
  • suppose the Sth non-lane-line probability map includes the probability values of three pixels, where pixel A has probability value 0.5, pixel B has 0.6, and pixel C has 0.2; the probability values of pixels A and B are greater than or equal to the preset threshold, so the non-lane-line region in the road surface image can be determined from pixels A and B.
  • in addition, according to the lane line to which each pixel in the road surface image belongs, the color of that pixel may be adjusted to the color corresponding to that lane line, so as to improve the visual effect.
  • FIG. 3 is a schematic flowchart of another embodiment of a lane line detection method according to an embodiment of the present disclosure. As shown in FIG. 3, the above method further includes:
  • the M probability maps each correspond to one lane line or to the non-lane-line region; after the M probability maps have been used to fit and determine the lane lines, the M probability maps can be fused into a target probability map.
  • the target probability map contains the information of each lane line and the information of the non-lane-line regions.
  • S301 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the fusion module 804 run by the processor.
  • the first lane line probability map is any one of the N lane line probability maps, and the pixels corresponding to the first lane line probability map are the pixels that make up the lane line fitted from that probability map.
  • S302 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the adjustment module 805 run by the processor.
  • after the pixels constituting the lane line corresponding to the first lane line probability map are determined, the pixel value of each of those pixels in the fused target probability map is set to the color corresponding to that lane line.
  • the color corresponding to each lane line may be set in advance; for example, if there are 4 lane lines on the road surface, their colors may be set to red, yellow, blue, and purple respectively, and after the target probability map is obtained, the pixels constituting each lane line are set to the corresponding color, yielding 4 lane lines displayed in red, yellow, blue, and purple.
  • in this way, the user in the vehicle can view the lane lines on the road surface more intuitively and clearly, which enhances the user experience.
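  • A sketch of this fusion-and-coloring step, reusing assign_pixels from the sketch above and the red/yellow/blue/purple example; the exact RGB values are assumptions:

```python
import numpy as np

LANE_COLORS = {1: (255, 0, 0),    # red
               2: (255, 255, 0),  # yellow
               3: (0, 0, 255),    # blue
               4: (128, 0, 128)}  # purple

def colorize(lane_ids):
    """lane_ids: (H, W) lane-id map from assign_pixels(); 0 means non-lane-line."""
    canvas = np.zeros(lane_ids.shape + (3,), dtype=np.uint8)
    for lane, color in LANE_COLORS.items():
        canvas[lane_ids == lane] = color          # paint this lane line's pixels
    return canvas
```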
  • this embodiment relates to the process of obtaining the lane line probability maps via the neural network.
  • FIG. 4 is a schematic flowchart of another embodiment of a lane line detection method according to an embodiment of the present disclosure. As shown in FIG. 4, the above step S202 includes:
  • S401 Extract low-level feature information of the M channels of the road surface image through at least one convolutional layer of the neural network.
  • the convolutional layer can reduce the resolution of the road surface image and retain the low-level features of the road surface image.
  • the low-level feature information of the road surface image may include edge information, straight line information, and curve information in the image.
  • the M channels of the above road surface image each correspond to one lane line category; for example, assuming there are 4 lane lines on the road surface, there are 5 categories, namely lane line 1, lane line 2, lane line 3, lane line 4, and non-lane line.
  • S401 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the first obtaining unit 8021 run by the processor.
  • the high-level feature information of the M channels of the road surface image extracted through the residual extraction layer includes semantic features, contours, and overall structure.
  • S402 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the second obtaining unit 8022 run by the processor.
  • the image can be restored to the original size of the image input to the neural network.
  • S403 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the third obtaining unit 8023 run by the processor.
  • the neural network may further include a normalization layer after the above upsampling layer.
  • the normalization layer normalizes the result of the upsampling process and outputs the above lane line probability maps.
  • specifically, the feature maps of the road surface image are obtained after the upsampling process, and the value of each pixel in each feature map is normalized so that it lies in the range 0 to 1, thereby obtaining the probability maps.
  • one normalization method is: first determine the maximum pixel value in the feature map, and then divide the value of each pixel by that maximum, so that the value of each pixel in the feature map lies in the range 0 to 1.
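  • The divide-by-maximum method above can be sketched in a few lines; it assumes the feature map values are non-negative (for example, after a ReLU activation):

```python
import numpy as np

def normalize_feature_map(fmap):
    """Scale a non-negative feature map so every value lies in the range 0 to 1."""
    peak = fmap.max()
    return fmap / peak if peak > 0 else fmap
```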
  • this embodiment relates to the training process for establishing the above neural network.
  • the neural network involved in the embodiments of the present disclosure may be a convolutional neural network, which may include convolutional layers, residual extraction layers, upsampling layers, and a normalization layer.
  • the order of the convolutional layers and the residual extraction layers can be set flexibly as needed, and the number of each type of layer can also be set flexibly as needed.
  • for example, the above convolutional neural network may include any number from 6 to 10 of connected convolutional layers, any number from 7 to 12 of connected residual extraction layers, and any number from 1 to 4 of connected upsampling layers.
  • when a convolutional neural network with this specific structure is used for lane line detection, it can meet the requirements of lane line detection in multiple scenes or complex scenes, making the detection results more robust.
  • optionally, the convolutional neural network may include 8 connected convolutional layers, 9 connected residual extraction layers, and 2 connected upsampling layers.
  • FIG. 5 is a schematic diagram of the structure of the convolutional neural network corresponding to this example; as shown in FIG. 5, after the road surface image is input, it passes through the 8 consecutive convolutional layers, then the 9 consecutive residual extraction layers, then the 2 consecutive upsampling layers, and finally a normalization layer, which outputs the lane line probability maps.
  • each of the foregoing residual extraction layers may include 256 filters: 128 filters of size 3 × 3 and 128 filters of size 1 × 1.
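  • The structure of FIG. 5 can be sketched in PyTorch as follows; the channel widths, strides, and final softmax normalization are assumptions for illustration, since the embodiments fix only the layer counts and the 128 + 128 filter split of the residual extraction layers:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual extraction layer with 256 filters: 128 of size 3 x 3 and 128 of
    size 1 x 1, concatenated (one plausible reading of the filter split above)."""
    def __init__(self, channels=256):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, 128, kernel_size=3, padding=1)
        self.conv1 = nn.Conv2d(channels, 128, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat([self.conv3(x), self.conv1(x)], dim=1)  # 128 + 128 channels
        return self.relu(out + x)                               # residual connection

class LaneNet(nn.Module):
    """8 convolutional layers, 9 residual extraction layers, 2 upsampling layers,
    then normalization, as in the example above."""
    def __init__(self, m_maps=5):
        super().__init__()
        layers, in_ch = [], 3
        for i, out_ch in enumerate([32, 32, 64, 64, 128, 128, 256, 256]):
            stride = 2 if i in (1, 3) else 1       # downsample twice overall
            layers += [nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.convs = nn.Sequential(*layers)
        self.res = nn.Sequential(*[ResidualBlock(256) for _ in range(9)])
        self.up = nn.Sequential(                   # restore the input resolution
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, m_maps, 4, stride=2, padding=1))

    def forward(self, x):
        x = self.up(self.res(self.convs(x)))
        return torch.softmax(x, dim=1)             # per-pixel probabilities over M maps

probs = LaneNet()(torch.randn(1, 3, 256, 256))     # -> (1, 5, 256, 256)
```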
  • the above road surface training image set may be used to train the above neural network.
  • FIG. 6 is a schematic flowchart of still another embodiment of a lane line detection method provided by an embodiment of the present disclosure. As shown in FIG. 6, the training process of the above neural network may be:
  • S601 Input a training image included in the road surface training image set into the neural network to obtain a predicted lane line probability map of the training image.
  • the above predicted lane line probability map is the current lane line probability map output by the neural network.
  • S602 Fit the predicted lane line of the training image according to a plurality of pixel points with a probability value greater than or equal to a preset threshold value included in the predicted lane line probability map.
  • S603 Acquire the loss between the predicted lane line of the training image and the lane line in the truth map of the lane line of the training image.
  • the above lane line truth map is obtained based on the label information of the lane line of the training image.
  • the loss between the predicted lane line and the lane line in the lane line truth map can be calculated by using a loss function.
  • S604 Adjust the network parameters of the neural network according to the loss.
  • the network parameters of the neural network may include the convolution kernel sizes and weight information.
  • the above loss can be back-propagated through the neural network by means of gradient back-propagation, so as to adjust the network parameters of the neural network.
  • the neural network may be trained with one training image at a time, or the neural network may be trained with multiple training images at a time.
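  • A hedged sketch of one training step follows; the embodiments compute the loss between the fitted predicted lane lines and the lane line truth map, and here a per-pixel negative log-likelihood against an M-class truth map stands in for that loss, which is an assumption:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, images, truth_maps):
    """images: (B, 3, H, W) road images; truth_maps: (B, H, W) class ids in 0..M-1."""
    optimizer.zero_grad()
    probs = model(images)                          # (B, M, H, W) probability maps
    loss = F.nll_loss(torch.log(probs + 1e-8), truth_maps)
    loss.backward()                                # gradient back-propagation
    optimizer.step()                               # adjust the network parameters
    return loss.item()
```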
  • FIG. 7 is a schematic flowchart of another embodiment of a lane line detection method according to an embodiment of the present disclosure. As shown in FIG. 7, before training the neural network, the method further includes:
  • the above multiple scenes may include, but are not limited to, at least two scenes of daytime scenes, rainy scenes, foggy scenes, straight road scenes, curved road scenes, tunnel scenes, strong light scenes, and night scenes.
  • S801-S802 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the collection module 806 run by the processor.
  • vehicle-mounted equipment such as the camera on the vehicle can be used to collect road surface images in each of the above scenes.
  • the lane lines in the collected road surface images can then be marked, for example by manual labeling, to obtain the training images for each scene.
  • the training images obtained through this process cover the various scenes encountered in practice, so a neural network trained with these images is highly robust for lane line detection in various scenarios, with short detection time and high accuracy of the detection results.
  • before being input to the neural network, the above road surface image may be de-distorted to further improve the accuracy of the neural network's output.
  • in addition, each pixel belonging to a lane line in the road surface image can be coordinate-mapped to obtain lane line information in the world coordinate system, and assisted driving or autonomous driving can then be performed based on the obtained lane line information in the world coordinate system.
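  • Both optional steps can be sketched with OpenCV; the camera matrix K, distortion coefficients dist, and ground-plane homography H below are hypothetical placeholders that would come from camera calibration:

```python
import cv2
import numpy as np

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])  # intrinsics
dist = np.array([-0.3, 0.1, 0., 0., 0.])  # radial/tangential distortion coefficients
H = np.eye(3)                              # image plane -> road (world) plane

def undistort(image):
    """De-distort the road surface image before feeding it to the neural network."""
    return cv2.undistort(image, K, dist)

def pixels_to_world(lane_pixels):
    """Map an (N, 2) array of lane line pixel coordinates into world coordinates."""
    pts = lane_pixels.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```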
  • FIG. 8 is a module structure diagram of an embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • the lane line detection device of the embodiment of the present disclosure may be used to implement the above embodiments of the lane line detection method of the present disclosure. As shown in Figure 8, the device includes:
  • the first obtaining module 801 is used to obtain the road surface image collected by the vehicle-mounted device installed on the vehicle.
  • the second acquisition module 802 is configured to input the road surface image into a neural network, and to output, via the neural network, M probability maps corresponding to the road surface image.
  • the M probability maps include N lane line probability maps and M-N non-lane-line probability maps; the N lane line probability maps respectively correspond to N lane lines on the road surface and are used to represent the probability that each pixel in the road surface image belongs to the corresponding lane line; the M-N non-lane-line probability maps correspond to the non-lane-line regions of the road surface and are used to represent the probability that each pixel in the road surface image belongs to a non-lane-line region, where N is a positive integer and M is an integer greater than N.
  • the first determining module 803 is configured to determine the lane lines in the road surface image according to the lane line probability maps.
  • FIG. 10 is a module structure diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure. As shown in FIG. 10, the first determination module 803 further includes:
  • the first determining unit 8032 is configured to, when the probability values of a first pixel in multiple lane line probability maps are all greater than or equal to a preset threshold, use the first pixel as a pixel for fitting the first lane line, where the first lane line is the lane line corresponding to the probability map with the maximum probability value among those probability values.
  • FIG. 11 is a module structure diagram of Embodiment 4 of a lane line detection device according to an embodiment of the present disclosure.
  • the first determination module 803 further includes: a third determining unit 8033, used to determine a non-lane-line region according to a plurality of pixels in the Sth non-lane-line probability map whose probability value is greater than or equal to a preset threshold, where the Sth non-lane-line probability map is any one of the M-N non-lane-line probability maps.
  • FIG. 12 is a module structure diagram of Embodiment 5 of a lane line detection device provided by an embodiment of the present disclosure. As shown in FIG. 12, it further includes: a fusion module 804, configured to perform fusion processing on the M probability maps to obtain a target probability map.
  • the adjustment module 805 is configured to adjust, in the target probability map, the pixel values of the pixels corresponding to the first lane line probability map to a preset pixel value corresponding to the first lane line probability map.
  • the first lane line probability map is any one of the N lane line probability maps, and the pixels corresponding to the first lane line probability map are the pixels that make up the lane line fitted from that probability map.
  • FIG. 13 is a module structure diagram of yet another embodiment of a lane line detection device provided by an embodiment of the present disclosure.
  • the second acquisition module 802 includes: a first obtaining unit 8021, used to extract the low-level feature information of the M channels of the road surface image through at least one convolutional layer of the neural network;
  • a second obtaining unit 8022, configured to extract the high-level feature information of the M channels of the road surface image, based on the M-channel low-level feature information, through at least one residual extraction layer of the neural network;
  • a third obtaining unit 8023, configured to upsample the high-level feature information of the M channels through at least one upsampling layer of the neural network, to obtain M probability maps of the same size as the road surface image.
  • the at least one convolutional layer includes any number from 6 to 10 of connected convolutional layers;
  • the at least one residual extraction layer includes any number from 7 to 12 of connected residual extraction layers;
  • the at least one upsampling layer includes any number from 1 to 4 of connected upsampling layers.
  • optionally, the lane line detection device further includes: a training module (not shown in the figure), used for supervised training of the neural network with a road training image set including lane line and/or non-lane-line annotation information.
  • the training module is configured to: input the training images included in the road surface training image set into the neural network to obtain predicted lane line probability maps of the training images; fit the predicted lane lines of the training images according to a plurality of pixels whose probability values in the predicted lane line probability maps are greater than or equal to a preset threshold; acquire the loss between the predicted lane lines of the training images and the lane lines in the lane line truth maps of the training images, where the lane line truth maps are obtained based on the labeling information of the lane lines of the training images; and adjust the network parameters of the neural network according to the loss.
  • FIG. 14 is a module structure diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure. As shown in FIG. 14, it further includes: a collection module 806, used to collect road surface images in multiple scenes and to annotate the lane lines in those road surface images to obtain training images.
  • the plurality of scenes may include, but is not limited to, at least two of: rainy scenes, foggy scenes, straight road scenes, curved road scenes, tunnel scenes, strong light scenes, and night scenes.
  • FIG. 15 is a module structure diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure. As shown in FIG. 15, it further includes: a preprocessing module 807, configured to perform de-distortion processing on the road surface image.
  • FIG. 16 is a module structure diagram of another embodiment of a lane line detection device provided by an embodiment of the present disclosure. As shown in FIG. 16, it further includes: a mapping module 808, used to map the lane lines in the road surface image into the world coordinate system, to obtain the positions of the lane lines of the road surface image in the world coordinate system.
  • FIG. 17 is a physical block diagram of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 17, the electronic device 1700 includes:
  • the memory 1701 is used to store program instructions.
  • the processor 1702 is configured to call and execute program instructions in the memory 1701 to execute the method steps described in any embodiment of the present disclosure.
  • FIG. 18 is a schematic flowchart of a driving control method provided by an embodiment of the present disclosure. Based on the foregoing embodiment, an embodiment of the present disclosure also provides a driving control method, including:
  • the driving control device acquires the lane line detection result of the road surface image.
  • S1801 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the obtaining module 1901 run by the processor.
  • the driving control device outputs prompt information according to the lane line detection result and/or performs intelligent driving control on the vehicle.
  • S1802 may be executed by the processor invoking the corresponding instruction stored in the memory, or may be executed by the driving control module 1902 run by the processor.
  • the detection result of the lane line of the road surface image is obtained by the detection method of the lane line of the above embodiment, and the specific process refers to the description of the above embodiment, which will not be repeated here.
  • the electronic device executes the above lane line detection method, obtains the lane line detection result of the road surface image, and outputs the lane line detection result of the road surface image.
  • the driving control device acquires the lane line detection result of the road surface image, and outputs prompt information and / or performs intelligent driving control on the vehicle according to the lane line detection result of the road surface image.
  • the prompt information may include a lane line departure warning, or a lane line keeping reminder.
  • the intelligent driving in this embodiment includes assisted driving and / or automatic driving.
  • the above-mentioned intelligent driving control may include: braking, changing the driving speed, changing the driving direction, keeping to the lane lines, changing the state of the lights, switching the driving mode, etc., where switching the driving mode may be switching between assisted driving and automatic driving, for example, switching from assisted driving to automatic driving.
  • in the embodiments of the present disclosure, the driving control device obtains the lane line detection result of the road surface image and, according to it, outputs prompt information and/or performs intelligent driving control on the vehicle, thereby improving the safety and reliability of intelligent driving.
  • FIG. 19 is a schematic structural diagram of a driving control device provided by an embodiment of the present disclosure. Based on the foregoing embodiment, the driving control device 1900 of the embodiment of the present disclosure includes:
  • the obtaining module 1901 is used to obtain a lane line detection result of a road surface image.
  • the lane line detection result of the road surface image is obtained by using the lane line detection method as in any of the above embodiments;
  • the driving control module 1902 is configured to output prompt information according to the lane line detection result and / or perform intelligent driving control on the vehicle.
  • the driving control device of the embodiment of the present disclosure may be used to execute the technical solutions of the above-described method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 20 is a schematic diagram of an intelligent driving system provided by an embodiment of the present disclosure.
  • the intelligent driving system 2000 of this embodiment includes a camera 2001, an electronic device 1700, and a driving control device 1900 connected in communication, where the electronic device 1700 is shown in FIG. 17, the driving control device 1900 is shown in FIG. 19, and the camera 2001 is used to capture road surface images.
  • the camera 2001 captures a road surface image and sends it to the electronic device 1700.
  • after receiving the road surface image, the electronic device 1700 processes it according to the above lane line detection method to obtain the lane line detection result of the road surface image.
  • the electronic device 1700 sends the obtained lane line detection result of the road surface image to the driving control device 1900, and the driving control device 1900 outputs prompt information and/or performs intelligent driving control on the vehicle according to the lane line detection result of the road surface image.
  • the electronic device includes one or more processors, a communication part, etc.
  • the one or more processors are, for example, one or more central processing units (CPUs) 2101, and/or one or more graphics processing units (GPUs) 2113, etc.
  • the processor can perform appropriate actions and processing according to executable instructions stored in the read-only memory (ROM) 2102 or executable instructions loaded from the storage section 2108 into the random access memory (RAM) 2103.
  • the communication part 2112 may include, but is not limited to, a network card, and the network card may include, but is not limited to, an IB (InfiniBand) network card.
  • the processor may communicate with the read-only memory 2102 and/or the random access memory 2103 to execute the executable instructions; it is connected to the communication part 2112 through the bus 2104, and communicates with other target devices via the communication part 2112, so as to complete the operations corresponding to any lane line detection method or any driving control method provided by the embodiments of the present disclosure.
  • the RAM 2103 can also store various programs and data necessary for the operation of the device.
  • the CPU 2101, ROM 2102, and RAM 2103 are connected to each other via a bus 2104.
  • the ROM 2102 is an optional module.
  • the RAM 2103 stores executable instructions, or executable instructions are written into the ROM 2102 at runtime.
  • the executable instructions cause the processor 2101 to perform the operations corresponding to the lane line detection method or the driving control method provided in any of the foregoing embodiments.
  • an input/output (I/O) interface 2105 is also connected to the bus 2104.
  • the communication part 2112 may be provided as an integrated unit, or as multiple sub-modules (for example, multiple IB network cards) attached to the bus link.
  • the following components are connected to the I/O interface 2105: an input section 2106 including a keyboard, a mouse, etc.; an output section 2107 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage section 2108 including a hard disk, etc.; and a communication section 2109 including a network interface card such as a LAN card or a modem, which performs communication processing via a network such as the Internet.
  • a drive 2110 is also connected to the I/O interface 2105 as needed.
  • a removable medium 2111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 2110 as needed, so that the computer program read from it is installed into the storage section 2108 as needed.
  • it should be noted that FIG. 21 shows only one optional implementation.
  • in practice, the number and types of components in FIG. 21 can be selected, deleted, added, or replaced according to actual needs.
  • as for the arrangement of functional components, separate or integrated arrangements can also be adopted.
  • for example, the GPU and the CPU can be provided separately, or the GPU can be integrated on the CPU, and the communication part can be provided separately, or can be integrated on the CPU or GPU, etc.
  • any method provided by the embodiments of the present disclosure may be executed by any appropriate device with data processing capabilities, including but not limited to: a terminal device and a server.
  • any method provided by the embodiments of the present disclosure may also be executed by a processor, for example, by the processor calling the corresponding instructions stored in a memory to execute any method mentioned in the embodiments of the present disclosure; details are not repeated here.
  • the method and apparatus of the embodiments of the present disclosure may be implemented in many ways.
  • the method and apparatus of the embodiments of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above sequence of steps for the method is for illustration only, and the steps of the method of the embodiments of the present disclosure are not limited to the above-described sequence unless otherwise specifically stated.
  • the present disclosure may also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to an embodiment of the present disclosure.
  • the present disclosure also covers the recording medium storing the program for executing the method according to the embodiment of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

According to embodiments, the present invention relates to a lane line detection method and apparatus, an electronic device, and a readable storage medium. The method comprises: obtaining a road surface image captured by a vehicle-mounted device on a vehicle; inputting the road surface image into a neural network, and outputting M probability maps corresponding to the road surface image by means of the neural network, the M probability maps comprising N lane line probability maps and M-N non-lane-line probability maps, the N lane line probability maps respectively corresponding to N lane lines on the road surface and being used to represent the probabilities that pixels in the road surface image belong to the corresponding lane lines, and the M-N non-lane-line probability maps corresponding to non-lane-line regions on the road and being used to represent the probabilities that pixels in the road surface image belong to the non-lane-line regions; and determining the lane lines in the road surface image according to the lane line probability maps.
PCT/CN2019/119886 2018-11-21 2019-11-21 Lane line detection method and apparatus, electronic device, and readable storage medium WO2020103892A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021525040A 2018-11-21 2019-11-21 Lane line detection method and device, electronic device, and readable storage medium
KR1020217015000A 2018-11-21 2019-11-21 Lane detection method, device, electronic device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811392943.8A 2018-11-21 Lane line detection method and device, electronic device, and readable storage medium
CN201811392943.8 2018-11-21

Publications (1)

Publication Number Publication Date
WO2020103892A1 (fr)

Family

Family ID: 70773344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/119886 WO2020103892A1 (fr) Lane line detection method and apparatus, electronic device, and readable storage medium

Country Status (4)

Country Link
JP (1) JP2022506920A (fr)
KR (1) KR20210080459A (fr)
CN (1) CN111209777A (fr)
WO (1) WO2020103892A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539403B (zh) * 2020-07-13 2020-10-16 航天宏图信息技术股份有限公司 Agricultural greenhouse identification method and device, and electronic device
CN112446344B (zh) * 2020-12-08 2022-09-16 北京深睿博联科技有限责任公司 Road condition prompting method and device, electronic device, and computer-readable storage medium
CN112633151B (zh) * 2020-12-22 2024-04-12 浙江大华技术股份有限公司 Method, device, equipment, and medium for determining zebra crossings in surveillance images
CN113739811A (zh) 2021-09-03 2021-12-03 阿波罗智能技术(北京)有限公司 Method and device for training a keypoint detection model and generating lane lines for a high-precision map
KR102487408B1 (ko) * 2021-09-07 2023-01-12 포티투닷 주식회사 Apparatus and method for determining a vehicle routing path based on a local map, and recording medium storing the same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654064A (zh) * 2016-01-25 2016-06-08 北京中科慧眼科技有限公司 Lane line detection method and device, and advanced driver assistance system
CN108280450B (zh) * 2017-12-29 2020-12-29 安徽农业大学 Expressway road surface detection method based on lane lines
CN108846328B (zh) * 2018-05-29 2020-10-16 上海交通大学 Lane detection method based on geometric regularization constraints

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060239509A1 (en) * 2005-04-26 2006-10-26 Fuji Jukogyo Kabushiki Kaisha Road line recognition apparatus
CN108216229A (zh) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 Vehicle, road line detection and driving control method and device
CN108052904A (zh) * 2017-12-13 2018-05-18 辽宁工业大学 Lane line acquisition method and device
CN108875603A (zh) * 2018-05-31 2018-11-23 上海商汤智能科技有限公司 Intelligent driving control method and device based on lane lines, and electronic device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178215A (zh) * 2019-12-23 2020-05-19 深圳成谷科技有限公司 Sensor data fusion processing method and device
CN111178215B (zh) * 2019-12-23 2024-03-08 深圳成谷科技有限公司 Sensor data fusion processing method and device
CN112373474A (zh) * 2020-11-23 2021-02-19 重庆长安汽车股份有限公司 Lane line fusion and lateral control method and system, vehicle, and storage medium
CN112373474B (zh) * 2020-11-23 2022-05-17 重庆长安汽车股份有限公司 Lane line fusion and lateral control method and system, vehicle, and storage medium
JP2022028870A (ja) 2020-12-16 2022-02-16 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Lane detection method and apparatus, electronic device, storage medium, and vehicle
JP7273129B2 (ja) 2020-12-16 2023-05-12 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Lane detection method and apparatus, electronic device, storage medium, and vehicle
US11967132B2 (en) 2020-12-16 2024-04-23 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Lane marking detecting method, apparatus, electronic device, storage medium, and vehicle
CN112464742A (zh) * 2021-01-29 2021-03-09 福建农林大学 Method and device for automatic recognition of red tide images
CN112464742B (zh) * 2021-01-29 2024-05-24 福建农林大学 Method and device for automatic recognition of red tide images
WO2024002014A1 (fr) * 2022-07-01 2024-01-04 上海商汤智能科技有限公司 Traffic marking identification method and apparatus, computer device, and storage medium
CN116863429A (zh) * 2023-07-26 2023-10-10 小米汽车科技有限公司 Training method for a detection model, and method and device for determining a drivable area
CN116863429B (zh) * 2023-07-26 2024-05-31 小米汽车科技有限公司 Training method for a detection model, and method and device for determining a drivable area

Also Published As

Publication number Publication date
KR20210080459A (ko) 2021-06-30
CN111209777A (zh) 2020-05-29
JP2022506920A (ja) 2022-01-17

Similar Documents

Publication Publication Date Title
WO2020103892A1 (fr) Lane line detection method and apparatus, electronic device, and readable storage medium
WO2020103893A1 (fr) Lane line attribute detection method and device, electronic apparatus, and readable storage medium
US11694430B2 (en) Brake light detection
CN110188807B (zh) Tunnel pedestrian target detection method based on a cascaded super-resolution network and improved Faster R-CNN
WO2022126377A1 (fr) Lane line detection method and apparatus, terminal device, and readable storage medium
US20160155027A1 (en) Method and apparatus of determining air quality
CN112528878A (zh) Lane line detection method and apparatus, terminal device, and readable storage medium
CN105844257A (zh) Early warning system and method, based on machine vision, for road signs missed while driving in fog
CN113723377B (zh) Traffic sign detection method based on an LD-SSD network
CN111767831B (zh) Method, apparatus, device, and storage medium for processing images
US20200394838A1 (en) Generating Map Features Based on Aerial Data and Telemetry Data
CN103902985A (zh) Highly robust real-time lane detection algorithm based on ROI
CN113887418A (zh) Method and apparatus for detecting vehicle driving violations, electronic device, and storage medium
WO2021088504A1 (fr) Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
CN111899515A (zh) Vehicle detection system based on a smart road edge-computing gateway
CN103886609A (zh) Vehicle tracking method based on particle filtering and LBP features
CN117218622A (zh) Road condition detection method, electronic device, and storage medium
Xing et al. Traffic sign recognition from digital images by using deep learning
CN110909674A (zh) Traffic sign recognition method, apparatus, device, and storage medium
Varun et al. A road traffic signal recognition system based on template matching employing tree classifier
CN112396060B (zh) ID card recognition method based on an ID card segmentation model, and related device
CN115131826B (zh) Article detection and recognition method, and network model training method and apparatus
CN114092880A (zh) Airport runway large-particle foreign object detection method based on video analysis
CN113807236B (zh) Lane line detection method, apparatus, device, storage medium, and program product
WO2022217551A1 (fr) Target detection method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19886323

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021525040

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217015000

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 31.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19886323

Country of ref document: EP

Kind code of ref document: A1