WO2022170540A1 - Method and apparatus for traffic light detection (交通灯检测的方法和装置) - Google Patents

Method and apparatus for traffic light detection

Info

Publication number
WO2022170540A1
Authority
WO
WIPO (PCT)
Prior art keywords
traffic light
information
image
area
detected
Prior art date
Application number
PCT/CN2021/076430
Other languages
English (en)
French (fr)
Inventor
魏宁 (Wei Ning)
周旺 (Zhou Wang)
果晨阳 (Guo Chenyang)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2021/076430 priority Critical patent/WO2022170540A1/zh
Priority to CN202180000611.4A priority patent/CN112970030A/zh
Publication of WO2022170540A1 publication Critical patent/WO2022170540A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • The present application relates to the technical field of automatic driving and, more particularly, to a method and apparatus for traffic light detection.
  • Traffic lights are an important part of traffic regulation. Autonomous vehicles need to determine the location and status of the traffic lights ahead accurately and in real time in order to make correct behavioral decisions.
  • Detection algorithms based on deep learning are more accurate and efficient than traditional image processing techniques, so they have become the mainstream method for traffic light detection.
  • However, currently known deep-learning-based traffic light detection techniques can usually output only the color and shape of the lamp head, which hardly meets the needs of automatic driving and is not conducive to improving driving safety.
  • The present application provides a traffic light detection method and device that can provide traffic light information including the number of traffic light heads.
  • A first aspect provides a traffic light detection method. The method includes: using a neural network to obtain a first area of an image to be detected, where the first area includes N traffic light groups, N is a positive integer, and each traffic light group includes at least one traffic light head; and using the neural network to obtain traffic light information according to the first area, where the traffic light information includes the number of traffic light heads in each traffic light group.
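As an illustration only, the two-step flow of the first aspect (region extraction, then per-group information output) can be sketched in Python. The class and function names below are hypothetical placeholders, not part of the application, and the network calls are replaced by stubs returning a fixed example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrafficLightGroup:
    head_count: int                                      # lamp heads in this group
    lit_colors: List[str] = field(default_factory=list)  # colors of the lit heads

def extract_first_area(image) -> List[TrafficLightGroup]:
    """Stand-in for the neural network that returns the first area (N groups)."""
    # A real system would run a detector over the image; this stub
    # returns one group with three heads, of which the red one is lit.
    return [TrafficLightGroup(head_count=3, lit_colors=["red"])]

def traffic_light_info(image) -> List[dict]:
    """Stand-in for the second step: per-group traffic light information."""
    return [{"head_count": g.head_count, "lit_colors": g.lit_colors}
            for g in extract_first_area(image)]
```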
  • The neural network is used to identify and detect the traffic light groups and to output the traffic light information directly, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • The lamp heads are detected within a small local area, which ensures both detection efficiency and detection accuracy.
  • More detailed and accurate traffic light information is also helpful for downstream decision-making.
  • Optionally, the first area may be a small area, including the traffic light detection frame, in the feature map obtained by re-encoding the image to be detected with the neural network.
  • Optionally, the above traffic light information may also include other information about the traffic lights, such as lamp head on/off information, lamp head color information, lamp head shape information, and lamp head category information. This provides more detailed information.
  • The above traffic light information is obtained by processing the image to be detected through a neural network, which enables end-to-end detection and improves detection efficiency and accuracy.
  • Optionally, the image to be detected may be a region of interest in an image captured by a vehicle-mounted camera, which reduces the amount of data to process and improves detection efficiency.
  • Alternatively, the image to be detected may be the full image captured by the vehicle-mounted camera, which simplifies the processing flow.
  • Optionally, the neural network may include a classifier; the first area of the image to be detected is input into the classifier, and the above traffic light information is output.
  • Optionally, the neural network may include a lamp head count classifier.
  • The first area of the image to be detected can be input into the lamp head count classifier, which outputs the number of traffic light heads in each traffic light group.
  • Optionally, the neural network may include a traffic light on/off classifier and a lamp head detector. The first area of the image to be detected is input into the traffic light on/off classifier, which outputs the traffic light on/off information of the first area.
  • When there is a traffic light on in the first area of the image to be detected, the first area is input into the lamp head detector, which outputs the traffic light information.
  • Optionally, inputting the first area of the image to be detected into the lamp head detector and outputting the traffic light information includes: inputting the first area into the lamp head detector and outputting first information, where the first information includes traffic light detection frame length information, illuminated traffic light detection frame length information, and illuminated traffic light detection frame count information; and outputting the traffic light information according to the first information.
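The first information listed above suggests a ratio-based deduction of the head count (cf. FIG. 5): the illuminated detection frames give the average size of one lamp head, and dividing the whole detection frame length by that size estimates the number of heads. The function below is a sketch under that assumption, not the application's definitive formula.

```python
def estimate_head_count(group_len: float, lit_len: float, lit_count: int) -> int:
    """Estimate the number of lamp heads in one traffic light group.

    group_len : length of the whole traffic light detection frame
    lit_len   : total length of the illuminated lamp-head detection frames
    lit_count : number of illuminated lamp-head detection frames
    """
    if lit_count <= 0 or lit_len <= 0:
        raise ValueError("need at least one illuminated lamp head")
    head_size = lit_len / lit_count          # average size of a single lamp head
    return max(lit_count, round(group_len / head_size))
```

For example, a 90-pixel-long group frame with one 30-pixel illuminated head yields an estimate of three heads.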
  • The neural network is used to identify and detect the traffic light groups and to output the traffic light information directly, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • The lamp heads are detected within a small local area, which ensures both detection efficiency and detection accuracy.
  • More detailed and accurate traffic light information is also helpful for downstream decision-making.
  • Optionally, the neural network may include a lamp head count classifier, a traffic light on/off classifier, and a lamp head detector. The first area of the image to be detected is input into the traffic light on/off classifier, which outputs the traffic light on/off information of the first area. When there is a traffic light on in the first area, the first area is input into the lamp head count classifier, which outputs first number information and a first confidence level for the traffic light heads in each traffic light group; the first area is also input into the lamp head detector, which outputs second number information and a second confidence level for the traffic light heads in each traffic light group. According to the first confidence level and the second confidence level, one of the first number information or the second number information is determined as the number information of the traffic light heads in each traffic light group.
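A minimal sketch of that selection step, assuming the tie is broken toward the classifier (the application only states that one of the two number informations is chosen according to the two confidence levels):

```python
def fuse_head_counts(count_cls: int, conf_cls: float,
                     count_det: int, conf_det: float) -> int:
    """Return the head count whose source reports the higher confidence.

    count_cls / conf_cls : output of the lamp head count classifier
    count_det / conf_det : output of the lamp head detector
    """
    return count_cls if conf_cls >= conf_det else count_det
```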
  • The neural network is used to identify and detect the traffic light groups and to output the traffic light information directly, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • The lamp heads are detected within a small local area, which ensures both detection efficiency and detection accuracy.
  • More detailed and accurate traffic light information is also helpful for downstream decision-making.
  • A second aspect provides a traffic light detection device, including: an acquisition unit configured to use a neural network to acquire a first area of an image to be detected, where the first area includes N traffic light groups, N is a positive integer, and each traffic light group includes at least one traffic light head; and a processing unit configured to use the neural network to acquire traffic light information according to the first area, where the traffic light information includes the number of traffic light heads in each traffic light group.
  • The neural network is used to identify and detect the traffic light groups and to output the traffic light information directly, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • The lamp heads are detected within a small local area, which ensures both detection efficiency and detection accuracy.
  • More detailed and accurate traffic light information is also helpful for downstream decision-making.
  • Optionally, the first area may be an area, including the traffic light detection frame, in the feature map obtained by re-encoding the image to be detected with the neural network.
  • Optionally, the above traffic light information may also include other information about the traffic lights, such as lamp head on/off information, lamp head color information, lamp head shape information, and lamp head category information. This provides more detailed information.
  • The above traffic light information is obtained by processing the image to be detected through a neural network, which enables end-to-end detection and improves detection efficiency and accuracy.
  • Optionally, the image to be detected may be a region of interest in an image captured by a vehicle-mounted camera, which reduces the amount of data to process and improves detection efficiency.
  • Alternatively, the image to be detected may be the full image captured by the vehicle-mounted camera, which simplifies the processing flow.
  • Optionally, the neural network may include a classifier.
  • The processing unit is specifically configured to: input the first area of the image to be detected into the classifier, and output the traffic light information.
  • Optionally, the neural network may include a lamp head count classifier, and the processing unit is specifically configured to: input the first area of the image to be detected into the lamp head count classifier, and output the number of traffic light heads in each traffic light group.
  • Optionally, the neural network includes a traffic light on/off classifier and a lamp head detector.
  • The processing unit is configured to: input the first area of the image to be detected into the traffic light on/off classifier, and output the traffic light on/off information of the first area.
  • When there is a traffic light on in the first area of the image to be detected, the first area is input into the lamp head detector, and the above traffic light information is output.
  • The processing unit being configured to input the first area of the image to be detected into the lamp head detector and output the traffic light information includes: the processing unit is specifically configured to input the first area into the lamp head detector and output first information, where the first information includes traffic light detection frame length information, illuminated traffic light detection frame length information, and illuminated traffic light detection frame count information; and to output the above traffic light information according to the first information.
  • The neural network is used to identify and detect the traffic light groups and to output the traffic light information directly, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • The lamp heads are detected within a small local area, which ensures both detection efficiency and detection accuracy.
  • More detailed and accurate traffic light information is also helpful for downstream decision-making.
  • Optionally, the neural network may simultaneously include a lamp head count classifier, a traffic light on/off classifier, and a lamp head detector.
  • The processing unit is configured to: input the first area of the image to be detected into the traffic light on/off classifier, and output the traffic light on/off information of the first area; when there is a traffic light on in the first area, input the first area into the lamp head count classifier, and output the first number information and first confidence level of the traffic light heads in each traffic light group; input the first area into the lamp head detector, and output the second number information and second confidence level of the traffic light heads in each traffic light group; and, according to the first confidence level and the second confidence level, determine one of the first number information or the second number information as the number information of the traffic light heads in each traffic light group.
  • The neural network is used to identify and detect the traffic light groups and to output the traffic light information directly, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • The lamp heads are detected within a small local area, which ensures both detection efficiency and detection accuracy.
  • More detailed and accurate traffic light information is also helpful for downstream decision-making.
  • Optionally, the traffic light detection device is a chip.
  • The chip includes a processing module and a communication interface; the processing module is used to control the communication interface to communicate with the outside, and the processing module is further used to implement the method of the first aspect.
  • A third aspect provides a traffic light detection device, including a memory and a processor; the memory is used for storing instructions, the processor is used for executing the instructions stored in the memory, and execution of the instructions stored in the memory causes the processor to perform the method of the first aspect.
  • A fourth aspect provides a computer-readable storage medium having a computer program stored thereon which, when executed by a computer, causes the computer to implement the method of the first aspect.
  • the computer may be the above-mentioned traffic light detection device.
  • A fifth aspect provides a computer program product comprising instructions that, when executed by a computer, cause the computer to implement the method of the first aspect.
  • the computer may be the above-mentioned traffic light detection device.
  • A sixth aspect provides a vehicle, comprising at least one traffic light detection device of the second aspect or the third aspect, so that the vehicle can implement the method of the first aspect.
  • FIG. 1 is a functional block diagram of a vehicle 100 applicable to the embodiment of the present application.
  • FIG. 2 is a functional block diagram of an automatic driving system 200 applicable to the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an example of a traffic light detection method provided by an embodiment of the present application.
  • FIG. 4 is a block diagram of a detection flow of a traffic light detection method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the deduction of the number of lamp heads in the traffic light detection method provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of an example of input and output of the traffic light detection method provided by the embodiment of the present application.
  • FIG. 7 is a schematic block diagram of an example of a traffic light detection device provided by an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of another example of a traffic light detection apparatus provided by an embodiment of the present application.
  • FIG. 1 shows a functional block diagram of a vehicle 100 to which the embodiments of the present application are applied.
  • the vehicle 100 may be configured in a fully or partially autonomous driving mode.
  • the vehicle 100 may be configured to operate without human interaction.
  • Vehicle 100 may include a number of subsystems, such as sensing system 104 , control system 106 , computer system 112 , and user interface 116 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
  • the sensing system 104 may include several sensors that sense information about the environment surrounding the vehicle 100 .
  • These sensors may include, for example, radar 126, laser rangefinder 128, and camera 130.
  • Radar 126 may use radio signals to sense objects within the surrounding environment of vehicle 100
  • laser rangefinder 128 may use laser light to sense objects in the environment where vehicle 100 is located
  • camera 130 may be used to capture multiple images of the surrounding environment of vehicle 100.
  • the camera 130 may be a still camera or a video camera.
  • Control system 106 controls the operation of the vehicle 100 and its components.
  • Control system 106 may include various elements, including computer vision system 140 and obstacle avoidance system 144.
  • Computer vision system 140 may be operable to process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding vehicle 100 .
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • Computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise traverse potential obstacles in the environment of the vehicle 100 .
  • control system 106 may include additional or alternative components beyond those shown and described, or may omit some of the components shown above.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as data storage device 114 .
  • Computer system 112 may also be multiple computing devices that control individual components or subsystems of vehicle 100 in a distributed fashion.
  • the data storage device 114 may contain instructions 115 (eg, program logic) executable by the processor 113 to perform various functions of the vehicle 100 , including those described above.
  • Data storage 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of sensing system 104 and/or control system 106 .
  • the data storage device 114 may store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 112 during operation of the vehicle 100 in autonomous, semi-autonomous and/or manual modes.
  • a user interface 116 for providing information to or receiving information from a user of the vehicle 100 .
  • Computer system 112 may control functions of vehicle 100 based on input received from various subsystems (eg, sensor system 104 and control system 106 ) and from user interface 116 .
  • computer system 112 is operable to provide control of various aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • the data storage device 114 may exist partially or completely separate from the vehicle 100.
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation on the embodiments of the present application.
  • the autonomous vehicle 100 or a computing device associated with it may adjust the way the vehicle drives based on characteristics of the identified objects (e.g., traffic lights). For example, in this embodiment of the present application, when the vehicle detects a red light or a yellow light, the vehicle speed may be reduced or the vehicle may even be stopped; when the vehicle detects a green light, the vehicle speed may be maintained or only slightly reduced; and when the vehicle detects a green turn arrow, it may steer and drive according to the turn indication.
  • the above-identified object characteristics can also be used to validate or update high-precision (HD) maps. That is, the high-precision map may include traffic light information, and the source or verification reference of that information may be the above-identified object characteristics. For example, when the vehicle finds that the characteristic information of an object ahead (such as traffic light information or other traffic sign information) is inconsistent with the information recorded in the high-precision map, it can update the high-precision map, or send a high-precision map verification error message to an authorized server, so that the supplier of the high-precision map can confirm the accurate object characteristic information in time.
  • the above-mentioned vehicle 100 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.
  • The embodiments of the present application do not particularly limit the vehicle type.
  • FIG. 2 shows a functional block diagram of an automatic driving system 200 to which the embodiments of the present application are applicable.
  • computer system 201 includes processor 203 .
  • Processor 203 is coupled to system bus 205 .
  • Processor 203 may be one or more processors, each of which may include one or more processor cores.
  • the system bus 205 is coupled to an input output (I/O) interface 215 .
  • the I/O interface 215 communicates with various I/O devices, such as the transceiver 223 (which can transmit and/or receive radio communication signals), the camera 255 (which can capture dynamic digital video images), and the like.
  • Network interface 229 is a hardware network interface, such as a network card.
  • the network 227 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN).
  • the network 227 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • Hard disk drive 233 is coupled to system bus 205 .
  • System memory 235 is coupled to system bus 205 .
  • Data running in system memory 235 may include operating system 237 and application programs 243 of computer 202 .
  • Application 243 includes programs that control the autonomous driving of the car, for example, programs that manage the interaction of the autonomous car with obstacles on the road, programs that control the route or speed of the autonomous car, and programs that control the interaction of the autonomous car with other autonomous vehicles on the road.
  • Application 243 also exists on the system of software deployment server 249 .
  • FIG. 3 shows a schematic flowchart of an example of a traffic light detection method provided according to an embodiment of the present application.
  • the method of FIG. 3 may be performed by the vehicle 100 of FIG. 1 or the autonomous driving system 200 of FIG. 2 .
  • S310: Use a neural network to acquire a first area of the image to be detected, where the first area includes N traffic light groups, N is a positive integer, and each traffic light group includes at least one traffic light head.
  • The image to be detected may be one image frame captured by the camera 130 in the vehicle 100 shown in FIG. 1, or one frame out of multiple frames captured by the camera 130.
  • The image to be detected may be an image frame obtained directly from the camera, or a processed image obtained by preprocessing the captured frame, for example by contrast adjustment, brightness adjustment, noise reduction, and other optimizations.
  • the computer vision system 140 in the vehicle 100 depicted in FIG. 1 may operate to process and analyze the image to be inspected to identify traffic lights.
  • a first area suitable for subsequent detection may be acquired from an image to be detected by a traffic light detector.
  • the first area may include N traffic light groups, where N is a positive integer, and the traffic light group includes at least one traffic light head.
  • the first area may be a partial area including the traffic light detection frame in the feature map of the image to be detected after being re-encoded by the neural network, which can reduce the amount of data processing and improve the detection efficiency.
  • The above-mentioned traffic light detector, which re-encodes the image to be detected to obtain a first region convenient for subsequent detection, can be any target detector, such as a Faster R-CNN (faster regions with convolutional neural network features) detector or a YOLO (you only look once) detector; this application places no limitation on the choice of detector.
  • the traffic light information includes information on the number of traffic light heads in each traffic light group.
  • The traffic light head is a signal light used to direct traffic flow, and generally consists of a specific color (such as red, yellow, or green) and/or a specific pattern (for example, a specific shape, a digital pattern, a pedestrian pattern, a direction pattern, or a lane pattern).
  • One or more traffic light heads can form a traffic light group, also known as a combined traffic light.
  • a red light, a yellow light and a green light can form a basic functional traffic light group.
  • More complex traffic light groups may include more complex traffic light heads, for example heads indicating turning directions, permitted/prohibited lanes, prohibited directions, or countdown timers. Where regulations allow, a traffic light group can also be formed by a single lamp head that changes over time; for example, the lamp head may display a countdown timer and turn red when the timer reaches zero.
  • the number of traffic light heads included in a traffic light group is usually 3, but it may also include fewer or more traffic light heads. Accurately identifying the number of traffic light heads is helpful for subsequent application requirements such as vehicle policy control and high-precision map verification/update.
  • The computer vision system 140 in FIG. 1 or the processor 203 in FIG. 2 can process and analyze the first area to obtain traffic light information including the number of traffic light heads in each traffic light group.
  • the traffic light information may include, but is not limited to: information on the number of traffic light heads in each light group, information on the color of the traffic light heads, information on the shape of the traffic light heads, and information on the type of traffic light heads. Detailed identification of traffic light information is helpful for subsequent application requirements such as vehicle policy control and high-precision map verification/update.
  • the recognition and detection of traffic light groups through neural network can realize the end-to-end output of traffic light information, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • For example, suppose a traffic light group consists of three traffic light heads: a left-turn green arrow, a circular red light, and a right-turn green arrow.
  • A known technique may not be able to output any information about the unlit traffic light heads, or even any information about the traffic light group as a whole (for example, the number of traffic light heads, lamp type information, etc.).
  • In that case the HD map may not be accurately verified or updated. For example, suppose that, due to road optimization, a new circular yellow light is added to an original traffic light group with three lamp heads, upgrading it to a group with four lamp heads; when the vehicle passes by, the newly added yellow light is not on, and one of the original three lamp heads is still on. At this time, since the number of traffic light heads cannot be recognized, the vehicle will think that the traffic light group has not changed, which reduces the accuracy and update efficiency of the high-precision map.
  • Similarly, a known technique can only use the attribute of one of the two lit lamp heads as the attribute of the entire traffic light group, that is, output the traffic light information of the group as either a left-turn green light or a circular red light.
  • The output information is then inconsistent with the actual situation, which may lead to wrong decisions by downstream devices (e.g., autonomous vehicles).
  • when these two light heads are illuminated at the same time, straight driving is prohibited, but left turns are allowed.
  • if only a circular red light is output, vehicles that could have turned left will mistakenly slow down and stop.
  • end-to-end one-pass output of traffic light information (which may include information at the granularity of each traffic light head) is realized through the neural network, which improves processing efficiency; moreover, the solution of the present application can output the traffic light head count information, which helps meet the needs of downstream applications.
  • some embodiments of the present application can independently detect all the traffic light heads in a traffic light group and output the light head information separately, which avoids outputting the attribute of a single light head as the attribute of the whole group and can effectively solve the problem of indistinguishable combined traffic lights.
  • FIG. 4 shows a flowchart of traffic light detection provided according to an embodiment of the present application.
  • "traffic light group on" means that at least one light head in the traffic light group is on;
  • "traffic light group off" means that all light heads in the traffic light group are off.
  • the input image 300 includes a traffic light group to be detected.
  • the traffic light group includes at least one traffic light head.
  • This input image 300 is used as the input of the traffic light detector 301 .
  • the output of the traffic light detector 301 is a first area including N traffic light groups.
  • the traffic light detector may re-encode the image to be detected through a neural network, and extract a partial region including the traffic light detection frame in the obtained feature map as the first region. This can improve the speed and accuracy of detection.
  • the process described in S320 in FIG. 3 can be performed in three possible ways: a neural network is used to obtain traffic light information according to the first area, wherein the traffic light information includes information on the number of traffic light heads in each traffic light group.
  • in Mode 1, the first area may be input to the light head count classifier 302, as shown in the leftmost path in FIG. 4.
  • the light head count classifier 302 outputs the traffic light head count information.
  • the light head count classifier 302 may be a neural network that detects the input first area and outputs the traffic light head count information in an end-to-end manner.
  • the light head count classifier 302 can handle traffic lights that are on and traffic lights that are off.
  • the light head count classifier 302 can use a softmax multi-classifier or other classifiers that can implement multi-classification functions.
  • the probability that the softmax function classifies x into class j is: P(y = j | x) = exp(θ_j^T x) / Σ_{k=1}^{N} exp(θ_k^T x)   (1)
  • y is the predicted category
  • N is the total number of possible categories
  • T is the transpose symbol
  • ⁇ j is the parameter vector required by the classifier to predict category j, which is obtained by neural network training.
  • for example, y may be a positive integer less than or equal to 6, and the function outputs the probability of y being each integer from 1 to 6.
  • the confidence level P1 is an optional output.
  • in some implementations, only the head count N1 with the highest confidence may be directly output as the traffic light head count information.
  • in other implementations, the head count N1 and the confidence level P1 may be output at the same time.
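The Mode 1 classification step can be sketched as follows. This is a minimal illustration of formula (1), not the trained network of the application; the feature vector, the parameter matrix θ, and the six-way class layout are illustrative assumptions.

```python
import numpy as np

def softmax_probs(theta, x):
    """Formula (1): P(y = j | x) = exp(theta_j^T x) / sum_k exp(theta_k^T x)."""
    scores = theta @ x                # one score per candidate class
    scores = scores - scores.max()   # stabilize the exponentials
    e = np.exp(scores)
    return e / e.sum()

def classify_head_count(theta, x):
    """Return (N1, P1): the head count with the highest confidence."""
    probs = softmax_probs(theta, x)
    j = int(np.argmax(probs))
    return j + 1, float(probs[j])    # classes 0..5 map to head counts 1..6

# Toy example: 6 possible head counts, a 4-dimensional feature vector
# standing in for the encoded first area.
rng = np.random.default_rng(0)
theta = rng.normal(size=(6, 4))      # one parameter vector per class
x = rng.normal(size=4)               # hypothetical features of the first area
n1, p1 = classify_head_count(theta, x)
print(n1, p1)                        # predicted head count and its confidence
```

In a real system θ would come from training, and x from the feature map of the first area; only the softmax arithmetic is fixed by formula (1).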
  • the first area may be input to the traffic light on/off classifier 303 .
  • the traffic light on/off classifier 303 is used to detect whether there is at least one traffic light on in the first area.
  • when at least one traffic light in the first area is on, the traffic light on/off classifier 303 passes the first area to the light head detector 304.
  • the light head detector 304 outputs light head type information and light head detection frame information.
  • the light head category information indicates the light head category, such as shape and color, e.g., green left arrow, red circle, etc.
  • the light head detection frame information includes information such as the number of lighted light heads and the lengths of the light head detection frames, which can be used in a subsequent deduction to obtain the traffic light head count. This mode can handle the situation where the traffic light group is on.
  • the traffic light on/off classifier can use a softmax multi-classifier or other classifiers that can implement multi-classification functions.
  • for the softmax multi-classifier, the above formula (1) can be used to classify traffic lights as on or off.
  • here the possible category y may be 0 (representing "no") or 1 (representing "yes"); the total number of possible categories N is then 2, and the function outputs the probability of y being 0 or 1, with the more probable category as the output.
  • the above traffic light detection frame information may be referred to as the first information, which includes: the number n of lighted light head detection frames in the traffic light group, the length wi and the confidence level Pi of the i-th lighted light head detection frame, and the total length W of the traffic light detection frame.
  • FIG. 5 is an exemplary schematic diagram of a traffic light detection frame. As shown in FIG. 5, the leftmost light head is on and the rest are off, so the number n of lighted light head detection frames is 1, the length of that detection frame is w1, and the total length of the traffic light detection frame is W. FIG. 5 is only an example; if more light heads are on, that is, n is an integer greater than 1, the lengths of the lighted light head detection frames are w1, w2, ..., wn, respectively.
  • the average light head length can be calculated according to formula (2).
  • from it, the number N2 of traffic light heads in the traffic light group and the confidence level P2 can be deduced.
  • the above formulas (2) to (4) are only exemplary, and other equivalent deduction formulas may be used in this embodiment of the present application.
  • the confidence level P2 is an optional output.
  • in some implementations, only the head count N2 may be obtained as the traffic light head count information, and the operation of formula (4) is no longer performed. In other implementations, the head count N2 and the confidence level P2 may be output at the same time.
  • in this way, Mode 2 can handle a traffic light group with at least one traffic light on, and finally output the traffic light information of the group, for example, the head count information N2 and the type information of the lighted light heads (e.g., green arrow, etc.).
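The deduction from the detection-frame geometry of FIG. 5 can be sketched as follows. Since formulas (2) to (4) are not reproduced in the text, this is one plausible reading under stated assumptions: average the lighted-frame lengths, divide the total frame length by that average, and average the per-frame confidences.

```python
def deduce_head_count(W, frame_lengths, confidences):
    """Deduce (N2, P2) from the detection-frame geometry of FIG. 5.

    W             -- total length W of the traffic light detection frame
    frame_lengths -- lengths w1..wn of the n lighted head detection frames
    confidences   -- confidences P1..Pn of those frames
    """
    n = len(frame_lengths)
    w_avg = sum(frame_lengths) / n   # assumed formula (2): average head length
    n2 = round(W / w_avg)            # assumed formula (3): heads fitting in W
    p2 = sum(confidences) / n        # assumed formula (4): averaged confidence
    return n2, p2

# FIG. 5 situation: one lighted head roughly one third of the whole frame.
n2, p2 = deduce_head_count(W=9.1, frame_lengths=[3.0], confidences=[0.9])
print(n2, p2)   # 3 0.9
```

Any equivalent deduction (as the text notes) would work; the point is that the head count follows from frame lengths alone, even for unlit heads.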
  • the first area is input to the traffic light on/off classifier 303 .
  • the processing of the traffic light on/off classifier 303 in Mode 3 is the same as in Mode 2 above, and thus will not be described again.
  • the difference between Mode 3 and Mode 2 is that, when the detection result of the traffic light on/off classifier 303 indicates at least one lighted traffic light in the first area, the light head count classifier 302 of Mode 1 and the light head detector 304 of Mode 2 both detect the first area to obtain their respective results. Then, based on the results of the light head count classifier 302 and the light head detector 304, the final traffic light head count information can be output. For example, the confidences of the two results can be compared, and the head count corresponding to the higher confidence output.
  • the first area is sent to the light head count classifier 302, which detects the input first area and directly outputs, in an end-to-end manner, the first head count N1 of the traffic light group together with the first confidence level P1.
  • meanwhile, the first area can be sent to the light head detector 304, which processes the first area and outputs light head type information and the first information.
  • from the first information, the second head count N2 and the second confidence level P2 of the traffic light group can be obtained through the method described in FIG. 5.
  • the confidence level P1 of the first head count N1 is compared with the confidence level P2 of the second head count N2, and the count with the higher confidence is used as the final output. For example, assuming P1 > P2, the first head count N1 is output as the traffic light head count information; and vice versa.
  • in addition to the head count, Mode 3 can also output other traffic light information, such as traffic light category information (e.g., green left arrow, etc.).
  • in this way, a traffic light group with at least one traffic light on can be detected, and the head count information with the higher confidence can be output by comparing the two branches, so as to provide more detailed and accurate traffic light information, which helps downstream decision-making or other application requirements.
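Mode 3's arbitration between the two branches can be sketched as below; the two branch functions are hypothetical stand-ins for the light head count classifier 302 and the light head detector 304 plus the FIG. 5 deduction.

```python
def mode3_head_count(first_area, count_classifier, head_detector):
    """Run both branches on the first area and keep the more confident count.

    count_classifier(first_area) -> (N1, P1)  # end-to-end branch (Mode 1)
    head_detector(first_area)    -> (N2, P2)  # frame-deduction branch (Mode 2)
    """
    n1, p1 = count_classifier(first_area)
    n2, p2 = head_detector(first_area)
    # Compare confidences; output the head count with the higher one.
    return (n1, p1) if p1 >= p2 else (n2, p2)

# Hypothetical branch outputs for one traffic light group:
count, conf = mode3_head_count(
    first_area=None,  # placeholder for the cropped first area
    count_classifier=lambda area: (4, 0.62),
    head_detector=lambda area: (3, 0.88),
)
print(count, conf)   # 3 0.88  (the detector branch was more confident)
```

The tie-breaking rule (preferring the classifier branch at equal confidence) is an arbitrary choice for the sketch; the application only specifies comparing the two confidences.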
  • the input image to be detected may also be a region of interest (ROI) of a captured image, where the region of interest may be a small area or partial area of the image that is most likely to include the traffic light group to be detected, or an area of the image that needs further processing.
  • since a traffic light is usually in the upper half of the image to be detected, as shown by the black frame area in the figure, the upper half of the image may be set as the region of interest, or the upper 1/3, or other settings may be followed.
  • the region of interest is not limited to the upper part of the image to be detected. Since the traffic light may also appear in the middle of the image, or on the left or right side, the region of interest can be determined in any appropriate manner, which is not limited in this embodiment of the present application.
  • a region of interest can reduce the amount of data that needs to be processed and make the detection process more efficient.
  • the selection method and size of the region of interest shown here should not constitute limitations; the region of interest described in this application may be any other image obtained by screening and cropping the image to be detected.
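Cropping such a region of interest can be sketched as follows; the upper-third rule and the image shape are only illustrative, per the note above that any selection method may be used.

```python
import numpy as np

def upper_roi(image, fraction=1.0 / 3.0):
    """Keep only the top `fraction` of the image, where traffic lights
    usually appear, so that later stages process less data."""
    h = image.shape[0]
    return image[: max(1, int(h * fraction)), :, :]

# A stand-in 720x1280 RGB frame from a vehicle-mounted camera.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
roi = upper_roi(frame)
print(roi.shape)   # (240, 1280, 3)
```

Only the ROI would then be fed to the traffic light detector 301, reducing the pixels processed by roughly the chosen fraction.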
  • in the solution of the present application, the neural network is used to recognize and detect the traffic light group and directly output the traffic light information, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • on this basis, the traffic light heads are independently detected and the light head information is output separately, which avoids taking the attribute of a single light head as the category attribute of the entire group and can effectively solve the problem of indistinguishable combined traffic lights.
  • moreover, the light heads are detected in a small local area, which ensures detection efficiency and accuracy. At the same time, more detailed and accurate traffic light information is also helpful for downstream decision-making.
  • FIG. 6 shows a schematic diagram of the input and output of the traffic light detection method provided by an embodiment of the present application.
  • an image to be detected 401 is input, and the image to be detected includes a plurality of traffic light groups.
  • the detection method described in FIG. 4 can be used to output the detection result images 403 to 406 of each traffic light group and the related traffic light information.
  • the traffic light information can be output together with the image as feature information or label information of the detection result image.
  • the information on the number of traffic lights in the traffic light group can be directly output by the method described in Mode 1 in FIG. 4 .
  • the to-be-detected image may be sent to the traffic light detector 301 shown in FIG. 4 , and the traffic light detector 301 processes the to-be-detected image and outputs the first area including the traffic light group.
  • the first area is sent to the light head count classifier 302, which processes the first area and directly outputs the traffic light head count information of the traffic light group.
  • otherwise, detection can be performed by the method described in Mode 2 or Mode 3 in FIG. 4.
  • the to-be-detected image may be sent to the traffic light detector 301 shown in FIG. 4 , and the traffic light detector 301 processes the to-be-detected image and outputs a first area including the to-be-detected traffic light group.
  • the first area is sent to the traffic light on/off classifier 303 shown in FIG. 4, which detects the first area and outputs whether there are lighted traffic lights in it.
  • when there is a lighted traffic light, the detection method shown in Mode 2 in FIG. 4 above may be performed.
  • the first area is sent to the light head detector 304 shown in FIG. 4.
  • the light head detector 304 detects the first area and outputs light head type information and the first information.
  • from the first information, the number of traffic light heads in each light group can be obtained through the method described in FIG. 5.
  • finally, the traffic light head count and light head category information of each traffic light group are output.
  • for the traffic light information "arrow_left_3" shown in image 404 in the figure, "arrow_left" indicates that the category of the lighted light head in the traffic light group is a left arrow, and "3" indicates that the number of traffic light heads in the group is 3.
  • the detection method shown in mode 3 in the above-mentioned FIG. 4 may also be performed.
  • the first area is sent to the light head count classifier 302 shown in FIG. 4, which processes the first area and directly outputs the head count information N1 and the confidence level P1 of the traffic light group.
  • meanwhile, the first area is sent to the light head detector 304 shown in FIG. 4, which detects the first area and outputs light head type information and the first information.
  • from the first information, the head count information N2 and the confidence level P2 of each light group can be obtained through the method described in FIG. 5.
  • the light head category information, together with whichever of the head counts N1 and N2 has the higher confidence, is finally output.
  • for the traffic light information "circle_3" shown in images 405 and 406, "circle" indicates that the category of the lighted light head in the traffic light group is a circle, and "3" indicates that the number of traffic light heads in the group is 3.
  • the above representations of traffic light information are only exemplary; the traffic light information in the embodiments of the present application may adopt any suitable representation form.
  • in the solution of the present application, the neural network is used to recognize and detect the traffic light group and directly output the traffic light information, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • on this basis, the traffic light heads are independently detected and the light head information is output separately, which avoids taking the attribute of a single light head as the category attribute of the entire group and can effectively solve the problem of indistinguishable combined traffic lights.
  • moreover, the light heads are detected in a small local area, which ensures detection efficiency and accuracy. At the same time, more detailed and accurate traffic light information is also helpful for downstream decision-making.
  • FIG. 7 is a schematic block diagram of a traffic light detection apparatus provided by an embodiment of the present application.
  • the apparatus of FIG. 7 may be a specific example of computer system 112 in FIG. 1 or processor 203 in FIG. 2 .
  • the traffic light detection apparatus 500 may execute each process of the above traffic light detection method; to avoid repetition, details are not described again.
  • the detection apparatus 500 includes an acquisition unit 510 and a processing unit 520 .
  • the obtaining unit 510 is configured to use a neural network to obtain a first area of the image to be detected, where the first area includes N traffic light groups, and N is a positive integer.
  • the above-mentioned traffic light group includes at least one traffic light head.
  • An example of the acquisition unit 510 is the traffic light detector 301 in FIG. 4 , which will not be described in detail to avoid repetition.
  • the processing unit 520 is configured to acquire traffic light information according to the first region by using a neural network, where the traffic light information includes information on the number of traffic light heads in each traffic light group.
  • in the solution of the present application, the neural network is used to recognize and detect the traffic light group and directly output the traffic light information, especially the number of traffic light heads, so as to provide more detailed and accurate traffic light information.
  • on this basis, the traffic light heads are independently detected and the light head information is output separately, which avoids taking the attribute of a single light head as the category attribute of the entire group and can effectively solve the problem of indistinguishable combined traffic lights.
  • moreover, the light heads are detected in a small local area, which ensures detection efficiency and accuracy. At the same time, more detailed and accurate traffic light information is also helpful for downstream decision-making.
  • the above neural network may include a light head count classifier, such as the light head count classifier 302 in FIG. 4, which will not be described in detail to avoid repetition.
  • the processing unit 520 may obtain traffic light information according to the first area, where the traffic light information includes information on the number of traffic light heads in each traffic light group.
  • the above-mentioned neural network may include a traffic light on/off classifier and a light head detector, for example, the traffic light on/off classifier 303 and the light head detector 304 in FIG. 4 .
  • the processing unit 520 is specifically configured to: send the first area of the image to be detected to the traffic light on/off classifier, and output the traffic light on/off information of the first area of the image to be detected.
  • the first area of the image to be detected is sent to the light head detector, and the traffic light category information and the first information are output, where the first information includes: the total length of the traffic light detection frame, the lengths of the lighted traffic light detection frames, and the number of lighted traffic light detection frames.
  • the processing unit 520 may also output traffic light information according to the first information.
  • the traffic light information includes information on the number of traffic light heads in each traffic light group.
  • the above neural network may include a light head count classifier, a traffic light on/off classifier, and a light head detector, for example, the light head count classifier 302, the traffic light on/off classifier 303, and the light head detector 304 shown in FIG. 4.
  • the processing unit 520 is configured to: send the first area of the image to be detected to the traffic light on/off classifier, and output the traffic light on/off information of the first area of the image to be detected.
  • when there is a lighted traffic light, the first area of the image to be detected is sent to the light head count classifier, which outputs the first count information and the first confidence level of the traffic light heads in each traffic light group.
  • the processing unit 520 is further configured to send the first area of the image to be detected to the light head detector, and output the second number information and the second confidence level of the traffic light heads in each traffic light group. According to the first confidence level and the second confidence level, one of the first number information or the second number information is determined as the number information of the traffic light heads in each traffic light group.
  • the above traffic light information may further include: traffic light head on/off information, traffic light head color information, and traffic light head shape information.
  • the above detection device 500 is embodied in the form of functional units.
  • the term “unit” here can be implemented in the form of software and/or hardware, which is not specifically limited.
  • a "unit” may be a software program, a hardware circuit, or a combination of the two that realizes the above-mentioned functions.
  • the hardware circuits may include application-specific integrated circuits, electronic circuits, processors (e.g., shared processors, dedicated processors, or group processors) and memory for executing one or more software or firmware programs, combinational logic circuits, and/or other suitable components that support the described functionality.
  • the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • FIG. 8 is another schematic block diagram of a traffic light detection apparatus 600 provided by an embodiment of the present application.
  • the apparatus 600 includes: a communication interface 610 , a processor 620 and a memory 630 .
  • a program is stored in the memory 630
  • the processor 620 is used to execute the program stored in the memory 630
  • the execution of the program stored in the memory 630 causes the processor 620 to execute the relevant processing steps in the above method embodiments
  • the execution of the program stored in the memory 630 causes the processor 620 to control the communication interface 610 to perform the relevant steps of obtaining and outputting in the above method embodiments.
  • in a possible design, the traffic light detection apparatus 600 is a chip.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit, a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • DSP digital signal processor
  • FPGA field programmable gate array
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the present application also provides a computer program product, which includes computer program code; when the computer program code is run on a computer, the computer is caused to execute the method of any one of the foregoing embodiments.
  • the present application further provides a computer-readable medium that stores program code; when the program code is executed on a computer, the computer is caused to execute the method of any one of the foregoing embodiments.
  • the present application further provides a vehicle that includes at least one traffic light detection apparatus mentioned in the above embodiments of the present application, so that the vehicle can execute the method of any one of the above embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.


Abstract

The present application provides a traffic light detection method and apparatus, relating to the technical field of autonomous driving. The method includes: using a neural network to obtain a first area of an image to be detected, the first area including N traffic light groups, N being a positive integer, and each traffic light group including at least one traffic light head; and using the neural network to obtain traffic light information according to the first area, wherein the traffic light information includes the number information of the traffic light heads in each traffic light group. In this way, the traffic light group can be recognized and detected by the neural network, and the traffic light information, especially the number information of the traffic light heads, can be output directly, thereby providing more detailed and accurate traffic light information. The solution of the present application can be applied to the Internet of Vehicles, such as vehicle-to-everything (V2X), long term evolution for vehicle communication (LTE-V), vehicle-to-vehicle (V2V), etc.

Description

Method and Apparatus for Traffic Light Detection

Technical Field

The present application relates to the technical field of autonomous driving, and more specifically, to a method and apparatus for traffic light detection.

Background

Traffic lights are an important part of traffic regulations. During travel, an autonomous vehicle needs to determine the position and state of the traffic lights ahead accurately and in real time, so as to make correct behavioral decisions.

Detection algorithms based on deep learning are more accurate and efficient than traditional image processing techniques, and have therefore become the mainstream method for traffic light detection. However, currently known deep-learning-based traffic light detection technologies can usually only output the color and shape of the light heads, which is difficult to meet the needs of autonomous driving and is not conducive to improving driving safety.

Therefore, there is an urgent need for a traffic light detection technology that can provide traffic light information including the number of traffic light heads.
Summary

The present application provides a method and apparatus for traffic light detection, which can provide traffic light information including the number of traffic light heads.

In a first aspect, a traffic light detection method is provided, the method including: using a neural network to obtain a first area of an image to be detected, the first area including N traffic light groups, N being a positive integer, and the traffic light group including at least one traffic light head; and using the neural network to obtain traffic light information according to the first area, wherein the traffic light information includes the number information of the traffic light heads in each traffic light group.

According to the solution of the present application, the traffic light group is recognized and detected through the neural network, and the traffic light information, especially the number information of the traffic light heads, is output directly, so that more detailed and accurate traffic light information can be provided.

In this way, for example, the problem of being unable to distinguish combined traffic lights can be effectively solved. Moreover, on the basis of detecting the traffic lights, the light heads are detected in a small local area, which ensures detection efficiency and detection accuracy. At the same time, more detailed and accurate traffic light information also helps downstream decision-making.
In a possible implementation, the first area may be a small area containing the traffic light detection frame in the feature map obtained after the image to be detected is re-encoded by the neural network.

In a possible implementation, in addition to the number information of the traffic light heads, the above traffic light information may also include other information about the traffic lights, such as traffic light head on/off information, traffic light head color information, traffic light head shape information, and traffic light head category information. This can provide more detailed information.

Processing the image to be detected through the neural network to obtain the above traffic light information enables end-to-end detection processing and improves detection efficiency and accuracy.

In a possible implementation, the image to be detected may be a region of interest in an image captured by a vehicle-mounted camera, which can reduce the amount of data processing and improve detection efficiency. For another example, the image to be detected may also be the image captured by the vehicle-mounted camera itself, which can simplify the processing flow.

In a possible implementation, the neural network may include a classifier; the first area of the image to be detected is input into the classifier, and the above traffic light information is output.

In a possible implementation, the neural network may include a light head count classifier. The first area of the image to be detected may be input into the light head count classifier, which outputs the number information of the traffic light heads in each traffic light group.

In a possible implementation, the neural network may include a traffic light on/off classifier and a light head detector. The first area of the image to be detected is input into the traffic light on/off classifier, which outputs the traffic light on/off information of the first area; when there is a lighted traffic light in the first area of the image to be detected, the first area is input into the light head detector, which outputs the traffic light information.

With reference to the first aspect, in some implementations of the first aspect, inputting the first area of the image to be detected into the light head detector and outputting the traffic light information includes: inputting the first area of the image to be detected into the light head detector and outputting first information, where the first information includes: length information of the traffic light detection frame, length information of the lighted traffic light detection frames, and number information of the lighted traffic light detection frames; and outputting the above traffic light information according to the first information.
According to the solution of the present application, the traffic light group is recognized and detected through the neural network, and the traffic light information, especially the number information of the traffic light heads, is output directly, so that more detailed and accurate traffic light information can be provided.

In this way, for example, the problem of being unable to distinguish combined traffic lights can be effectively solved. Moreover, on the basis of detecting the traffic lights, the light heads are detected in a small local area, which ensures detection efficiency and detection accuracy. At the same time, more detailed and accurate traffic light information also helps downstream decision-making.

In a possible implementation, the neural network may include a light head count classifier, a traffic light on/off classifier, and a light head detector. The first area of the image to be detected is input into the traffic light on/off classifier, which outputs the traffic light on/off information of the first area; when there is a lighted traffic light in the first area, the first area is input into the light head count classifier, which outputs the first count information and the first confidence level of the traffic light heads in each traffic light group; the first area is also input into the light head detector, which outputs the second count information and the second confidence level of the traffic light heads in each traffic light group; according to the first confidence level and the second confidence level, one of the first count information or the second count information is determined as the number information of the traffic light heads in each traffic light group.

According to the solution of the present application, the traffic light group is recognized and detected through the neural network, and the traffic light information, especially the number information of the traffic light heads, is output directly, so that more detailed and accurate traffic light information can be provided.

In this way, for example, the problem of being unable to distinguish combined traffic lights can be effectively solved. Moreover, on the basis of detecting the traffic lights, the light heads are detected in a small local area, which ensures detection efficiency and detection accuracy. At the same time, more detailed and accurate traffic light information also helps downstream decision-making.
In a second aspect, a traffic light detection apparatus is provided, including: an acquisition unit, configured to use a neural network to obtain a first area of an image to be detected, the first area including N traffic light groups, N being a positive integer, and the traffic light group including at least one traffic light head; and a processing unit, configured to use the neural network to obtain traffic light information according to the first area, wherein the traffic light information includes the number information of the traffic light heads in each traffic light group.

According to the solution of the present application, the traffic light group is recognized and detected through the neural network, and the traffic light information, especially the number information of the traffic light heads, is output directly, so that more detailed and accurate traffic light information can be provided.

In this way, for example, the problem of being unable to distinguish combined traffic lights can be effectively solved. Moreover, on the basis of detecting the traffic lights, the light heads are detected in a small local area, which ensures detection efficiency and detection accuracy. At the same time, more detailed and accurate traffic light information also helps downstream decision-making.

In a possible implementation, the first area may be the area containing the traffic light detection frame in the feature map obtained after the image to be detected is re-encoded by the neural network.

In a possible implementation, in addition to the number information of the traffic light heads, the above traffic light information may also include other information about the traffic lights, such as traffic light head on/off information, traffic light head color information, traffic light head shape information, and traffic light head category information. This can provide more detailed information.

Processing the image to be detected through the neural network to obtain the above traffic light information enables end-to-end detection processing and improves detection efficiency and accuracy.

In a possible implementation, the image to be detected may be a region of interest in an image captured by a vehicle-mounted camera, which can reduce the amount of data processing and improve detection efficiency. For another example, the image to be detected may also be the image captured by the vehicle-mounted camera itself, which can simplify the processing flow.
在本申请实施例的一些实现方式中,神经网络可包括分类器,处理单元具体用于:将待检测图像的第一区域输入该分类器,输出上述交通灯信息。
在一种可能的实现方式中,神经网络可包括灯头个数分类器,处理单元具体用于:将待检测图像的第一区域输入该灯头个数分类器,输出每个交通灯组中的交通灯头的个数信息。
在另一种可能的实现方式中,神经网络包括交通灯亮灭分类器和灯头检测器,处理单元用于:将待检测图像的第一区域输入交通灯亮灭分类器,输出待检测图像的第一区域的交通灯亮灭信息;在待检测图像的第一区域存在亮着的交通灯时,将待检测图像的第一区域输入灯头检测器,输出上述交通灯信息。
其中,处理单元用于将所述待检测图像的第一区域输入所述灯头检测器,输出所述交通灯信息,包括:处理单元具体用于:将待检测图像的第一区域输入灯头检测器,输出第一信息,其中第一信息包括:交通灯检测框长度信息、亮着的交通灯检测框长度信息和亮着的交通灯检测框个数信息;根据第一信息,输出上述交通灯信息。
根据本申请的方案,通过神经网络对交通灯组识别检测,直接输出交通灯的信息,尤其是交通灯头的个数信息,从而能够提供更详尽、准确的交通灯信息。
这样,例如,能够有效解决无法区别组合交通灯的问题。并且,在检测交通灯的基础上,在小的局部区域检测灯头,保证了检测效率和检测准确性。同时,更详尽、准确的交通灯信息也有助于下游决策判断。
在一种可能的实现方式中,神经网络可同时包括灯头个数分类器、交通灯亮灭分类器和灯头检测器,处理单元用于:将待检测图像的第一区域输入交通灯亮灭分类器,输出待检测图像的第一区域的交通灯亮灭信息;在待检测图像的第一区域存在亮着的交通灯时,将待检测图像的第一区域输入灯头个数分类器,输出每个交通灯组中的交通灯头的第一个数信息和第一置信度;将待检测图像的第一区域输入灯头检测器,输出每个交通灯组中的交通灯头的第二个数信息和第二置信度;根据第一置信度和第二置信度,将第一个数信息或第二个数信息中的一个确定为每个交通灯组中的交通灯头的个数信息。
根据本申请的方案,通过神经网络对交通灯组识别检测,直接输出交通灯的信息,尤其是交通灯头的个数信息,从而能够提供更详尽、准确的交通灯信息。
这样，例如，能够有效解决无法区别组合交通灯的问题。并且，在检测交通灯的基础上，在小的局部区域检测灯头，保证了检测效率和检测准确性。同时，更详尽、准确的交通灯信息也有助于下游决策判断。
在一种可能的设计中，该交通灯检测装置为芯片。该芯片包括处理模块与通信接口，所述处理模块用于控制所述通信接口与外部进行通信，所述处理模块还用于实现第一方面的方法。
第三方面,提供一种交通灯检测的装置,所述装置包括存储器和处理器,所述存储器用于存储指令,所述处理器用于执行所述存储器存储的指令,并且对所述存储器中存储的指令的执行使得所述处理器执行第一方面的方法。
第四方面,提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被计算机执行时使得所述计算机实现第一方面的方法。可选地,所述计算机可以为上述交通灯检测装置。
第五方面,提供一种包含指令的计算机程序产品,所述指令被计算机执行时使得所述计算机实现第一方面所述的方法。可选地,所述计算机可以为上述交通灯检测装置。
第六方面,提供一种车辆,所述车辆包括至少一个第二方面或第三方面所提到的交通灯检测装置,使得该车辆可以实现第一方面所述的方法。
附图说明
图1是适用于本申请实施例的车辆100的功能框图。
图2是适用于本申请实施例的自动驾驶系统200的功能框图。
图3是本申请实施例提供的交通灯检测方法的一例示意性流程图。
图4是本申请实施例提供的交通灯检测方法的检测流程框图。
图5是本申请实施例提供的交通灯检测方法的灯头个数推演示意图。
图6是本申请实施例提供的交通灯检测方法的一例输入输出示意图。
图7是本申请实施例提供的交通灯检测装置的一例示意性框图。
图8是本申请实施例提供的交通灯检测装置的另一例示意性框图。
具体实施方式
下面将结合附图,对本申请实施例中的技术方案进行描述。
图1示出了本申请实施例适用的车辆100的功能框图。其中,车辆100可以配置为完全或部分的自动驾驶模式。在车辆100处于自动驾驶模式中时,可以将车辆100配置为在没有和人交互的情况下操作。
车辆100可包括多个子系统,例如传感系统104、控制系统106、计算机系统112和用户接口116。可选地,车辆100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。
传感系统104可包括感测关于车辆100周边的环境的信息的若干个传感器。例如,雷达126、激光测距仪128以及相机130。
雷达126可利用无线电信号来感测车辆100的周边环境内的物体,激光测距仪128可利用激光来感测车辆100所位于的环境中的物体,相机130可用于捕捉车辆100的周边环境的多个图像。其中,相机130可以是静态相机或视频相机。
控制系统106用于控制车辆100及其组件的操作。控制系统106可包括各种元件，其中包括计算机视觉系统140以及障碍物避免系统144。
计算机视觉系统140可以操作来处理和分析由相机130捕捉的图像以便识别车辆100周边环境中的物体和/或特征。所述物体和/或特征可包括交通信号、道路边界和障碍物。计算机视觉系统140可使用物体识别算法、运动中恢复结构(Structure from Motion,SFM)算法、视频跟踪和其他计算机视觉技术。
障碍物避免系统144用于识别、评估和避免或者以其他方式越过车辆100的环境中的潜在障碍物。
可选地,控制系统106可以增加或替换地包括除了所示出和描述的那些以外的组件,或者也可以减少一部分上述示出的组件。
车辆100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个处理器113,处理器113执行存储在例如数据存储装置114这样的非暂态计算机可读介质中的指令115。计算机系统112还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
可选地,数据存储装置114可包含指令115(例如,程序逻辑),指令115可被处理器113执行来执行车辆100的各种功能,包括以上描述的那些功能。数据存储装置114也可包含额外的指令,包括向传感系统104和/或控制系统106中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令115以外,数据存储装置114还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100和计算机系统112使用。
用户接口116,用于向车辆100的用户提供信息或从其接收信息。
计算机系统112可基于从各种子系统（例如，传感系统104和控制系统106）以及从用户接口116接收的输入来控制车辆100的功能。可选地，计算机系统112可操作来对车辆100及其子系统的许多方面提供控制。
可选地，上述这些组件中的一个或多个可与车辆100分开安装或关联。例如，数据存储装置114可以部分或完全地与车辆100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
应理解,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
可选地，自动驾驶车辆100或者与自动驾驶车辆100相关联的计算设备（如图1的计算机系统112、计算机视觉系统140、数据存储装置114）可以基于所识别的物体的特性（例如，交通灯等）来调整车辆的驾驶方式。例如，在本申请实施例中，当车辆检测到红灯或黄灯时，可以降低车速乃至停止；或者，当车辆检测到绿灯时，可以维持车速或者仅小幅度降低车速；或者，当车辆检测到转向绿灯时，可以按照转向指示进行转向驾驶。
上述识别的物体特性也可以用来验证或更新高精地图。即,高精地图可包括交通灯信息,该信息的来源或验证基准可以是上述识别的物体特性。例如,当车辆发现前方物体特性信息(如交通灯信息或其他交通标志信息)与高精地图所记录的信息不一致时,可以更新高精地图,或者向有权限的服务器发送高精地图验证错误信息,以使得高精地图的供应商及时确认准确的物体特性信息。
上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
图2示出了本申请实施例适用的自动驾驶系统200的功能框图。如图2所示，计算机系统201包括处理器203。处理器203和系统总线205耦合。处理器203可以是一个或者多个处理器，其中每个处理器都可以包括一个或多个处理器核。系统总线205和输入输出（I/O）接口215耦合。I/O接口215和多种I/O设备进行通信，例如，收发器223（可以发送和/或接收无线电通信信号），摄像头255（可以捕捉动态数字视频图像）等。
计算机系统201可以通过网络接口229和软件部署服务器249通信。网络接口229是硬件网络接口，比如，网卡。网络227可以是外部网络，比如因特网，也可以是内部网络，比如以太网或者虚拟私人网络（virtual private network，VPN）。可选地，网络227还可以是无线网络，比如WiFi网络，蜂窝网络等。
硬盘驱动器233和系统总线205耦合。系统内存235和系统总线205耦合。运行在系统内存235的数据可以包括计算机202的操作系统237和应用程序243。
应用程序243包括控制汽车自动驾驶相关的程序,比如,管理自动驾驶的汽车和路上障碍物交互的程序,控制自动驾驶汽车路线或者速度的程序,控制自动驾驶汽车和路上其他自动驾驶汽车交互的程序。应用程序243也存在于软件部署服务器(deploying server)249的系统上。
图3示出了根据本申请实施例提供的交通灯检测方法的一例示意流程图。图3的方法可以由图1的车辆100或图2的自动驾驶系统200执行。
S310,采用神经网络获取待检测图像的第一区域,该第一区域包括N个交通灯组,N为正整数,该交通灯组包括至少一个交通灯头。
例如,待检测图像可以是通过图1中所述车辆100中的相机130拍摄的一帧图像,也可以是相机130录像获取到的多帧图像的一帧。待检测图像可以是直接拍摄得到的图像帧,也可以是对拍摄得到的图像帧进行预处理得到的处理图像,例如对比度处理、亮度处理、降噪处理等优化处理。
在得到待检测图像后,图1中所述车辆100中的计算机视觉系统140可以操作来处理和分析所述待检测图像来识别交通灯。
在本申请所提供的实施例中,可通过交通灯检测器从待检测图像中获取适合后续检测的第一区域。其中,第一区域可包括N个交通灯组,N为正整数,所述交通灯组包括至少一个交通灯头。例如,第一区域可以是待检测图像经神经网络重编码处理后的特征图中的包含交通灯检测框的部分区域,这样可以降低数据处理量,从而提高检测的效率。
应理解,上述交通灯检测器用于对待检测图像进行重编码处理以得到便于后续检测的第一区域,其可以为任意目标检测器,如快速卷积神经网络(faster regions with convolutional neural network features,Faster RCNN)检测器,单次目标(you only look once,YOLO)检测器,本申请对此不作任何限定。
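作为理解上述第一区域提取过程的一个极简示意（其中的函数名、数组形状与检测框表示均为示例假设，并非本申请的正式实现），可以将第一区域近似为特征图中覆盖全部交通灯检测框的最小矩形区域：

```python
import numpy as np

def extract_first_region(feature_map, boxes):
    """从特征图中截取覆盖全部交通灯检测框的最小矩形区域，作为第一区域。

    feature_map: 形状为 (H, W, C) 的特征图（数组形状为示例假设）
    boxes: 检测框列表，每项为 (x1, y1, x2, y2)
    """
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    return feature_map[y1:y2, x1:x2, :]
```

实际系统中，该截取通常在检测器网络内部完成，此处仅用于说明“只处理包含检测框的局部区域”这一思路。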
S320,采用神经网络根据第一区域,获取交通灯信息,其中,交通灯信息包括每个交通灯组中的交通灯头的个数信息。
其中，交通灯头是能够用来指挥交通通行的信号灯，一般由特定颜色（例如红、黄、绿等颜色）和/或特定图案（例如，特定形状、数字图案、行人图案、方向图案、车道图案等）构成。
一个或多个交通灯头可以组成交通灯组，也可以称为组合交通灯。例如一个红灯、一个黄灯和一个绿灯可以组成一个基本功能的交通灯组。更加复杂的交通灯组，可以包括更复杂的交通灯头，例如可以指示车辆行驶方向、准许/禁止行驶车道、禁止行驶方向、倒数计时器等。如果允许的话，也可以由一个按时序变化的灯头组成交通灯组，例如灯头可显示一个倒数计时器，在计时到零时变为红灯。一个交通灯组所包括的交通灯头的个数常见为3个，但也可能更少或更多。准确地识别交通灯头的个数信息，有助于后续车辆策略控制、高精地图验证/更新等应用需求。
应理解,在得到包含待检测交通灯组的第一区域后,图1中的计算机视觉系统140,或图2中的处理器203可以对所述第一区域进行处理和分析,以得到包含每个交通灯组中的交通灯头个数在内的交通灯信息。
在本申请所提供的实施例中,交通灯信息可包括但不限于:每个灯组中的交通灯头个数信息、交通灯头颜色信息、交通灯头形状信息和交通灯头类别信息。详尽地识别交通灯信息,有助于后续车辆策略控制、高精地图验证/更新等应用需求。
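为直观说明上述交通灯信息可包含的字段，下面给出一个示意性的数据结构（字段名与取值均为示例假设，并非本申请限定的格式）：

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LampHead:
    """单个交通灯头的示意属性。"""
    on: bool                      # 灯头亮/灭信息
    color: Optional[str] = None   # 灯头颜色信息，如 "red"、"green"
    shape: Optional[str] = None   # 灯头形状/类别信息，如 "circle"、"arrow_left"

@dataclass
class TrafficLightGroupInfo:
    """一个交通灯组的示意性交通灯信息。"""
    head_count: int                                      # 该灯组中交通灯头的个数信息
    heads: List[LampHead] = field(default_factory=list)  # 各灯头的逐个信息
```

以逐个灯头为粒度组织信息，便于下游决策或高精地图更新按需取用。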
根据本申请的方案,通过神经网络对交通灯组识别检测,能够实现交通灯信息的端到端输出,尤其是交通灯头的个数信息,从而能够提供更详尽、准确的交通灯信息。
对于仅将交通灯组中某一个亮着的交通灯头的属性作为整个交通灯组的属性输出的技术,该技术输出的信息较少,不能满足复杂场景的需求。假设一个交通灯组包括三个交通灯头,即一个左向绿灯、一个圆形红灯和一个右向绿灯。
当一个交通灯组中包括的部分交通灯头不亮时,该技术可能无法输出这些不亮的交通灯头的任何信息,甚至无法输出该交通灯组的任何信息(例如,交通灯个数信息、交通灯类型信息等)。此时,可能导致高精地图无法准确地被验证或更新。例如,假设由于路况优化,在原先的三个灯头的交通灯组基础上增加一个新的圆形黄灯,改进为四个灯头的交通灯组,但是当车辆经过时,新增的黄灯未亮,仍然是原先三个灯头之一亮起,此时由于不能识别交通灯头的个数,车辆会认为该交通灯组未发生变化,这样会导致高精地图的准确性和更新效率降低。
再例如,当该交通灯组中左向绿灯和圆形红灯同时亮时,该技术只能将这两个亮着的灯头中的一个灯头的属性作为整个交通灯组的属性,即输出该交通灯组的交通灯信息为左向绿灯或圆形红灯,这会导致输出信息与实际情况不符,可能导致下游设备(例如,自动驾驶的车辆)决策错误。具体地,这两个灯头同时亮起,表示禁止直行,但允许左拐,但如果仅输出圆形红灯,会导致本来可以左拐的车辆也错误地减速停止。
根据本申请的方案,通过神经网络实现交通灯信息的端到端一次性输出(该输出可包括每个交通灯头粒度的信息),提高了处理效率;并且,本申请的方案可以输出交通灯头的个数信息,有利于满足下游应用的需求。另一方面,本申请的一些实施例能够对一个交通灯组中所有的交通灯头进行独立检测,分别输出灯头信息,避免了将某一个灯头属性作为整个灯组的属性输出,能够有效解决无法区别组合交通灯的问题。
图4示出了根据本申请实施例提供的交通灯检测流程框图。在本申请实施例中，为便于描述，“交通灯组亮”是指交通灯组中至少有一个灯头亮，“交通灯组灭”是指交通灯组中所有的灯头都灭。
如图4所示,以及如上图3中S310所述,输入图像300包括待检测交通灯组。交通灯组包括至少一个交通灯头。
将该输入图像300作为交通灯检测器301的输入。交通灯检测器301的输出为包括N个交通灯组的第一区域。
例如,交通灯检测器可对待检测图像经神经网络进行重编码处理,提取所得到的特征图中的包含交通灯检测框的部分区域,作为上述第一区域。这样可以提高检测的快速性和准确性。
在得到第一区域后,可按照三种可能的方式执行如上图3中S320所述的过程,采用神经网络根据第一区域,获取交通灯信息,其中,交通灯信息包括每个交通灯组中的交通灯头的个数信息。
方式1:
可如图4中最左边的路径所示,将第一区域输入至灯头个数分类器302。灯头个数分类器302输出交通灯个数信息。具体地,灯头个数分类器302可以是对输入的第一区域进行检测、以端到端的方式输出交通灯头个数信息的神经网络。灯头个数分类器302可以处理亮着的交通灯和灭的交通灯。
例如,灯头个数分类器302可使用softmax多分类器或其他可实现多分类功能的分类器。当使用softmax多分类器时,softmax函数将x分类为类别j的概率为:
$$P(y=j\mid x)=\frac{e^{\theta_j^{T}x}}{\sum_{i=1}^{N}e^{\theta_i^{T}x}}\tag{1}$$
其中，y为预测的类别，N为可能的类别总个数，T是转置符号，θ_j为分类器预测为类别j所需的参数向量，其是由神经网络训练得到的。
作为示例而非限定,在本申请实施例中,假设可能的类别总数N为6,则,y可以为小于或等于6的正整数,该函数可以分别输出y为1到6之间正整数的概率,概率最高的作为输出,例如:P(y=1)=0.1、P(y=2)=0.1、P(y=3)=0.7、P(y=4)=0.05、P(y=5)=0.05、P(y=6)=0,则灯头个数N1为3,置信度P1为0.7。
置信度P1为可选的输出。在一些实现方式中,可以直接输出置信度最高的灯头个数N1作为交通灯头的个数信息。在另一些实现方式中,可以同时输出灯头个数N1和置信度P1。
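上述softmax分类并取最高概率类别的过程，可用如下极简代码示意（其中类别数、输入概率等均为示例假设，实际分类器的参数由神经网络训练得到）：

```python
import numpy as np

def softmax(logits):
    """数值稳定的softmax：先减去最大值再取指数。"""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def predict_head_count(probs):
    """probs[j] 为“灯头个数为 j+1”的概率，返回 (个数N1, 置信度P1)。"""
    j = int(np.argmax(probs))
    return j + 1, float(probs[j])
```

以文中示例的概率分布为输入，该函数输出灯头个数3、置信度0.7。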
通过该方式无需对交通灯亮灭的情况进行判断,可以以端到端的方式直接输出交通灯组的交通灯个数信息,从而提供更详尽的交通灯信息,有助于下游决策判断或其他应用需求。
方式2:
如图4中的中间路径所示,可以将第一区域输入至交通灯亮灭分类器303。交通灯亮灭分类器303用于检测第一区域中是否至少有一个亮着的交通灯。
如有至少一个交通灯亮，则交通灯亮灭分类器303将此第一区域输出至灯头检测器304。灯头检测器304输出灯头类别信息和灯头检测框信息。其中灯头类别信息表示灯头的类别，例如形状和颜色，如绿色左向箭头、红色圆形等。灯头检测框信息包括亮着的灯头个数、灯头检测框的长度等信息，可用于后续推演得到交通灯头个数信息。该方式可以处理交通灯组亮的情形。
例如,交通灯亮灭分类器可使用softmax多分类器或其他可实现多分类功能的分类器。当使用softmax多分类器时,可利用上述公式(1)对交通灯亮、灭情况进行分类。作为示例而非限定,在本申请实施例中,可能的类别y可以为0(表示“否”)或1(表示“是”),则,可能的类别总数N为2,该函数分别可以输出y为0或1的概率,并将概率最高的作为输出。
上述交通灯检测框信息可称为第一信息,其包括:该交通灯组中亮着的灯头检测框的个数n、第i个亮着的灯头检测框的长度wi和置信度Pi以及灯头检测框的总长度W。
图5是交通灯检测框的一个示例性的示意图。如图5所示，最左侧的交通灯亮，其余交通灯灭，则亮着的灯头检测框个数n为1，亮着的灯头检测框的长度为w1，交通灯检测框总长度为W。图5仅仅是示例性的，如果有更多个灯头亮，即n为大于1的整数，则亮着的灯头检测框的长度分别为w1、w2……wn。
可根据公式(2)计算出亮着的灯头的平均长度 $\bar{w}$：

$$\bar{w}=\frac{1}{n}\sum_{i=1}^{n}w_i\tag{2}$$

再根据公式(3)计算出灯头个数N2：

$$N_2=\mathrm{round}\left(\frac{W}{\bar{w}}\right)\tag{3}$$

并根据公式(4)计算出置信度P2：

$$P_2=\frac{1}{n}\sum_{i=1}^{n}P_i\tag{4}$$
由此,按照上述公式(2)至(4),可以推演得出该交通灯组中的交通灯头个数N2以及置信度P2。上述公式(2)至(4)仅仅是示例性的,本申请实施例可以采用其他等价的推演公式。
置信度P2为可选的输出。在一些实现方式中,可以得到灯头个数N2作为交通灯头的个数信息,不再执行公式(4)的运算。在另一些实现方式中,可以同时输出灯头个数N2和置信度P2。
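上述由第一信息推演灯头个数的过程可以用如下代码示意（其中对公式(2)至(4)的具体形式按上下文作了假设，取整方式等细节仅为示例）：

```python
def infer_head_count(W, lit_widths, lit_confs):
    """由第一信息推演灯头个数（对应文中公式(2)至(4)的一种理解）。

    W: 交通灯检测框总长度
    lit_widths: 各亮着的灯头检测框长度 [w1, ..., wn]
    lit_confs: 对应的检测置信度 [P1, ..., Pn]
    返回: (灯头个数N2, 置信度P2)
    """
    n = len(lit_widths)
    w_avg = sum(lit_widths) / n      # 公式(2): 亮着的灯头的平均长度
    N2 = round(W / w_avg)            # 公式(3): 总长度除以平均长度并取整
    P2 = sum(lit_confs) / n          # 公式(4): 各灯头置信度取平均
    return N2, P2
```

例如，总长度为30、仅一个长度为10的灯头亮时，推演得到灯头个数为3。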
通过该方式,我们可以对至少有一个交通灯亮的交通灯组进行检测,并最终输出该交通灯组的交通灯信息,例如,该交通灯组的交通灯头个数信息N2和亮着的灯头类别信息(如,绿色箭头等)。
方式3:
如图4中最右侧的路径所示,将第一区域输入至交通灯亮灭分类器303。方式3中的交通灯亮灭分类器303的处理过程与上述方式2中相同,因此不再重复描述。
方式3与方式2的区别在于,在交通灯亮灭分类器303的检测结果为第一区域中至少有一个亮着的交通灯的情况下,可采用方式1中的灯头个数分类器302和方式2中的灯头检测器304分别对第一区域进行检测,得到各自的检测结果。然后,可以基于灯头个数分类器302和灯头检测器304的结果,输出最终的交通灯个数信息。例如,可比较两个结果的置信度,将高置信度对应的交通灯个数作为输出。
具体地，类似于方式1，将第一区域输送至灯头个数分类器302，灯头个数分类器302对输入的第一区域进行检测，以端到端的方式直接输出交通灯组的第一交通灯个数信息N1以及第一置信度P1。另一方面，同时可类似于方式2，将第一区域输送给灯头检测器304，灯头检测器304对第一区域进行处理，输出灯头类别信息和第一信息。其中，对于第一信息又可以经图5中所述的方法得到交通灯组的第二交通灯个数N2以及第二置信度P2。
最后,比较第一交通灯个数N1的置信度P1与第二交通灯个数N2的置信度P2,将置信度较高的一个作为最终输出。例如,假设P1>P2,则可以输出第一交通灯个数N1作为交通灯个数信息;反之亦然。
上述置信度判别方式仅仅是示例性的，本申请实施例还可以采用其他方式，根据多个输出结果确定交通灯个数信息。例如，若N1=N2，则无需考虑置信度，直接将N1或N2作为交通灯个数信息。再例如，若P1=P2且N1不等于N2，还可以结合其他方式进一步检测，如结合用户确认、高精地图历史信息或服务器确认等方式。
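方式3中基于置信度选取最终个数信息的判别逻辑，可以用如下示意代码表示（其中两路一致时直接输出等处理均为按上文描述的示例假设）：

```python
def fuse_head_count(N1, P1, N2, P2):
    """在分类器与检测器两路灯头个数结果之间选取最终输出（示意逻辑）。"""
    if N1 == N2:
        return N1                 # 两路结果一致时无需比较置信度
    # 置信度相等且个数不一致时，实际系统可结合高精地图历史信息等进一步确认
    return N1 if P1 >= P2 else N2
```

该融合方式利用两条独立通路互相校验，有助于提高个数信息的可靠性。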
类似于方式1和方式2,除了交通灯个数信息之外,方式3也可以同时输出其他交通灯信息,如交通灯类别信息(例如,绿色左向箭头等)。
通过该方式,可以对至少有一个交通灯亮的交通灯组进行检测,经过多种方式比较输出置信度较高的交通灯个数信息,从而提供更详尽、准确的交通灯信息,有助于下游决策判断或其他应用需求。
应理解,在本申请实施例中,输入的待检测图像也可以是待检测图像的感兴趣区域(region of interest,ROI),其中,感兴趣区域可以是待检测图像中最有可能包括待检测交通灯组的小区域或部分区域,或者,感兴趣区域还可以是待检测图像中需要进一步处理的小区域或部分区域。例如,因交通灯通常在待检测图像中的上半部分,如图中黑色框区域所示,可以设定待检测图像的上半部分为感兴趣区域,或者上面1/3部分为感兴趣区域,或者按照其他设定方式。感兴趣区域不限于待检测图像的上部,由于交通灯也可能出现在图像中的中间部分,或者左侧、右侧,因此,可以根据合适的方式确定感兴趣区域,本申请实施例对此不作限制。
设定感兴趣区域,可以减少所需处理的数据量,使检测过程更加高效。如上所述,感兴趣区域的选择方式和区域大小不应构成限制,本申请所描述的感兴趣区域可以是其他通过对待检测图像进行筛选、截取等方式获得的图像。
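按上文所述截取图像上部作为感兴趣区域的做法，可用如下代码示意（截取比例为示例假设，实际可按场景配置）：

```python
import numpy as np

def crop_roi_top(image, ratio=0.5):
    """截取图像上方 ratio 部分作为感兴趣区域（默认取上半部分，比例仅为示例）。"""
    h = image.shape[0]
    return image[: int(h * ratio)]
```

对一幅 90×120 的图像取上半部分，所得感兴趣区域为 45×120，后续检测的数据量约减半。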
根据本申请的方案,通过神经网络对交通灯组识别检测,直接输出交通灯的信息,尤其是交通灯头的个数信息,从而能够提供更详尽、准确的交通灯信息。
这样,例如,当灯组中有多个交通灯头亮时,对交通灯头进行独立检测,分别输出灯头信息,避免了将某一个灯头属性作为整个灯组的类别属性,能够有效解决无法区别组合交通灯的问题。并且,在检测交通灯的基础上,在小的局部区域检测灯头,保证了检测效率和检测准确性。同时,更详尽、准确的交通灯信息也有助于下游决策判断。
为更加直观的展示本申请实施例提供的检测方法,图6示出了根据本申请实施例提供的交通灯检测方法的一例输入输出示意图。
如图6所示,输入待检测图像401,该待检测图像包括多个交通灯组,可通过图4中所述的检测方法,分别输出各交通灯组的检测结果图像403至406以及相关的交通灯信息。其中,交通灯信息可作为检测结果图像的特征信息或标注信息等,与该图像一起输出。
例如,对于待检测图像中无法判断亮灭的交通灯组或交通灯全灭的交通灯组,可经图4中方式1所述的方法,直接输出该交通灯组中的交通灯个数信息。
可选地，可将待检测图像输送至图4中所示交通灯检测器301，交通灯检测器301对待检测图像进行处理，输出包含该交通灯组在内的第一区域。将第一区域输送至灯头个数分类器302，灯头个数分类器302对第一区域进行处理，直接输出该交通灯组中的交通灯个数信息。如图中图像403所示的交通灯信息“Trafficlight_3”，其中“Trafficlight”字段表示此图像包含交通灯组，“3”表示该交通灯组中交通灯头个数为3个，其中“_”为字段分隔符。
再例如,对于有至少一个亮着的交通灯的交通灯组,可经图4中方式2或方式3所述的方法进行检测,并输出交通灯组中交通灯头个数信息和灯头类别信息。
具体地,可将待检测图像输送至图4中所示交通灯检测器301,交通灯检测器301对待检测图像进行处理,输出包含待检测交通灯组在内的第一区域。将第一区域输送至图4中所示的灯头亮灭分类器303,灯头亮灭分类器303对第一区域进行检测,输出第一区域中交通灯亮灭结果。在第一区域中有至少一个亮着的交通灯时,可以执行上述图4中方式2所示的检测方法。将第一区域输送至图4中所示的灯头检测器304,灯头检测器304对第一区域进行检测,输出灯头类别信息和第一信息。其中,对于第一信息,可经图5中所述方法,得到每个灯组中的交通灯个数信息。最终输出每个交通灯组中的交通灯个数信息和灯头类别信息。如图中图像404所示的交通灯信息“arrow_left_3”中,“arrow_left”表示该交通灯组中的亮着的交通灯类别信息为左向箭头,“3”表示该交通灯组中交通灯头个数为3个。
可选地,在第一区域中有至少一个亮着的交通灯时,还可以执行上述图4中方式3所示的检测方法。将第一区域输送给图4中所示的灯头个数分类器302,灯头个数分类器302对第一区域进行处理,直接输出该交通灯组中的交通灯个数信息N1和置信度P1。同时,将第一区域输送给图4中所示的灯头检测器304,灯头检测器304对第一区域进行检测,输出灯头类别信息和第一信息。其中,对于第一信息,可经图5中所述方法,得到每个灯组中的交通灯个数信息N2和置信度P2。通过比较置信度P1和P2,最终输出灯头类别信息和置信度较高的交通灯个数信息N1或N2。如图像405和图像406所示的交通灯信息“circle_3”中,“circle”表示该交通灯组中的亮着的交通灯类别信息为圆形,“3”表示该交通灯组中交通灯头个数为3个。
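图6中所示的“字段_个数”形式的交通灯信息字符串，可以按如下方式拼接（字段名与分隔符均沿用文中示例，仅为一种示意表示）：

```python
def format_light_label(head_count, lit_class=None):
    """按“字段_个数”的形式拼接交通灯信息字符串（字段名沿用文中示例）。

    lit_class: 亮着的灯头类别，如 "arrow_left"、"circle"；为 None 时
               表示无法判断亮灭或全灭，仅输出灯组标记与个数。
    """
    prefix = lit_class if lit_class else "Trafficlight"
    return f"{prefix}_{head_count}"
```

例如，全灭灯组输出“Trafficlight_3”，亮着左向箭头的灯组输出“arrow_left_3”。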
应理解,上述交通灯信息的具体形式仅仅是示例性的,本申请实施例的交通灯信息可以采用任何合适的表示形式。
根据本申请的方案,通过神经网络对交通灯组识别检测,直接输出交通灯的信息,尤其是交通灯头的个数信息,从而能够提供更详尽、准确的交通灯信息。
这样,例如,当灯组中有多个交通灯头亮时,对交通灯头进行独立检测,分别输出灯头信息,避免了将某一个灯头属性作为整个灯组的类别属性,能够有效解决无法区别组合交通灯的问题。并且,在检测交通灯的基础上,在小的局部区域检测灯头,保证了检测效率和检测准确性。同时,更详尽、准确的交通灯信息也有助于下游决策判断。
图7是本申请实施例提供的交通灯检测装置的示意性框图。图7的装置可以是图1中的计算机系统112或图2中的处理器203的具体例子。
应理解，交通灯检测装置500可以执行上述交通灯检测方法的各个过程，为避免重复，不再详细描述。
如图7所示,该检测装置500包括获取单元510和处理单元520。
其中,获取单元510用于采用神经网络获取待检测图像的第一区域,该第一区域包括N个交通灯组,N为正整数。上述交通灯组包括至少一个交通灯头。获取单元510的一个例子是图4中的交通灯检测器301,为避免重复,不再详细描述。
处理单元520用于采用神经网络根据第一区域,获取交通灯信息,其中,交通灯信息包括每个交通灯组中的交通灯头的个数信息。
根据本申请的方案,通过神经网络对交通灯组识别检测,直接输出交通灯的信息,尤其是交通灯头的个数信息,从而能够提供更详尽、准确的交通灯信息。
这样,例如,当灯组中有多个交通灯头亮时,对交通灯头进行独立检测,分别输出灯头信息,避免了将某一个灯头属性作为整个灯组的类别属性,能够有效解决无法区别组合交通灯的问题。并且,在检测交通灯的基础上,在小的局部区域检测灯头,保证了检测效率和检测准确性。同时,更详尽、准确的交通灯信息也有助于下游决策判断。
可选地,作为一个实施例,上述神经网络可包括灯头个数分类器,例如图4中的灯头个数分类器302,为避免重复,不再详细描述。此时,处理单元520可根据第一区域,得到交通灯信息,其中,交通灯信息包括每个交通灯组中的交通灯头的个数信息。
可选地,作为一个实施例,上述神经网络可包括交通灯亮灭分类器和灯头检测器,例如图4中的交通灯亮灭分类器303和灯头检测器304。此时,处理单元520具体用于:将待检测图像的第一区域输送给交通灯亮灭分类器,输出待检测图像的第一区域的交通灯亮灭信息。在待检测图像的第一区域存在亮着的交通灯时,将待检测图像的第一区域输送给灯头检测器,输出交通灯类别信息和第一信息,其中第一信息包括:交通灯检测框总长度、亮着的交通灯检测框长度和亮着的交通灯检测框个数。处理单元520还可以根据第一信息,输出交通灯信息。其中,交通灯信息包括每个交通灯组中的交通灯头的个数信息。
可选地,作为一个实施例,上述神经网络可包括灯头个数分类器、交通灯亮灭分类器和灯头检测器,例如,图4中所示的灯头个数分类器302、交通灯亮灭分类器303以及灯头检测器304。处理单元520用于:将待检测图像的第一区域输送给交通灯亮灭分类器,输出待检测图像的第一区域的交通灯亮灭信息。在待检测图像的第一区域存在亮着的交通灯时,将待检测图像的第一区域输送给灯头个数分类器,输出每个交通灯组中的交通灯头的第一个数信息和第一置信度。处理单元520还用于将待检测图像的第一区域输送给灯头检测器,输出每个交通灯组中的交通灯头的第二个数信息和第二置信度。根据第一置信度和第二置信度,将第一个数信息或第二个数信息中的一个确定为每个交通灯组中的交通灯头的个数信息。
在本申请实施例中，上述交通灯信息还可以包括：交通灯灯头亮灭信息、交通灯灯头颜色信息、交通灯灯头形状信息和交通灯灯头类别信息。
需要说明的是,上述检测装置500以功能单元的形式体现。这里的术语“单元”可以通过软件和/或硬件形式实现,对此不作具体限定。
例如，“单元”可以是实现上述功能的软件程序、硬件电路或二者结合。所述硬件电路可能包括专用集成电路、电子电路、用于执行一个或多个软件或固件程序的处理器（例如共享处理器、专有处理器或组处理器等）和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。
因此，在本申请的实施例中描述的各示例的单元，能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能，但是这种实现不应认为超出本申请的范围。
图8是本申请实施例提供的交通灯检测装置600的另一示意性框图。如图8所示，该装置600包括：通信接口610、处理器620和存储器630。其中，存储器630中存储有程序，处理器620用于执行存储器630中存储的程序；对所述程序的执行，使得处理器620执行上文方法实施例中的相关处理步骤，并控制通信接口610执行上文方法实施例中的获取和输出的相关步骤。在一种可能的设计中，该交通灯检测装置600为芯片。
应注意，本申请实施例中的处理器可以是一种集成电路芯片，具有信号的处理能力。在实现过程中，上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器（digital signal processor，DSP）、专用集成电路、现场可编程门阵列（field programmable gate array，FPGA）或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件，可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器，该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成，或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器、闪存、只读存储器、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器，处理器读取存储器中的信息，结合其硬件完成上述方法的步骤。
根据本申请实施例提供的方法,本申请还提供一种计算机程序产品,该计算机程序产品包括:计算机程序代码,当该计算机程序代码在计算机上运行时,使得该计算机执行前述实施例中任意一个实施例的方法。
根据本申请实施例提供的方法,本申请还提供一种计算机可读介质,该计算机可读介质存储有程序代码,当该程序代码在计算机上运行时,使得该计算机执行前述实施例中任意一个实施例的方法。
根据本申请实施例提供的方法,本申请还提供一种车辆,该车辆包括至少一个本申请上述实施例提到的交通灯检测装置,使得该车辆能够执行前述实施例中任意一个实施例的方法。
应理解,在本申请所提供的几个实施例中,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外，在本申请各个实施例中的各功能模块可以集成在一个处理单元中，也可以是各个单元单独物理存在，也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种交通灯检测的方法,其特征在于,所述方法包括:
    采用神经网络获取待检测图像的第一区域,所述第一区域包括N个交通灯组,所述N为正整数,所述交通灯组包括至少一个交通灯头;
    采用所述神经网络根据所述第一区域,获取交通灯信息,其中,所述交通灯信息包括每个交通灯组中的交通灯头的个数信息。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述神经网络包括分类器;
    将所述待检测图像的第一区域输入所述分类器,输出所述交通灯信息。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    所述神经网络包括灯头个数分类器;
    将所述待检测图像的第一区域输入灯头个数分类器,输出所述交通灯信息。
  4. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    所述神经网络包括交通灯亮灭分类器和灯头检测器;
    将所述待检测图像的第一区域输入交通灯亮灭分类器,输出所述待检测图像的第一区域的交通灯亮灭信息;
    在所述待检测图像的第一区域存在亮着的交通灯时,将所述待检测图像的第一区域输入所述灯头检测器,输出所述交通灯信息。
  5. 根据权利要求4所述的方法,其特征在于,所述将所述待检测图像的第一区域输入所述灯头检测器,输出所述交通灯信息,包括:
    将所述待检测图像的第一区域输入所述灯头检测器,输出第一信息,其中所述第一信息包括:交通灯检测框长度信息、亮着的交通灯检测框长度信息和亮着的交通灯检测框个数信息;
    根据所述第一信息,输出所述交通灯信息。
  6. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    所述神经网络包括灯头个数分类器、交通灯亮灭分类器和灯头检测器;
    将所述待检测图像的第一区域输入交通灯亮灭分类器,输出所述待检测图像的第一区域的交通灯亮灭信息;
    在所述待检测图像的第一区域存在亮着的交通灯时,将所述待检测图像的第一区域输入所述灯头个数分类器,输出每个交通灯组中的交通灯头的第一个数信息和第一置信度;
    将所述待检测图像的第一区域输入所述灯头检测器,输出每个交通灯组中的交通灯头的第二个数信息和第二置信度;
    根据所述第一置信度和所述第二置信度,将所述第一个数信息或所述第二个数信息中的一个确定为所述每个交通灯组中的交通灯头的个数信息。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述交通灯信息还包括:交通灯灯头亮灭信息、交通灯灯头颜色信息、交通灯灯头形状信息和交通灯灯头类别信息。
  8. 根据权利要求1至7中任一项所述的方法，其特征在于，所述待检测图像为车载摄像头拍摄的图像或车载摄像头拍摄的图像中的感兴趣区域。
  9. 一种交通灯检测装置,其特征在于,包括:
    获取单元,用于采用神经网络获取待检测图像的第一区域,所述第一区域包括N个交通灯组,所述N为正整数,所述交通灯组包括至少一个交通灯头;
    处理单元,用于采用所述神经网络根据所述第一区域,获取交通灯信息,其中,所述交通灯信息包括每个交通灯组中的交通灯头的个数信息。
  10. 根据权利要求9所述的装置,其特征在于,
    所述神经网络包括分类器;
    所述处理单元具体用于:
    将所述待检测图像的第一区域输入所述分类器,输出所述交通灯信息。
  11. 根据权利要求10所述的装置,其特征在于,
    所述神经网络包括灯头个数分类器;
    所述处理单元具体用于:
    将所述待检测图像的第一区域输入灯头个数分类器,输出所述交通灯信息。
  12. 根据权利要求10或11所述的装置,其特征在于,
    所述神经网络包括交通灯亮灭分类器和灯头检测器;
    所述处理单元用于:
    将所述待检测图像的第一区域输入交通灯亮灭分类器,输出所述待检测图像的第一区域的交通灯亮灭信息;
    在所述待检测图像的第一区域存在亮着的交通灯时,将所述待检测图像的第一区域输入所述灯头检测器,输出所述交通灯信息。
  13. 根据权利要求12所述的装置,其特征在于,所述处理单元用于将所述待检测图像的第一区域输入所述灯头检测器,输出所述交通灯信息,包括:
    所述处理单元具体用于:
    将所述待检测图像的第一区域输入所述灯头检测器,输出第一信息,其中所述第一信息包括:交通灯检测框长度信息、亮着的交通灯检测框长度信息和亮着的交通灯检测框个数信息;
    根据所述第一信息,输出所述交通灯信息。
  14. 根据权利要求9或10所述的装置,其特征在于,
    所述神经网络包括灯头个数分类器、交通灯亮灭分类器和灯头检测器;
    所述处理单元用于:
    将所述待检测图像的第一区域输入交通灯亮灭分类器,输出所述待检测图像的第一区域的交通灯亮灭信息;
    在所述待检测图像的第一区域存在亮着的交通灯时,将所述待检测图像的第一区域输入所述灯头个数分类器,输出每个交通灯组中的交通灯头的第一个数信息和第一置信度;
    将所述待检测图像的第一区域输入所述灯头检测器,输出每个交通灯组中的交通灯头的第二个数信息和第二置信度;
    根据所述第一置信度和所述第二置信度,将所述第一个数信息或所述第二个数信息中的一个确定为所述每个交通灯组中的交通灯头的个数信息。
  15. 根据权利要求9至14中任一项所述的装置,其特征在于,所述交通灯信息还包括:交通灯灯头亮灭信息、交通灯灯头颜色信息、交通灯灯头形状信息和交通灯灯头类别信息。
  16. 根据权利要求9至15中任一项所述的装置,其特征在于,所述待检测图像为车载摄像头拍摄的图像或车载摄像头拍摄的图像中的感兴趣区域。
  17. 一种计算机可读存储介质,其特征在于,其上存储有指令,所述指令在被计算机执行时使得所述计算机执行如权利要求1至8中任一项所述的方法。
  18. 一种计算机程序产品,其特征在于,包括指令,所述指令在被计算机执行时使得所述计算机执行如权利要求1至8中任一项所述方法。
  19. 一种交通灯检测的装置,包括:处理器,所述处理器与存储器耦合,所述存储器用于存储程序或指令,当所述程序或指令被所述处理器执行时,使得所述装置实现如权利要求1至8中任一项所述的方法。
  20. 一种车辆,所述车辆包括如权利要求9-16或19中任一项所述的交通灯检测装置。
PCT/CN2021/076430 2021-02-10 2021-02-10 交通灯检测的方法和装置 WO2022170540A1 (zh)


Publications (1)

Publication Number Publication Date
WO2022170540A1 true WO2022170540A1 (zh) 2022-08-18

Family

ID=76275632

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076430 WO2022170540A1 (zh) 2021-02-10 2021-02-10 交通灯检测的方法和装置

Country Status (2)

Country Link
CN (1) CN112970030A (zh)
WO (1) WO2022170540A1 (zh)


Also Published As

Publication number Publication date
CN112970030A (zh) 2021-06-15

