WO2018201835A1 - Signal light state recognition method and device, vehicle-mounted control terminal, and motor vehicle - Google Patents


Info

Publication number
WO2018201835A1
Authority
WO
WIPO (PCT)
Prior art keywords
traffic
state
image
traffic signal
traffic light
Application number
PCT/CN2018/081575
Other languages
English (en)
French (fr)
Inventor
王珏
王斌
李宇明
邢腾飞
李成军
苏奎峰
陈仁
向南
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018201835A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the present application relates to the field of data processing technologies, and in particular, to a signal light state recognition method, device, vehicle control terminal, and motor vehicle.
  • Traffic signal state recognition refers to identifying the state of a traffic signal, such as a conventional red-yellow-green traffic light.
  • the traffic signal state may be the light and dark (on/off) states of the red, green, and yellow lights in the traffic light.
  • traffic signal state recognition can provide a basis for judging the traffic status of an intersection and determining the driving behavior of a motor vehicle, and has deeper applications in automatic driving and navigation (driving prompts); it is of great significance especially for improving the reliability of self-driving motor vehicles on the road.
  • the status recognition of traffic lights mainly relies on computer vision technology.
  • however, due to complex lighting effects such as front light, backlight, haze, night, and leaf occlusion, traffic light state recognition based on computer vision technology often suffers from low recognition accuracy; therefore, how to improve the accuracy of traffic signal state recognition has been an urgent problem for those skilled in the art.
  • the embodiment of the present application provides a signal light state recognition method, device, vehicle control terminal, and a motor vehicle to improve the accuracy of traffic signal state recognition.
  • a signal light state recognition method includes: acquiring an image to be recognized collected by a target image acquisition device; identifying a traffic signal image region in the image to be recognized; extracting a Convolutional Neural Network (CNN) feature of the traffic signal image region; determining a first traffic light state represented by the traffic signal image region according to the CNN feature; and determining a traffic light state recognition result according to the first traffic light state.
  • the embodiment of the present application further provides a signal state recognition device, including:
  • An image acquisition module is configured to acquire an image to be recognized collected by the target image collection device
  • An area identification module configured to identify a traffic signal image area in the image to be identified
  • a feature extraction module configured to extract a convolutional neural network CNN feature of the traffic signal image area
  • a first light state determining module configured to determine a first traffic light state represented by the traffic light image area according to the CNN feature
  • the recognition result determining module is configured to determine a traffic light state recognition result according to the first traffic light state.
  • the embodiment of the present application further provides an in-vehicle control terminal, including: a memory and a processor;
  • the memory stores a program
  • the processor calls the program stored in the memory, and the program is used to perform the signal light state recognition method described above.
  • the embodiment of the present application further provides a motor vehicle, including: at least one image acquisition device, and an in-vehicle control terminal;
  • the at least one image acquisition device is configured to collect an image to be recognized in front of the vehicle body
  • the in-vehicle control terminal is configured to: acquire an image to be recognized collected by the target image capturing device, where the target image capturing device is included in the at least one image capturing device; identify a traffic signal image region in the image to be recognized; extract a Convolutional Neural Network (CNN) feature of the traffic signal image region; determine a first traffic light state represented by the traffic signal image region according to the CNN feature; and determine a traffic light state recognition result according to the first traffic light state.
  • the embodiment of the present application further provides a storage medium, wherein the storage medium stores a computer program, wherein the computer program is configured to execute the traffic signal state recognition method described in any one of the above.
  • in the embodiment of the present application, the vehicle control terminal may acquire the image to be recognized collected by the target image acquisition device, identify the traffic signal image region in the image to be recognized, extract the CNN feature of the traffic signal image region, determine a first traffic light state represented by the traffic signal image region according to the CNN feature, and determine a traffic light state recognition result according to the first traffic light state. Because CNN features are extracted based on training with massive samples, they can resist multiple effects such as scale transformation, color transformation, and light transformation. Therefore, using CNN features to extract the image features of the traffic signal image region, and determining the traffic signal state represented by that region based on the CNN features, can reduce the influence of complex environmental light changes on the accuracy of traffic signal state recognition, and improve that accuracy.
  • FIG. 1 is a schematic diagram of a motor vehicle according to an embodiment of the present application;
  • FIG. 2 is a signaling flowchart of a signal light state recognition method according to an embodiment of the present application;
  • FIG. 3 is a flowchart of a signal light state recognition method according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the training of the Softmax classifier;
  • FIG. 6 is a flowchart of a method for determining a signal light state recognition result;
  • FIG. 7 is a schematic diagram of the setting of the light state conversion logic of the traffic signal at an intersection;
  • FIG. 8 is an example diagram of matching of light state conversion logic;
  • FIG. 9 is a flowchart of another method for determining a signal light state recognition result;
  • FIG. 10 is an example diagram of light state matching based on a sliding time window;
  • FIG. 11 is a flowchart of still another method for determining a signal light state recognition result;
  • FIG. 12 is a flowchart of a method for predicting the traffic state of a front intersection according to an embodiment of the present application;
  • FIG. 13 is a schematic diagram of an application of the traffic signal state recognition method;
  • FIG. 14 is a structural block diagram of a signal light state recognition apparatus according to an embodiment of the present application;
  • FIG. 15 is another structural block diagram of a signal light state recognition apparatus according to an embodiment of the present application;
  • FIG. 16 is still another structural block diagram of a signal light state recognition apparatus according to an embodiment of the present application;
  • FIG. 17 is a block diagram showing the hardware structure of the in-vehicle control terminal.
  • the signal light state recognition method provided by the embodiment of the present application can be applied to a motor vehicle: the vehicle control terminal of the motor vehicle can implement the method by loading a corresponding program, so that when the motor vehicle runs on the road, the state of the intersection traffic signal can be accurately identified.
  • the energy form of the motor vehicle includes but is not limited to electric, fuel, and the like.
  • the motor vehicle 10 may include at least one image capturing device 11 and an in-vehicle control terminal 12; it should be noted that the number of image capturing devices shown in FIG. 1 is three, but in actual use the number of image capturing devices may be one or more, depending on actual usage requirements.
  • the image capturing device 11 can be implemented by a camera; of course, other devices having an image capturing function may also be used;
  • the image capturing device 11 may be disposed on the top of the vehicle body, with its capturing angle of view corresponding to the front of the vehicle body (optionally, the angle of view of the image capturing device may face the front of the vehicle body), to collect the image to be recognized in front of the vehicle body.
  • the image to be recognized may be considered as an image on which traffic signal state recognition is to be performed in the embodiment of the present application; the image collected by the image capturing device may cover the road ahead of the vehicle body, and may include a vehicle in front, a traffic signal, and the like;
  • if a plurality of image capturing devices are used, they may be horizontally arranged on the top of the vehicle body at predetermined intervals, with the capturing angle of view of each image capturing device corresponding to the front of the vehicle body (e.g., facing the front of the vehicle body);
  • the embodiment of the present application may select image capturing devices with different focal lengths, that is, the focal lengths of the multiple image capturing devices are different; preferably, the focal lengths of the multiple image capturing devices correspond to different focal length levels;
  • the image capturing device used in the embodiment of the present application may have a focal length adjustment capability, e.g., the focal length of an image capturing device can be adjusted within the focal length range corresponding to its focal length level;
  • the above-mentioned image collecting device setting manner is only optional.
  • the embodiment of the present application does not exclude other setting manners of the image capturing device, as long as the image capturing device can capture the image covering the traffic signal during the running of the motor vehicle.
  • for example, if the image capturing device is disposed at the front windshield of the vehicle body (such as the top of the front windshield) with a certain inclination angle, the collected image to be recognized can cover the traffic signal light disposed at the front intersection;
  • the image capture device may also be disposed at the junction of the front windshield and the roof; the manner of setting the image capture device described above may be applied to the case of using one or more image capture devices;
  • the image to be recognized collected by the image acquisition device 11 may or may not contain a traffic signal, which can be detected and determined by the vehicle control terminal; if the vehicle is far from the intersection, the image acquisition device may not be able to collect an image containing the traffic signal.
  • the vehicle control terminal 12 can be a control terminal with data processing capability built in the motor vehicle, such as a driving computer built in the motor vehicle.
  • the vehicle control terminal 12 and the image capturing device 11 can be connected through a vehicle communication bus (such as a CAN bus), or connected by wireless communication such as Bluetooth or WiFi (Wireless Fidelity);
  • the in-vehicle control terminal 12 may also be a user equipment (such as a user's mobile phone) placed in the motor vehicle; the user equipment may not be connected to an external communication interface of the motor vehicle (such as an external USB interface), but may instead connect to the image capturing device 11 through a wireless communication mode such as Bluetooth or WiFi (Wireless Fidelity); optionally, the user device can also be connected to an external USB interface of the motor vehicle to access the communication bus of the motor vehicle, and interact with the image acquisition device 11 through the communication bus.
  • the in-vehicle control terminal 12 can communicate with the image capturing device 11 to acquire the image to be recognized collected by the image capturing device 11, locate the traffic light image region in the image to be recognized, and extract the Convolutional Neural Network (CNN) feature of the traffic signal image region.
  • FIG. 2 shows a signaling flowchart of a traffic signal state recognition method provided by an embodiment of the present application, which may be based on a situation in which multiple image acquisition devices (at least two) are used.
  • the process may include:
  • Step S10 The vehicle control terminal locates the current location.
  • a locator can be set in the vehicle control terminal to locate the current position; positioning can be implemented using RTK (Real Time Kinematic) positioning, satellite (GPS, BeiDou, etc.) positioning, base station positioning, and the like; of course, the form of the locator can be adjusted according to the positioning method used;
  • the current location of the vehicle-mounted control terminal can be considered to be the current location of the vehicle.
  • Step S11 The in-vehicle control terminal sends a query request to the map server according to the current location, where the query request is used to request to query the distance of the current position from the nearest stop line.
  • the nearest stop line in front is the stop line of the nearest intersection in the direction of travel of the motor vehicle.
  • the stop line is used to indicate that, when the traffic signal at the front intersection is red, the motor vehicle must wait behind the stop line until the traffic signal turns green, and may not cross the line while the light is red;
  • after the vehicle control terminal locates the current location, it can send a query request to the map server; after receiving the query request, the map server can query, according to the current location, the stop line position of the nearest intersection in the driving direction of the motor vehicle, and determine the distance between the stop line position and the current position as the distance of the current position of the motor vehicle from the nearest stop line.
  • the vehicle control terminal can locate the current position in real time and send a query request in real time, so as to determine the distance of the current position from the nearest stop line in real time; optionally, the vehicle control terminal can also locate the current position periodically and send the query request at the corresponding timing.
  • for example, the positioning and query request sending frequency can be 10 Hz; obviously, this value is only optional and can be set according to the timed positioning and query requirements.
  • Step S12 The map server feeds back the queried distance to the in-vehicle control terminal.
  • it should be noted that steps S11 to S12 show an optional form in which the in-vehicle control terminal determines the distance of the current position from the nearest stop line; the in-vehicle control terminal is not limited to querying the map server as shown in steps S11 to S12.
  • alternatively, map data may be preset locally, the position of the nearest stop line in the preset map data may be queried using the located current position, and the distance of the current position from the nearest stop line may be determined from the distance between the stop line position and the current position; that is, the distance of the current position from the nearest stop line in front is queried locally against the preset map data.
  • optionally, the in-vehicle control terminal may also query the map server for the location of the nearest stop line according to the current location, so that the in-vehicle control terminal may determine the distance of the current position from the front stop line according to the current position and the position of the nearest stop line.
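The text does not specify how the distance between the current position and the stop line position is computed. As an illustrative sketch only, assuming both positions are given as WGS-84 latitude/longitude pairs (the function names here are hypothetical, not from the patent), the great-circle distance could be obtained with the standard haversine formula:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_to_stop_line(current_pos, stop_line_pos):
    """Distance (m) from the vehicle's current position to the nearest stop line,
    where each position is a (latitude, longitude) tuple."""
    return haversine_m(*current_pos, *stop_line_pos)
```

Either the map server or the in-vehicle control terminal (when using local preset map data) could perform this computation.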
  • Step S13 The in-vehicle control terminal selects the target image capturing device from the plurality of preset image capturing devices according to the distance.
  • the preset plurality of image capturing devices may respectively correspond to different focal length levels; that is, one image capturing device may correspond to one focal length level, one focal length level may correspond to one focal length range, and the higher the focal length level, the larger the focal length values in the corresponding range and the farther the corresponding clear visual distance range; each image capturing device may have a focal length adjusting capability, and may adjust its focal length within the focal length range of its corresponding focal length level;
  • the focal length ranges of consecutive focal length levels may be continuous; for example, the focal length range of the first focal length level and the focal length range of the second focal length level may be continuous; the clear visual distance ranges of consecutive focal length levels may also be continuous,
  • e.g., the clear visual distance range of the first focal length level and the clear visual distance range of the second focal length level may be continuous.
  • optionally, a distance range may be preset for each image acquisition device; the vehicle control terminal may determine the preset distance range into which the distance of the current position from the nearest stop line falls, and select the image acquisition device corresponding to that distance range as the target image acquisition device.
  • as the motor vehicle travels, the distance of the current position from the nearest stop line changes dynamically, and the target image acquisition device selected from the plurality of image acquisition devices is adjusted accordingly; dynamically adjusting the selected target image acquisition device enhances the sharpness of the image to be recognized used by the recognition process.
  • further, the embodiment of the present application may also limit the distance between the current position and the nearest stop line, and perform step S13 only when the distance is less than a predetermined distance limit; if the distance of the current position from the nearest stop line exceeds the clear visual distance at which the image capturing devices can collect a sharp image, the traffic signal in the image collected at this time is relatively blurred, which leads to a decrease in the accuracy of the subsequent state recognition of the traffic signal; therefore, in the embodiment of the present application, the step of selecting the target image acquisition device from the preset plurality of image acquisition devices according to the distance (i.e., step S13) may be performed only when the distance is less than the predetermined distance limit;
  • optionally, the predetermined distance limit may be set according to the highest image acquisition capability of the plurality of image acquisition devices (e.g., according to the maximum clear visual distance of the plurality of image acquisition devices), such as 150 meters (this value is only an optional example).
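The distance-based selection of step S13, together with the predetermined distance limit, can be sketched as follows. The three distance ranges and the 150 m limit below are illustrative assumptions (the text gives 150 m only as an optional example, and does not specify the per-camera ranges):

```python
# Hypothetical clear-visual-distance ranges (meters) for three cameras with
# increasing focal-length levels; the concrete ranges are example values only.
CAMERA_RANGES = [
    (0.0, 50.0),     # camera 0: short focal length, near range
    (50.0, 100.0),   # camera 1: medium focal length
    (100.0, 150.0),  # camera 2: long focal length, far range
]
DISTANCE_LIMIT_M = 150.0  # predetermined distance limit (example value)

def select_target_camera(distance_m):
    """Return the index of the target image acquisition device for the given
    distance to the nearest stop line, or None when the distance is at or
    beyond the predetermined limit (recognition is then skipped)."""
    if distance_m >= DISTANCE_LIMIT_M:
        return None
    for idx, (lo, hi) in enumerate(CAMERA_RANGES):
        if lo <= distance_m < hi:
            return idx
    return None
```

As the vehicle approaches the stop line, repeated calls with the shrinking distance switch the selection from the long-focal-length camera to progressively shorter ones.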
  • Step S14 The in-vehicle control terminal acquires an image to be recognized collected by the target image collection device.
  • optionally, the plurality of image capturing devices can be in a state of acquiring images in real time, and the in-vehicle control terminal can acquire the image to be recognized currently collected by the target image capturing device after selecting it;
  • optionally, the in-vehicle control terminal may also control the plurality of image acquisition devices to perform image acquisition only when the distance of the current position from the nearest stop line is less than the predetermined distance limit, and then acquire the image to be recognized currently collected by the target image capturing device after selecting it from the plurality of image acquisition devices.
  • it should be noted that steps S10 to S14 are only an optional manner in which the in-vehicle control terminal acquires the image to be recognized collected by the target image capturing device; the embodiment of the present application may also select the target image capturing device from the plurality of image capturing devices according to a preset selection order, and acquire the image to be recognized collected by it, without necessarily selecting the target image capturing device according to the distance of the current position from the front stop line as in steps S10 to S14; selecting the target image capturing device according to that distance is only an optional implementation.
  • Step S15 The in-vehicle control terminal identifies the traffic signal image area in the image to be identified.
  • the vehicle control terminal may locate a location of the traffic signal in the image to be identified, and identify a traffic signal image region from the image to be identified according to the located location.
  • the embodiment of the present application may pre-train the traffic signal recognition model, and locate the traffic signal image area from the image to be identified according to the traffic signal recognition model;
  • the traffic signal recognition model may be obtained by training positive samples and negative samples according to a machine learning method (such as a deep convolutional neural network method); a positive sample may be an image with a traffic light marked from a street view image (e.g., traffic signal images of various traffic signal states marked in street view images, such as traffic light images with the red, yellow, or green light on), and a negative sample may be a street view background image marked from a street view image (a street view background image does not contain a traffic light).
  • the image to be identified may or may not contain a traffic signal (i.e., the traffic signal recognition model may identify a traffic signal image region from the image to be identified, or may fail to identify one);
  • the step S15 specifically refers to the case where the traffic signal is included in the image to be identified.
  • Step S16 The in-vehicle control terminal extracts CNN features of the traffic signal image area.
  • the CNN feature can be considered as one of the image features.
  • the embodiment of the present application uses the CNN feature to extract the image features of the traffic signal image region, so that the extracted image features can resist the impact on the image of scale transformation, color transformation, light transformation, and the like;
  • compared with HSV features (where H indicates Hue, S indicates Saturation, and V indicates Value, i.e., brightness), which mainly represent the chromaticity information of the image, the image features represented by CNN features are less susceptible to environmental influences such as illumination and occlusion, so that the subsequent traffic signal state recognition result can have higher accuracy.
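The patent does not disclose a concrete network architecture. Purely to illustrate what convolution, activation, and pooling based feature extraction means in step S16, here is a minimal NumPy sketch of a single-layer CNN-style feature over a single-channel image region (a real system would use a deep trained network over color images):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation of one image channel with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_feature(region, kernels):
    """Tiny CNN-style feature: convolution -> ReLU -> global average pooling,
    producing one feature value per kernel (kernels stand in for trained filters)."""
    feats = []
    for k in kernels:
        act = np.maximum(conv2d_valid(region, k), 0.0)  # ReLU activation
        feats.append(act.mean())                        # global average pooling
    return np.array(feats)
```

The resulting vector is the kind of input the traffic signal state classifier of step S17 would consume.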
  • Step S17 The in-vehicle control terminal determines, according to the CNN feature, a first traffic light state represented by the traffic signal image area.
  • the traffic signal state represented by the traffic signal image region refers to the light and dark state of each of the red, green, and yellow lights in that region (generally, only one of the red, green, and yellow lights of a traffic light is in the bright state at a time, while the other signal lights are dark; the traffic light image region may also contain a light group composed of a plurality of traffic lights);
  • optionally, traffic light images indicating each traffic signal state may be classified in advance (a plurality of classes of traffic light images, each corresponding to a traffic signal state), CNN feature extraction may be performed on the traffic light images of each traffic signal state, and a traffic signal state classification model may be trained from the CNN features of the traffic light images of each state; the traffic signal state classification model can represent the CNN feature corresponding to the traffic light image of each traffic signal state;
  • if the traffic signal is a traffic light group integrating multiple traffic lights (for example, the traffic light at an intersection is generally composed of three signal lights, indicating going straight, turning left, and turning right respectively), then the states of the traffic lights in the signal light group need to be classified jointly (if there are 3 traffic lights in the light group, each different combination of the states of the three traffic lights should be classified as one class), and the CNN features of the various classes are used to train the traffic signal state classification model;
  • the embodiment of the present application can identify the traffic signal state corresponding to the CNN feature of the traffic signal image area, and obtain the traffic signal state represented by the traffic signal image area.
  • optionally, the traffic signal state classification model may be implemented by a Softmax classifier; by training the Softmax classifier with the CNN features, the traffic signal images can be classified into the CNN feature classes corresponding to the various traffic signal states.
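A minimal sketch of the Softmax classification step over a CNN feature vector, assuming trained parameters are available (the weights and state names used in the example are placeholders, not the patent's trained model):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify_state(feature, weights, biases, states):
    """Assign a traffic-signal state to a CNN feature vector using a trained
    Softmax (multinomial logistic) classifier.
    weights has shape (n_states, n_features); biases has shape (n_states,)."""
    probs = softmax(weights @ feature + biases)
    return states[int(np.argmax(probs))], probs
```

In training, `weights` and `biases` would be fitted (e.g., by cross-entropy minimization) on the CNN features of the pre-classified traffic light images described above.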
  • Step S18 The in-vehicle control terminal determines a traffic signal state recognition result according to the first traffic signal state.
  • the first traffic light state specifically refers to the traffic light state currently represented by the traffic light image region, as determined according to the CNN feature; the first traffic light state may be any type of traffic light state.
  • for a single traffic light, the first traffic signal state may be the corresponding state of that single traffic light; for the case of a signal light group composed of multiple traffic lights (such as the case of an intersection), the first traffic light state may be the light state of each signal light in the light group.
  • the determination of the lamp state of each signal lamp in the signal group is also based on the CNN feature, and the principle is the same.
  • optionally, the embodiment of the present application may directly determine the first traffic signal state determined in step S17 as the traffic signal state recognition result; or it may verify the first traffic signal state determined in step S17, and use it as the traffic signal state recognition result only after the verification passes; of course, if the verification of the first traffic light state does not pass, it may be determined that the first traffic light state is not a valid recognized traffic signal state, and the traffic signal state recognition result may be empty or a recognition failure;
  • optionally, the verification may be performed by determining, according to the plurality of traffic signal states continuously determined in step S17, whether the change logic of the traffic signal state is correct, or by determining whether the first traffic signal state determined in step S17 remains stable within a set time, and so on.
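The concrete light state conversion logic of FIG. 7 is not reproduced in the text. Assuming the common red to green to yellow to red cycle, with a light also allowed to keep its state between consecutive recognitions, the change-logic verification described above might look like:

```python
# Assumed light-state conversion logic (illustrative; the actual logic of
# FIG. 7 may differ per intersection): a light may keep its state or follow
# the common cycle red -> green -> yellow -> red.
VALID_NEXT = {
    "red": {"red", "green"},
    "green": {"green", "yellow"},
    "yellow": {"yellow", "red"},
}

def verify_state_sequence(states):
    """Return True when every consecutive pair of recognized states obeys the
    conversion logic; a violation suggests a misrecognition, so the latest
    first traffic light state would fail verification."""
    return all(b in VALID_NEXT[a] for a, b in zip(states, states[1:]))
```

A recognition such as green followed directly by red would fail this check and the recognition result could then be reported as empty or as a recognition failure.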
  • the process shown in FIG. 2 is implemented by using multiple image collection devices.
  • the embodiment of the present application can also implement the traffic signal state recognition method provided by the embodiment of the present application by using a single image acquisition device.
  • as the motor vehicle drives, the target image acquisition device selected from the plurality of image acquisition devices can be adjusted to ensure that the image currently collected from it has high definition; the scheme of selecting among multiple image acquisition devices is therefore preferred, but this does not preclude the possibility of using a single image acquisition device when the embodiment of the present application identifies the traffic signal state based on the CNN feature.
  • FIG. 3 is a flowchart of a traffic signal state recognition method provided by an embodiment of the present application.
  • the method may be applied to an in-vehicle control terminal.
  • the method may include:
  • Step S20 Acquire an image to be recognized collected by the target image collection device.
  • the target image capturing device may be an image collecting device separately provided in the embodiment of the present application.
  • the target image capturing device in the embodiment of the present application can be in an image capturing state in real time, so that the in-vehicle control terminal can acquire the image to be recognized collected by the target image capturing device in real time;
  • optionally, the in-vehicle control terminal may also perform step S20 only when the distance of the current position from the nearest stop line is less than the predetermined distance limit, thereby acquiring the image to be recognized collected by the target image acquisition device; optionally, the vehicle control terminal may trigger the target image acquisition device to start acquiring images when that distance is less than the predetermined distance limit; alternatively, the target image acquisition device may be in the image acquisition state in real time, so that the in-vehicle control terminal acquires the image to be recognized currently collected by it once the distance of the current position from the nearest stop line is less than the predetermined distance limit.
  • the determination of the distance of the current position from the nearest stop line can be implemented in the corresponding process as shown in FIG. 2, that is, the map server can be queried, or the local preset map data can be queried.
  • Step S21 Identify a traffic light image area in the image to be identified.
  • Step S22 Extract CNN features of the traffic signal image area.
  • Step S23 Determine, according to the CNN feature, a first traffic light state represented by the traffic light image area.
  • Step S24 Determine a traffic signal state recognition result according to the first traffic signal state.
• the in-vehicle control terminal may acquire the image to be recognized collected by the target image acquisition device, identify the traffic signal image region in the image to be recognized, extract the CNN features of the traffic signal image region, determine a first traffic light state represented by that region according to the CNN features, and determine a traffic signal state recognition result according to the first traffic light state.
• because CNN features are learned from massive sample training, they are robust to effects such as scale changes, color changes and illumination changes; therefore, using CNN features to describe the traffic signal image region and determining the traffic signal state represented by that region from the CNN features can reduce the influence of complex lighting changes in the environment on the accuracy of traffic signal state recognition, and improve that accuracy.
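The flow of steps S20 to S24 can be sketched as below. The real system uses a trained CNN and a Softmax classifier; here both are replaced by trivial stand-ins (a mean-colour "feature" and a nearest-prototype lookup), so every function name and prototype value is illustrative only, not part of the patent.

```python
# Illustrative sketch of steps S20-S24: crop -> feature -> first traffic
# light state. The "CNN feature" is replaced by the region's mean RGB, and
# the "Softmax classifier" by a nearest-prototype lookup; all values here
# are hypothetical stand-ins.

def extract_feature(region):
    # Stand-in for step S22 (CNN feature extraction): mean R, G, B.
    n = len(region)
    r = sum(p[0] for p in region) / n
    g = sum(p[1] for p in region) / n
    b = sum(p[2] for p in region) / n
    return (r, g, b)

# Illustrative colour prototypes for the three lamp states.
PROTOTYPES = {"red": (200.0, 30.0, 30.0),
              "yellow": (200.0, 180.0, 30.0),
              "green": (30.0, 180.0, 60.0)}

def classify_state(feature):
    # Stand-in for step S23 (classification): nearest prototype.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda s: dist(feature, PROTOTYPES[s]))

def recognize(region_pixels):
    # region_pixels: list of (R, G, B) tuples cropped in step S21.
    return classify_state(extract_feature(region_pixels))
```

The same pipeline shape applies when the stand-ins are swapped for a real detector, CNN backbone and Softmax head.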
• the embodiment of the present application preferably uses the multiple image acquisition devices shown in FIG. 2 to identify the traffic signal state, selecting a target image acquisition device among the plurality of image acquisition devices, as the motor vehicle drives, according to the distance of the current position from the nearest stop line;
• the embodiment of the present application can set a distance range corresponding to each image acquisition device, so that the target image acquisition device is continuously adjusted, according to the preset distance range corresponding to each image acquisition device, as the distance of the current position from the front stop line changes;
• the distance range corresponding to each image acquisition device may be determined according to the focal length and clear visible distance of each image acquisition device, and the number of pixels occupied by the traffic signal lamps in the images collected by each image acquisition device at each distance from the stop line; this ensures that, within any set distance range, the image collected by the corresponding image acquisition device is the clearest and the number of pixels of the traffic signal in the image is sufficient (usually the short side is larger than 30 pixels; 30 pixels is only an optional example, and the pixel threshold can be set as needed), guaranteeing the stability and accuracy of the subsequent traffic signal state recognition;
• the multiple image acquisition devices differ in focal length and viewing angle range (that is, they belong to different focal length levels); the embodiment of the present application can determine the image acquisition device corresponding to each distance range according to the lens focal length, the clear visible distance, and the number of pixels occupied by the traffic signal in the image, thereby ensuring that, within each set distance range, the image collected by the corresponding image acquisition device is the clearest and the number of pixels of the traffic light group in the image is greater than the pixel threshold (for example, the short side is usually greater than 30 pixels);
  • the multi-image acquisition device scheme can greatly improve the accuracy of the traffic signal state recognition, and also compensates for the limited range of the single-focus image acquisition device and the lack of clear visual distance.
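The per-device distance ranges described above reduce, at run time, to a simple lookup from the current distance to the stop line. A minimal sketch, using the example ranges given later in the text (4–30 m, 30–80 m, 80–160 m) and illustrative device names:

```python
# Sketch of target-device selection from preset distance ranges. The range
# values and camera names are the illustrative examples from the text, not
# prescribed by the patent.

CAMERA_RANGES = [
    ("camera_1_near", 4.0, 30.0),    # near focus level
    ("camera_2_mid", 30.0, 80.0),    # medium focus level
    ("camera_3_far", 80.0, 160.0),   # far focus level
]

def select_target_camera(distance_to_stop_line_m):
    # Pick the device whose clear-visibility range covers the distance.
    for name, lo, hi in CAMERA_RANGES:
        if lo <= distance_to_stop_line_m < hi:
            return name
    return None  # outside every clear-visibility range
```

As the vehicle approaches the stop line, repeated calls with the shrinking distance hand off from the far-focus to the medium- and near-focus devices.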
• the embodiment of the present application may pre-train a traffic light recognition model; FIG. 4 shows a training schematic of the traffic light recognition model.
• in the embodiment of the present application, a large number of street view images can be collected and included in a street view image database.
• images containing traffic lights are determined from the street view image database and marked as positive samples, and street view background images are determined from the street view image database and marked as negative samples;
• the positive samples may be images with traffic lights marked from the street view image database, such as traffic light images in which the red, yellow, or green light is lit (for the case of traffic light groups, the light states of the traffic lights in the group may also be combined, e.g. one traffic light green while the other traffic lights are red, to form a traffic light image for each lamp state); the negative samples may be street view background images (containing no traffic signal) marked from the street view image database; the positive and negative samples are then trained by machine learning methods such as a deep convolutional neural network to obtain the traffic signal recognition model;
• the street view image database can record a large number of street view images (the order of magnitude can be set as needed, such as 100,000), and the street view images in the database can cover street views of multiple cities;
• the positive and negative samples may be produced by manual labeling; the positive and negative samples of the street view images recorded in the street view image database may be marked in advance when the database is established.
  • the embodiment of the present application may use a Softmax classifier to identify the state of the traffic signal represented by the extracted CNN feature;
• the training process of the Softmax classifier can be as shown in FIG. 5, taking a traffic light with a red light, a yellow light and a green light as an example;
• the traffic light images can be classified by lamp state: the red light state corresponds to multiple traffic signal images, the green light state corresponds to multiple traffic signal images, and the yellow light state corresponds to multiple traffic signal images;
• the CNN features of the traffic light images with the red light lit, with the green light lit, and with the yellow light lit can be extracted separately; obviously, for the situation of a traffic signal light group, the states of the traffic lights in the lamp group need to be combined, and any distinct lamp state formed by such a combination can be treated as one traffic signal state of the lamp group, so that the traffic signal images are classified by traffic signal state;
• after cascading the CNN features of the traffic signal images of each traffic signal state, the Softmax classifier (an optional form of the traffic signal state classification model) is trained.
  • the determined traffic signal state can reduce the influence of the complex light change on the accuracy of the traffic signal state recognition, and improve the accuracy of the traffic signal state recognition.
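The Softmax step itself is standard: the cascaded CNN features are mapped to per-state scores and normalised into probabilities. A minimal sketch of the softmax arithmetic (the scores would come from a trained linear layer; the values below are placeholders):

```python
# The softmax normalisation used by the traffic signal state classifier.
# Input scores are assumed to come from a trained layer over the cascaded
# CNN features; only the arithmetic here is definitive.
import math

def softmax(scores):
    m = max(scores)                       # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The predicted traffic light state is the class (e.g. red / yellow / green, or a lamp-group combination) with the highest probability.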
• in the embodiment of the present application, the first traffic light state determined based on the CNN feature may be verified, and the first traffic light state is used as the traffic light state recognition result only when the verification passes, further improving the accuracy and stability of the traffic signal state recognition.
• the lamp state change logic of a traffic signal (i.e. the traffic light state change logic) is generally preset: it usually follows the cyclic order red light to green light to yellow light to red light, and the red, green and yellow lights are each maintained for a set time;
• the embodiment of the present application can therefore verify the continuously determined traffic signal states against the lamp state change logic of the traffic light at the front intersection, so as to filter out erroneous determinations of the traffic signal state caused by weather, light, viewing angle, mis-classification of traffic lights, and the like, and improve the accuracy of traffic signal state recognition;
• FIG. 6 is a flowchart of a method for determining a traffic signal state recognition result provided by an embodiment of the present application.
  • the method may be applied to an in-vehicle control terminal. Referring to FIG. 6, the method may include:
  • Step S30 Acquire a lamp state conversion logic of a traffic signal light at a front intersection.
• the embodiment of the present application can locate the current position of the in-vehicle control terminal, query the map data to determine the nearest intersection ahead of the motor vehicle in its driving direction, and then, from the lamp state change logic of the traffic signals of each intersection recorded in a database (network database or local database), acquire the lamp state change logic of the traffic signal of the determined intersection;
• the embodiment of the present application may set an intersection mark for each intersection and associate it with the intersection's location, and, for each intersection, define a light mark for the traffic light set in each possible driving direction;
• for each intersection, the embodiment of the present application can thus maintain the correspondence between the intersection mark and the light marks of the traffic signals of each possible driving direction, and associate each light mark with its corresponding lamp state change logic;
• the in-vehicle control terminal can then determine the intersection mark from the position of the nearest intersection, determine the light mark of the traffic light set at the front intersection from the intersection mark and the driving direction of the motor vehicle, and obtain the associated lamp state change logic from the determined light mark.
• the above-described way of acquiring the lamp state change logic is only optional; the lamp state change logic of all traffic signals may also be unified.
  • Step S31 Acquire a continuously determined traffic signal state, where the continuously determined traffic signal state includes the first traffic light state.
• the traffic signal states continuously determined within a set verification distance are collected, so that after the first traffic signal state is determined, the historically determined traffic signal states within the verification distance range are combined with the first traffic signal state to obtain the continuously determined traffic signal states; optionally, the first traffic light state may be the last element of the continuously determined traffic light states;
• for example, the embodiment of the present application may collect determined traffic signal states when the distance from the front stop line is within 150 meters; after the current first traffic signal state is determined (e.g. when the current distance from the front stop line is 50 meters), the traffic signal states determined historically (i.e. from 150 meters to 50 meters from the front stop line) are combined with the currently determined first traffic light state (the state determined at 50 meters from the front stop line) to obtain the continuously determined traffic signal states; it is worth noting that the numerical values in this paragraph are intended to be illustrative only and should not be construed as limiting the scope of the application.
  • Step S32 determining whether the lamp state conversion logic of the continuously determined traffic signal state matches the lamp state conversion logic of the traffic signal of the front intersection, and if not, executing step S33, and if yes, executing step S34.
• the embodiment of the present application can determine whether the lighting sequence of the continuously determined traffic signal states matches the lighting sequence of the traffic signal of the front intersection; of course, besides comparing the simple lighting sequence, the embodiment of the present application may also compare the lamp maintenance time: for example, when a lamp state jump occurs in the continuously determined traffic signal states, it may be determined whether the maintenance time of the lamp state after the jump matches the maintenance time of that lamp state of the traffic signal of the front intersection.
  • Step S33 determining that the first traffic signal state is not the determined traffic signal state recognition result.
• if, after the first traffic signal state is combined in, the lamp state change logic of the continuously determined traffic signal states does not match the lamp state change logic of the traffic signal of the front intersection, this indicates that weather, light, viewing angle, traffic light mis-classification and the like have caused a flicker in the traffic signal recognition result; flicker refers to the situation in which the traffic signal recognition result is unstable within a short time, such as within 2 seconds (the specific time can be set according to actual demand);
• for example, if the lamp state change logic of the continuously determined traffic signal states is a green light to a yellow light and back to a green light, while the light state change logic of the traffic light of the front intersection follows the cyclic order red light to green light to yellow light to red light, the two do not match, and a flicker in the traffic signal recognition result is indicated.
  • Step S34 determining that the first traffic signal state is the determined traffic signal state recognition result.
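The consistency check of step S32 can be sketched as a walk over the continuously determined states, validating each transition against the cyclic order red → green → yellow → red described above. The state names and the assumption that repeats of the same state are allowed (a lamp stays lit for a while) are illustrative:

```python
# Sketch of step S32: validate the continuously determined states against
# the cyclic lamp order red -> green -> yellow -> red. States are assumed
# to be one of the CYCLE entries; consecutive repeats are allowed.

CYCLE = ["red", "green", "yellow"]

def follows_cycle(states):
    for prev, cur in zip(states, states[1:]):
        if cur == prev:
            continue  # same lamp still lit
        expected = CYCLE[(CYCLE.index(prev) + 1) % len(CYCLE)]
        if cur != expected:
            return False  # e.g. yellow followed directly by green
    return True
```

A `False` result corresponds to step S33 (the first traffic signal state is filtered out); `True` corresponds to step S34.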
• the embodiment of the present application may also use other verification modes, such as a sliding time window of a set length: the output of the first traffic light state is delayed, and if the traffic light states determined within the set time are all the same as the first traffic light state, the first traffic light state may be determined as the traffic light state recognition result; if the traffic signal state determined within that period changes, the delayed-output processing of the sliding time window is re-executed with the newly determined traffic signal state (i.e. the newly determined first traffic light state) after the change;
  • FIG. 9 is a flowchart of another method for determining a signal state recognition result provided by an embodiment of the present application.
  • the method may be applied to an in-vehicle control terminal. Referring to FIG. 9, the method may include:
  • Step S40 Add the first traffic signal state to a preset sliding time window, where the sliding time window corresponds to a preset time.
• the preset time may be less than the reaction time of the human body after the traffic signal state is switched; that is, the embodiment of the present application may take statistics of the human reaction time after the traffic signal light switches (such as when the red light switches to green), and use a set time smaller than that reaction time as the length of the sliding time window; for example, if the human reaction to a traffic signal state switch is generally 500 ms (milliseconds), this technical solution may adopt 300 ms as the set time; the values here are only examples, and the human reaction time may also differ according to different statistical methods.
  • Step S41 Determine whether the new traffic signal state determined in the preset time period corresponds to the first traffic signal state. If not, go to step S42, and if yes, go to step S43.
• the sliding time window slides over time and records the newly determined traffic signal states; if, within the length of time corresponding to the sliding time window, a newly determined traffic light state does not correspond to the first traffic light state, it can be determined that there is a flicker problem in the traffic signal recognition result, as shown in FIG. 10; if every newly determined traffic signal state within the time period corresponding to the sliding time window corresponds to the first traffic signal state, it can be determined that the first traffic signal state is stable and can be output.
  • Step S42 determining that the first traffic signal state is not the determined traffic signal state recognition result.
  • Step S43 determining that the first traffic signal state is the determined traffic signal state recognition result.
• the determined first traffic light state is delay-filtered by the sliding time window, for example a window of 300 ms (the value is only an example; the length of the sliding time window, i.e. the set time, can be chosen according to actual needs): the first traffic signal state is output as the traffic light state recognition result only after it has remained unchanged for 300 ms, which avoids repeated state jumps within 300 ms and smooths out flickers of the traffic light recognition result caused by smog and light effects.
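The 300 ms delay filter described above is a debounce: a candidate state is emitted only once it has persisted for the window length, and any change restarts the window. A minimal sketch (class and parameter names are illustrative; timestamps are supplied by the caller in milliseconds):

```python
# Sketch of the sliding-time-window delay filter. A newly determined state
# becomes the recognition result only after persisting for `window_ms`
# (300 ms in the text's example); a state change restarts the window.

class StateDebouncer:
    def __init__(self, window_ms=300):
        self.window_ms = window_ms
        self.pending = None   # candidate state awaiting confirmation
        self.since = None     # timestamp when the candidate first appeared

    def update(self, state, now_ms):
        if state != self.pending:
            # A different state re-executes the delayed-output processing.
            self.pending, self.since = state, now_ms
            return None
        if now_ms - self.since >= self.window_ms:
            return state      # stable: output as the recognition result
        return None           # still inside the window: suppress output
```

A brief flicker (e.g. green misread for one frame) never reaches the output, because it resets the window before the 300 ms elapse.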
• FIG. 11 is a flowchart of still another method for determining a traffic signal state recognition result provided by an embodiment of the present application.
  • the method can be applied to the in-vehicle control terminal. Referring to FIG. 11, the method may include:
• Step S50 Acquire the attribute information of the traffic signal at the nearest intersection.
• the attribute information of the traffic signal includes two types of attributes: static and dynamic;
• the static attributes refer to the shape and arrangement of the light group (horizontal or vertical) and the number of lamp bodies (commonly 1 or 3, where the three-lamp case can be regarded as one type of traffic signal group);
• the dynamic attributes refer to the color of each lamp body (for example, the colors of three vertically arranged lamp bodies being red, yellow and green) and the current status of each lamp body (lit or dark), etc.
• the embodiment of the present application can query the map data to obtain the location of the nearest intersection, and then, according to the preset correspondence between intersection locations and the attribute information of traffic signals, obtain the attribute information of the traffic signal corresponding to the location of the nearest intersection.
  • Step S51 Determine attribute information of the traffic signal represented by the traffic signal image area according to the traffic signal image area.
• the embodiment of the present application may use a technique such as graphic recognition to process the traffic signal image area, so as to determine attribute information such as the arrangement and number of the traffic lights represented by the traffic signal image area.
  • Step S52 determining whether the determined attribute information matches the attribute information of the traffic signal of the forward nearest intersection, and if not, executing step S53, and if yes, executing step S54.
• if the determined attribute information does not match the attribute information of the traffic signal at the nearest intersection, the traffic signal represented by the traffic signal image area may not be the traffic signal of the nearest intersection and needs to be filtered out; correspondingly, the corresponding first traffic signal state cannot be determined as the traffic signal state recognition result.
  • Step S53 determining that the first traffic signal state is not the determined traffic signal state recognition result.
  • Step S54 determining that the first traffic signal state is the determined traffic signal state recognition result.
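The static-attribute check of steps S50 to S52 amounts to comparing the attributes recognised from the image against the map's record for the nearest intersection. A minimal sketch with illustrative field names (the patent does not prescribe a data format):

```python
# Sketch of the attribute match in steps S50-S52. The dict keys
# "arrangement" and "lamp_count" are illustrative; the static attributes
# named in the text are the light-group arrangement and the lamp count.

def attributes_match(detected, expected):
    # Mismatch in either static attribute filters the detection (step S53).
    return (detected.get("arrangement") == expected.get("arrangement")
            and detected.get("lamp_count") == expected.get("lamp_count"))
```

Only when this check passes (step S54) is the first traffic signal state kept as the recognition result.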
• the embodiment of the present application may further perform a secondary recognition of the traffic signal image area using a chrominance-based color recognition method (e.g. recognition based on HSV features); if the result of the secondary recognition is consistent with the first traffic signal state, it may be determined that the first traffic light state is relatively stable and may be used as the traffic light state recognition result;
• optionally, the object of the color-based recognition processing is the traffic signal image area already extracted from the image to be identified: color recognition processing is performed on the traffic signal image area to obtain an identified traffic signal state, which is then matched against the first traffic signal state; if the identified traffic signal state corresponds to the first traffic signal state, the verification passes and the first traffic signal state is determined as the traffic signal state recognition result; if not, the verification does not pass.
• the above verification methods for the first traffic signal state may be used in parallel as alternatives, or several of them may be used in combination; when they are combined, the first traffic signal state is determined as the traffic signal state recognition result only if the verification results of all the combined verification modes pass.
• after the traffic signal state recognition result is determined, the traffic state of the road ahead can be predicted, improving the effect of automatic driving, navigation and the like of the motor vehicle; optionally, after determining the traffic signal state recognition result, the embodiment of the present application can combine the calibration parameters of the image acquisition device, the current position, and the three-dimensional position coordinates of the traffic signal lights at the front intersection to determine the traffic state in all directions of the front intersection (for an intersection, the front intersection may give straight-ahead, right-turn and left-turn indications, each indication corresponding to a traffic light with red, green and yellow light state changes).
  • FIG. 12 is a flowchart of a method for predicting a traffic condition of a front intersection provided by an embodiment of the present application.
  • the method may be applied to an in-vehicle control terminal. Referring to FIG. 12, the method may include:
  • Step S60 Acquire the current position, and the three-dimensional position coordinates of the traffic lights indicating the directions in the traffic signal group of the front intersection.
• the current location may be the current location of the motor vehicle, determined by the in-vehicle control terminal;
• the embodiment of the present application may define the three-dimensional position coordinates of the traffic lights corresponding to each intersection position, so that the three-dimensional position coordinates of the corresponding traffic signals are matched according to the intersection position of the front intersection; the three-dimensional position coordinates may include the three-dimensional position of the traffic light indicating each direction in the traffic signal group of the front intersection; optionally, the three-dimensional position coordinates of the traffic signal may be included in the attribute information of the traffic signal.
  • Step S61 Determine, according to the current position and the three-dimensional position coordinates, a relative position of the motor vehicle and the traffic light indicating the directions.
• from the current position and the three-dimensional position coordinates, the relative positions of the motor vehicle and the traffic lights indicating the respective directions can be determined.
  • Step S62 Determine, according to the calibration parameter of the target image capturing device, a relative position of the motor vehicle and the traffic signal, and a positional conversion relationship in the image to be identified.
• the calibration parameters of an image acquisition device refer to the calibrated internal and external parameters of the device (such as a camera); with the calibration parameters, the conversion relationship between the relative position of the motor vehicle and the traffic signal, and the position in the coordinate system of the image to be identified, can be determined.
• Step S63 According to the conversion relationship, convert the relative positions of the motor vehicle and the traffic lights indicating the respective directions into the positions of the traffic signals of the respective directions in the image to be recognized.
• the relative positions can thus be converted into the corresponding positions in the two-dimensional image plane, obtaining the positions in the currently acquired image to be recognized.
• steps S60 to S63 can be regarded as an optional implementation of obtaining the position of the traffic light indicating each direction in the image to be recognized; the embodiment of the present application does not exclude other ways of converting the three-dimensional position coordinates of the traffic lights indicating the directions of the front intersection into positions in the image to be identified.
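The conversion in steps S60 to S63 is, in the simplest case, a pinhole projection: a traffic light's 3-D position in the camera frame maps to pixel coordinates through the intrinsic calibration parameters. A minimal sketch assuming an undistorted pinhole model (the calibration values in the test are illustrative):

```python
# Sketch of steps S60-S63: project a traffic light's 3-D position (camera
# frame, z pointing forward, units in metres) into pixel coordinates using
# the camera's intrinsic calibration. Lens distortion is ignored here.

def project_to_image(point_cam, fx, fy, cx, cy):
    x, y, z = point_cam
    if z <= 0:
        return None              # behind the camera: not visible
    u = fx * x / z + cx          # horizontal pixel coordinate
    v = fy * y / z + cy          # vertical pixel coordinate
    return (u, v)
```

The extrinsic parameters (vehicle-to-camera transform) would first bring the relative position of step S61 into the camera frame; that step is omitted for brevity.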
  • Step S64 Determine, according to the position of the traffic signal light in each direction, the position of the traffic signal in each direction, and the light state of each traffic signal in the traffic signal state recognition result, determine the light state of the traffic signal indicating the direction of the front intersection.
• after obtaining the position of the traffic signal of each direction in the image to be recognized, the lamp state corresponding to each position can be matched from the traffic signal state recognition result, thereby determining the light state of the traffic signal indicating each direction of the front intersection.
  • Step S65 Determine a traffic state prediction result in each direction of the front intersection according to the state of the light of the traffic signal indicating the direction of the front intersection.
• the embodiment of the present application can predict the traffic state in each direction of the front intersection from the light states of the traffic signals indicating the directions of the front intersection;
• the embodiment of the present application can preset a traffic state template for the directions of the front intersection, and fill the template with the corresponding traffic state of each direction according to the determined light states of the traffic signals indicating the directions of the front intersection, so as to predict the traffic state in all directions of the front intersection; generally, if the light state of the traffic signal of one direction of the front intersection is green, the traffic state of that direction is that passage is allowed, and if the light state of the traffic signal of one direction is red, the traffic state of that direction is that passage is prohibited.
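Filling the traffic state template of step S65 can be sketched as a per-direction mapping from lamp colour to traffic state. Green and red follow the text; treating yellow as "prepare to stop" is an added assumption, as are the dictionary keys:

```python
# Sketch of step S65: fill a per-direction traffic state template from the
# recognised lamp states. Green -> allowed and red -> prohibited follow the
# text; the yellow mapping is an illustrative assumption.

LAMP_TO_STATE = {"green": "pass allowed",
                 "red": "pass prohibited",
                 "yellow": "prepare to stop"}

def predict_intersection(lamp_states):
    # lamp_states: direction -> lamp colour, e.g. {"straight": "green"}
    return {d: LAMP_TO_STATE.get(c, "unknown") for d, c in lamp_states.items()}
```

The resulting per-direction prediction can then be consumed by the automatic driving or navigation logic.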
• the above gives a prediction scheme for the traffic state in each direction of the front intersection derived from the determined traffic signal state recognition result; optionally, the prediction result of the traffic state of the front intersection may also be obtained directly from the determined traffic signal state recognition result, without the intermediate step of determining, from the recognition result, the lamp state of the traffic signal of each direction of the front intersection.
  • An optional application of the traffic signal state recognition method provided by the embodiment of the present application can be as shown in FIG. 13 .
• three cameras can be set, at the near focus, medium focus and far focus levels respectively:
• camera 1 is the camera at the near focus level, with its focal length in a first focal length range (for example, 5 mm to 12.5 mm (millimeters)) and its clear visible distance in a first distance range (for example, 4 m to 30 m (meters));
• camera 2 is the camera at the medium focus level, with its focal length in a second focal length range (for example, 12.5 mm to 25 mm) and its clear visible distance in a second distance range (for example, 30 m to 80 m);
• camera 3 is the camera at the far focus level, with its focal length in a third focal length range (for example, 25 mm to 50 mm) and its clear visible distance in a third distance range (for example, 80 m to 160 m);
• it is worth noting that the values referred to here are only optional examples; the example focal lengths and clear visible distances of the near, medium and far focus levels merely illustrate that the plurality of image capturing devices preset in the embodiment of the present application correspond to different focal length levels and focal length ranges.
• when the distance of the self-driving car from the front stop line is in the third distance range, camera 3 is enabled as the target image capturing device and the image to be recognized that it collects is acquired for processing; when the distance from the front stop line is in the second distance range, camera 2 is enabled as the target image capturing device and the image to be recognized that it collects is acquired for processing; when the distance from the front stop line is in the first distance range, camera 1 is enabled as the target image capturing device and the image to be recognized that it collects is acquired for processing;
• the embodiment of the present application may adopt a pre-trained traffic signal recognition model to identify the traffic signal image area in the image to be identified, extract the CNN features of the traffic signal image area, and identify the traffic signal state represented by the CNN features with the pre-trained Softmax classifier, obtaining the currently determined first traffic signal state; the possible forms of the first traffic light state here are: the light state in the case of a single traffic signal, or, in the case of a traffic signal group comprising a plurality of traffic lights, the light states of the respective traffic lights (in which case the position of each traffic signal in the image to be identified may also be determined);
  • the light state indicated by the traffic signal state recognition result may be used as a basis for predicting the traffic state of the front intersection (such as passage on green or prohibition on red);
  • where the front intersection is a multi-directional road intersection, such as a crossroads,
  • the light state of the traffic signal corresponding to each direction of the front intersection can be determined from the traffic signal state recognition result, and a traffic state prediction for each direction of the front intersection can be given;
  • whether the front intersection is a one-way intersection or a multi-directional intersection can be preset; for example, the intersection position or intersection mark of each intersection can be associated with an intersection type, and the intersection type of the front intersection can be obtained by looking up this correspondence with its intersection position or intersection mark.
  • the traffic signal state recognition method provided by the embodiment of the present application can reduce the influence of complex lighting changes on the accuracy of traffic signal state recognition and improve that accuracy; meanwhile, it can be applied to automatic driving, motor vehicle navigation and other fields, accurately predicting the traffic conditions at the intersection ahead and improving the effect of these applications.
  • the traffic signal state recognition device provided by the embodiment of the present application is described below.
  • the traffic signal state recognition device described below may be associated with the traffic signal state recognition method described above.
  • the traffic signal state recognition device described below can be considered as the program modules that implement the traffic signal state recognition method provided by the in-vehicle control terminal in the embodiment of the present application; the functions of these program modules can be implemented by a program loaded by the in-vehicle control terminal.
  • FIG. 14 is a structural block diagram of the traffic signal state recognition device provided by an embodiment of the present application.
  • the device is applicable to an in-vehicle control terminal.
  • the device may include:
  • the image acquisition module 100 is configured to acquire an image to be recognized collected by the target image collection device
  • the area identification module 200 is configured to identify a traffic signal image area in the image to be identified
  • the feature extraction module 300 is configured to extract a convolutional neural network CNN feature of the traffic signal image area
  • the first light state determining module 400 is configured to determine a first traffic light state represented by the traffic light image area according to the CNN feature;
  • the recognition result determining module 500 is configured to determine a traffic light state recognition result according to the first traffic light state.
  • the area identification module 200 being configured to identify the traffic signal image area in the image to be identified specifically includes:
  • identifying the traffic signal image area in the image to be identified using a pre-trained traffic signal recognition model; the traffic signal recognition model is obtained by training positive samples and negative samples with a machine learning method, wherein the positive samples are images with traffic lights annotated from multiple street view images, and the negative samples are street view background images annotated from the multiple street view images.
  • the first light state determining module 400 being configured to determine, according to the CNN feature, the first traffic light state represented by the traffic light image area specifically includes:
  • identifying the traffic light state represented by the CNN feature using a pre-trained Softmax classifier; the Softmax classifier is trained with the CNN features of the traffic signal images of each traffic light state.
  • the image obtaining module 100 is configured to acquire an image to be recognized collected by the target image capturing device, and specifically includes:
  • selecting a target image capturing device from a plurality of preset image capturing devices, and acquiring the image to be recognized collected by the target image capturing device; wherein the plurality of image capturing devices respectively correspond to different focal length levels, one focal length level corresponds to one focal length range, and the higher the focal length level, the larger the focal length values in the corresponding focal length range and the farther the corresponding clear visible distance range.
  • the image obtaining module 100 is configured to select a target image capturing device from the plurality of preset image capturing devices, and specifically includes:
  • determining, according to the preset distance range corresponding to each image acquisition device, the image acquisition device whose distance range covers the distance of the current position from the nearest stop line, thereby selecting the target image acquisition device.
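The selection rule this module implements, picking the device whose preset distance range covers the current distance to the stop line, can be sketched as follows. The three ranges reuse the optional example values from the text (near under 30 m, mid 30–80 m, far 80–160 m) and are assumptions, not fixed figures.

```python
# Preset distance ranges per camera (metres); example values assumed
# from the text: near < 30 m, mid 30-80 m, far 80-160 m.
CAMERA_RANGES = [
    ("camera_1_near", 0.0, 30.0),
    ("camera_2_mid", 30.0, 80.0),
    ("camera_3_far", 80.0, 160.0),
]

def select_camera(distance_m):
    """Return the camera whose preset range covers distance_m, else None."""
    for name, lower, upper in CAMERA_RANGES:
        if lower <= distance_m < upper:
            return name
    return None  # beyond every clear visible range: no device selected
```

As the vehicle approaches the stop line the returned device changes from far to mid to near, which is exactly the dynamic re-selection the text describes.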
  • FIG. 15 is a block diagram showing another structure of the traffic signal state recognition device provided by the embodiment of the present application. As shown in FIG. 14 and FIG. 15, the device may further include:
  • the distance range setting module 600 is configured to determine the distance range corresponding to each image acquisition device according to the focal length of each image acquisition device, the clear visible distance of each image acquisition device, and the number of pixels occupied by the traffic signal in the images collected by each image acquisition device at various distances from the stop line.
  • the image obtaining module 100 is configured to determine a distance of the current position from the nearest stop line, and specifically includes:
  • the image obtaining module 100 may be configured to perform the step of acquiring the image to be recognized collected by the target image acquiring device when the distance of the current position from the nearest stop line is less than the predetermined distance limit.
  • the identification result determining module 500 is configured to determine a traffic signal state recognition result according to the first traffic signal state, and specifically includes:
  • the first traffic light state is verified, and if the verification result is passed, the first traffic light state is used as a traffic light state recognition result.
  • the recognition result determining module 500 may verify the first traffic light state in at least one of the following ways:
  • adding the first traffic light state to a preset sliding time window whose duration is a preset time; determining whether each traffic signal state newly determined within the preset time corresponds to the first traffic signal state; and if yes, determining that the first traffic signal state is the determined traffic signal state recognition result;
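A minimal sketch of that sliding-time-window check: the first traffic light state is accepted only if every state determined within the window agrees with it. The 2-second window length is an assumed placeholder for the preset time.

```python
from collections import deque

class SlidingWindowVerifier:
    """Keep (timestamp, state) pairs inside a fixed-length time window."""

    def __init__(self, window_seconds=2.0):
        self.window_seconds = window_seconds
        self.history = deque()

    def add(self, timestamp, state):
        """Record a newly determined state; drop entries outside the window."""
        self.history.append((timestamp, state))
        while timestamp - self.history[0][0] > self.window_seconds:
            self.history.popleft()

    def verify(self, candidate_state):
        """True only if every state in the window matches the candidate."""
        return bool(self.history) and all(
            s == candidate_state for _, s in self.history)
```

If a differing state appears inside the window, verification fails and the window effectively restarts from the newest determination, matching the delayed-output behavior the text describes.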
  • FIG. 16 is a block diagram showing another structure of the signal state recognition device provided by the embodiment of the present application. As shown in FIG. 14 and FIG. 16, the device may further include:
  • the traffic state prediction module 700 is configured to determine, when the front intersection is a multi-directional travel intersection, from the traffic signal state recognition result, a light state indicating a traffic signal in each direction of the front intersection; according to the traffic light indicating the direction of the front intersection The state of the lamp determines the state of the traffic state prediction in all directions of the front intersection.
  • the traffic state prediction module 700 is configured to determine, according to the traffic signal state recognition result, a light state of the traffic signal indicating the direction of the front intersection, specifically including:
  • determining the light state of the traffic signal indicating each direction of the front intersection according to the position, in the image to be recognized, of the traffic signal indicating each direction, and the light state of each traffic light in the traffic light state recognition result.
  • the traffic state prediction module 700 is configured to obtain a position where the traffic signal of each direction in the front road direction is converted in the image to be identified, and specifically includes:
  • converting the relative position of the motor vehicle and the traffic lights indicating the respective directions into the position, in the image to be recognized, of the traffic signal indicating each direction.
  • FIG. 17 shows a hardware structural block diagram of the in-vehicle control terminal.
  • the in-vehicle control terminal may include at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4; obviously, the in-vehicle control terminal may also have other hardware, such as a display module, a communication module (e.g., Bluetooth), a microphone, a camera, etc., which can be expanded according to the needs of the in-vehicle control terminal;
  • the number of the processor 1, the communication interface 2, the memory 3, and the communication bus 4 is at least one, and the processor 1, the communication interface 2, and the memory 3 complete communication with each other through the communication bus 4.
  • the communication interface 2 can be an interface of the communication module, such as an interface of the GSM module;
  • the processor 1 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • the memory 3 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
  • the memory 3 stores a program, and the processor 1 calls the program stored in the memory 3.
  • the program is specifically used to:
  • the function implementation details of the program, and the extension function can refer to the corresponding content above.
  • the embodiment of the present application further provides a motor vehicle, and the structure of the motor vehicle can refer to FIG. 1.
  • the motor vehicle can include at least one image acquisition device and an in-vehicle control terminal;
  • the at least one image acquisition device is configured to collect an image to be recognized in front of the vehicle body
  • the in-vehicle control terminal is configured to acquire an image to be recognized collected by the target image capturing device, where the target image capturing device is included in the at least one image capturing device; and identify a traffic signal image region in the image to be recognized; a convolutional neural network CNN feature of the traffic signal image area; determining a first traffic light state represented by the traffic signal image area according to the CNN feature; and determining a traffic light state recognition result according to the first traffic light state.
  • the function details and extended functions of the vehicle control terminal can be referred to the corresponding parts above.
  • the motor vehicle provided by the embodiment of the present application can improve the accuracy of the state recognition of the traffic signal, and provides a possibility for improving the automatic driving performance.
  • the signal light involved in the embodiment of the present application may include a traffic signal light, and may also include other types of signal lights, which are not limited in the embodiment of the present application.
  • the embodiment of the present application also provides a storage medium.
  • the storage medium stores a computer program, wherein the computer program is configured to execute the signal light state recognition method of the present application when run.
  • the foregoing storage medium may be located on at least one of the plurality of network devices in the network shown in the foregoing embodiment.
  • the storage medium is arranged to store program code for performing the following steps:
  • the identifying the traffic light image area in the image to be identified includes:
  • identifying the traffic signal image area in the image to be identified using a pre-trained traffic signal recognition model; the traffic signal recognition model is obtained by training positive samples and negative samples with a machine learning method, wherein the positive samples are images with traffic signals annotated from multiple street view images, and the negative samples are street view background images annotated from the multiple street view images.
  • the determining, according to the CNN feature, the first traffic light state represented by the traffic light image area comprises:
  • identifying the traffic light state represented by the CNN feature using a pre-trained Softmax classifier; the Softmax classifier is trained with the CNN features of the traffic signal images of each traffic light state.
  • the image to be recognized collected by the acquisition target image collection device includes:
  • selecting a target image capturing device from a plurality of preset image capturing devices, and acquiring the image to be recognized collected by the target image capturing device; wherein the plurality of image capturing devices respectively correspond to different focal length levels, one focal length level corresponds to one focal length range, and the higher the focal length level, the larger the focal length values in the corresponding focal length range and the farther the corresponding clear visible distance range.
  • the selecting the target image collection device from the preset plurality of image collection devices comprises:
  • determining, according to the preset distance range corresponding to each image acquisition device, the image acquisition device whose distance range covers the distance of the current position from the nearest stop line, thereby selecting the target image acquisition device.
  • the preset process of the distance range corresponding to each image collection device includes:
  • determining the distance range corresponding to each image acquisition device according to the focal length of each image acquisition device, the clear visible distance of each image acquisition device, and the number of pixels occupied by the traffic signal in the images collected by each image acquisition device at various distances from the stop line.
  • determining the distance of the current position from the nearest stop line includes:
  • the foregoing storage medium is further configured to store program code for performing the following step: when the distance of the current position from the nearest stop line is less than the predetermined distance limit, triggering execution of the step of acquiring the image to be recognized collected by the target image collection device.
  • Determining the traffic signal state recognition result according to the first traffic light state includes:
  • the first traffic light state is verified, and if the verification result is passed, the first traffic light state is used as a traffic light state recognition result.
  • the verifying the status of the first traffic light includes:
  • the continuously determined traffic signal state including the first traffic light state
  • the first traffic light state as the traffic light state recognition result includes:
  • determining the first traffic light state is the determined traffic light state recognition result.
  • the verifying the status of the first traffic light includes:
  • the first traffic light state as the traffic light state recognition result includes:
  • determining the first traffic light state is the determined traffic light state recognition result.
  • the verifying the status of the first traffic light includes:
  • the first traffic light state as the traffic light state recognition result includes:
  • the first traffic signal state is determined as the determined traffic signal state recognition result.
  • the verifying the status of the first traffic light includes:
  • the first traffic light state as the traffic light state recognition result includes:
  • the identified traffic signal state corresponds to the first traffic light state, determining that the first traffic light state is the determined traffic light state recognition result.
  • the foregoing storage medium is further configured to store program code for performing the following steps: if the front intersection is a multi-directional travel intersection, determining, from the traffic signal state recognition result, the light state of the traffic signal indicating each direction of the front intersection;
  • the traffic state prediction result in each direction of the front intersection is determined according to the state of the light indicating the traffic signal in each direction of the front intersection.
  • the determining, from the traffic signal state recognition result, determining a light state of the traffic signal indicating each direction of the front intersection includes:
  • determining the light state of the traffic signal indicating each direction of the front intersection according to the position, in the image to be recognized, of the traffic signal indicating each direction, and the light state of each traffic light in the traffic light state recognition result.
  • the acquiring the position of the traffic signal indicating the direction of the traffic light in each direction in the to-be-identified image includes:
  • converting the relative position of the motor vehicle and the traffic lights indicating the respective directions into the position, in the image to be recognized, of the traffic signal indicating each direction.
  • the foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
  • the software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
  • the in-vehicle control terminal may acquire the image to be recognized collected by the target image acquisition device, identify the traffic signal image region in the image to be identified, extract the CNN features of the traffic signal image area, determine the first traffic light state represented by the traffic light image area according to the CNN features, and determine the traffic light state recognition result according to the first traffic light state. Because CNN features are extracted on the basis of training with massive samples, they can resist multiple effects such as scale transformation, color transformation and lighting transformation. Therefore, using CNN features to extract image features in the traffic signal image region, and determining the traffic signal state represented by the image area based on the CNN features, can reduce the influence of complex environmental lighting changes on the accuracy of traffic signal state recognition and improve that accuracy.


Abstract

An embodiment of the present application provides a signal light state recognition method and device, an in-vehicle control terminal and a motor vehicle. The method includes: acquiring an image to be recognized collected by a target image acquisition device; identifying a traffic signal image area in the image to be recognized; extracting convolutional neural network (CNN) features of the traffic signal image area; determining, according to the CNN features, a first traffic signal state represented by the traffic signal image area; and determining a traffic signal state recognition result according to the first traffic signal state. The embodiments of the present application can improve the accuracy of traffic signal state recognition.

Description

Signal light state recognition method and device, in-vehicle control terminal and motor vehicle
This application claims priority to Chinese patent application No. 2017103042071, filed with the China Patent Office on May 3, 2017 and entitled "Traffic signal state recognition method and device, in-vehicle control terminal and motor vehicle", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of data processing, and in particular to a signal light state recognition method and device, an in-vehicle control terminal and a motor vehicle.
Background
Traffic signal state recognition refers to recognizing the state of a traffic signal; for a conventional traffic light, for example, it may mean recognizing the on/off states of the red, green and yellow lights. Traffic signal state recognition can provide a basis for judging the traffic state of an intersection and deciding how a motor vehicle should drive. It has in-depth applications in automatic driving, navigation (driving prompts) and other aspects, and is particularly significant for improving the reliable driving of self-driving motor vehicles on the road.
At present, traffic signal state recognition mainly relies on computer vision technology. However, owing to complex lighting changes in the environment (such as frontlighting, backlighting, haze, night, occlusion by leaves, etc.), traffic signal state recognition based on computer vision often suffers from low recognition accuracy; how to improve the accuracy of traffic signal state recognition has therefore long been a problem that those skilled in the art are eager to solve.
Summary
In view of this, embodiments of the present application provide a signal light state recognition method and device, an in-vehicle control terminal and a motor vehicle, so as to improve the accuracy of traffic signal state recognition.
To achieve the above purpose, the embodiments of the present application provide the following technical solutions:
A signal light state recognition method, including:
acquiring an image to be recognized collected by a target image acquisition device;
identifying a traffic signal image area in the image to be recognized;
extracting convolutional neural network (CNN) features of the traffic signal image area;
determining, according to the CNN features, a first traffic signal state represented by the traffic signal image area;
determining a traffic signal state recognition result according to the first traffic signal state.
An embodiment of the present application further provides a signal light state recognition device, including:
an image acquisition module, configured to acquire an image to be recognized collected by a target image acquisition device;
an area recognition module, configured to identify a traffic signal image area in the image to be recognized;
a feature extraction module, configured to extract convolutional neural network (CNN) features of the traffic signal image area;
a first light state determining module, configured to determine, according to the CNN features, a first traffic signal state represented by the traffic signal image area;
a recognition result determining module, configured to determine a traffic signal state recognition result according to the first traffic signal state.
An embodiment of the present application further provides an in-vehicle control terminal, including: a memory and a processor;
the memory stores a program, the processor calls the program stored in the memory, and the program is used to:
acquire an image to be recognized collected by a target image acquisition device;
identify a traffic signal image area in the image to be recognized;
extract convolutional neural network (CNN) features of the traffic signal image area;
determine, according to the CNN features, a first traffic signal state represented by the traffic signal image area;
determine a traffic signal state recognition result according to the first traffic signal state.
An embodiment of the present application further provides a motor vehicle, including: at least one image acquisition device and an in-vehicle control terminal;
wherein the at least one image acquisition device is configured to collect an image to be recognized in front of the vehicle body;
the in-vehicle control terminal is configured to acquire an image to be recognized collected by a target image acquisition device, the target image acquisition device being included in the at least one image acquisition device; identify a traffic signal image area in the image to be recognized; extract convolutional neural network (CNN) features of the traffic signal image area; determine, according to the CNN features, a first traffic signal state represented by the traffic signal image area; and determine a traffic signal state recognition result according to the first traffic signal state.
An embodiment of the present application further provides a storage medium storing a computer program, wherein the computer program is configured to execute, when run, the traffic signal state recognition method described in any one of the above.
Based on the above technical solutions, in the signal light state recognition method provided by the embodiments of the present application, the in-vehicle control terminal can acquire the image to be recognized collected by the target image acquisition device, identify the traffic signal image area in the image to be recognized, extract the CNN features of the traffic signal image area, determine, according to the CNN features, the first traffic signal state represented by the traffic signal image area, and determine the traffic signal state recognition result according to the first traffic signal state. Because CNN features are extracted on the basis of training with massive and rich samples, they can resist multiple influences such as scale transformation, color transformation and lighting transformation. Therefore, using CNN features for image feature extraction in the traffic signal image area and determining the traffic signal state represented by that area based on the CNN features can reduce the influence of complex environmental lighting changes on the accuracy of traffic signal state recognition and improve that accuracy.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a schematic diagram of a motor vehicle provided by an embodiment of the present application;
FIG. 2 is a signaling flowchart of the signal light state recognition method provided by an embodiment of the present application;
FIG. 3 is a flowchart of the signal light state recognition method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of training a traffic signal recognition model;
FIG. 5 is a schematic diagram of training a Softmax classifier;
FIG. 6 is a flowchart of a method for determining a signal light state recognition result;
FIG. 7 is a schematic diagram of setting the light state transition logic of the traffic signals at an intersection;
FIG. 8 is an example diagram of matching light state transition logic;
FIG. 9 is a flowchart of another method for determining a signal light state recognition result;
FIG. 10 is an example diagram of light state matching based on a sliding time window;
FIG. 11 is a flowchart of still another method for determining a signal light state recognition result;
FIG. 12 is a flowchart of a method for predicting the traffic state of the intersection ahead provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of an application of the traffic signal state recognition method;
FIG. 14 is a structural block diagram of the signal light state recognition device provided by an embodiment of the present application;
FIG. 15 is another structural block diagram of the signal light state recognition device provided by an embodiment of the present application;
FIG. 16 is still another structural block diagram of the signal light state recognition device provided by an embodiment of the present application;
FIG. 17 is a hardware structural block diagram of the in-vehicle control terminal.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only part of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application.
The signal light state recognition method provided by the embodiments of the present application can be applied to a motor vehicle; the in-vehicle control terminal of the motor vehicle can implement the signal light state recognition method provided by the embodiments of the present application by loading a corresponding program, so that the motor vehicle can accurately recognize the state of the traffic signals at intersections while driving on the road. Optionally, in the embodiments of the present application, the energy form of the motor vehicle includes but is not limited to electric, gasoline, etc.
As shown in FIG. 1, a motor vehicle 10 may include: at least one image acquisition device 11 and an in-vehicle control terminal 12. It should be noted that the number of image acquisition devices shown in FIG. 1 is three, but in actual use the number of image acquisition devices may be one or more, depending on actual usage requirements.
In the embodiments of the present application, the image acquisition device 11 may be implemented with a camera, and of course may also be implemented with other devices having an image acquisition function;
Optionally, the image acquisition device 11 may be arranged on the top of the vehicle body, with its acquisition view corresponding to the front of the vehicle body (optionally, facing straight ahead), so as to collect the image to be recognized in front of the vehicle body. The image to be recognized can be regarded as the image on which traffic signal state recognition is to be performed in the embodiments of the present application; here, the image collected by the image acquisition device may cover the road ahead and may contain vehicles ahead, traffic signals, etc.;
Optionally, if there are multiple image acquisition devices, they may be arranged horizontally on the top of the vehicle body at predetermined intervals, with each device's acquisition view corresponding to the front of the vehicle body (e.g., facing straight ahead);
Optionally, when multiple image acquisition devices are used, the embodiments of the present application may select image acquisition devices with different focal lengths, i.e., the focal lengths of the multiple devices differ from one another. Preferably, the focal lengths of the multiple devices may belong to different focal length levels; one focal length level may correspond to one focal length range, and the higher the focal length level, the larger the focal length values in the corresponding range. Optionally, the image acquisition devices used in the embodiments of the present application may have focal length adjustment capability; for example, the focal length of an image acquisition device may be adjusted within the focal length range corresponding to its focal length level;
Obviously, the above arrangement of the image acquisition devices is only optional, and the embodiments of the present application do not exclude other arrangements, as long as the image acquisition devices can collect images covering the traffic signals while the motor vehicle is driving. For example, the image acquisition device may be arranged at the front windshield (e.g., at the top of the front windshield) with a certain inclination so that the collected image to be recognized can cover the traffic signals at the intersection ahead; of course, the image acquisition device may also be arranged at the junction of the front windshield and the roof. The arrangements described above are applicable to the case of one or multiple image acquisition devices;
The image to be recognized collected by the image acquisition device 11 may or may not contain a traffic signal, which can be detected and determined by the in-vehicle control terminal; for example, if the motor vehicle is far from the intersection, the image acquisition device may not be able to collect an image containing a traffic signal.
The in-vehicle control terminal 12 may be a control terminal with data processing capability built into the motor vehicle, such as a built-in trip computer. The in-vehicle control terminal 12 and the image acquisition device 11 may be connected through the vehicle communication bus, or through wireless communication such as Bluetooth or WiFi (wireless fidelity);
On the other hand, the in-vehicle control terminal 12 may also be a user device placed in the motor vehicle (such as the user's mobile phone). The user device may connect to the image acquisition device 11 through wireless communication such as Bluetooth or WiFi without connecting to the vehicle's external communication interface (such as an external USB interface); optionally, the user device may also connect to the vehicle's external USB interface so as to access the vehicle communication bus and interact with the image acquisition device 11 through it.
In the embodiments of the present application, the in-vehicle control terminal 12 can communicate with the image acquisition device 11, acquire the image to be recognized collected by the image acquisition device 11, locate the traffic signal image area in the image to be recognized, perform CNN (convolutional neural network) feature extraction on the traffic signal image area, identify the first traffic signal state corresponding to the extracted CNN features, and determine the traffic signal state recognition result according to the first traffic signal state, thereby realizing state recognition of the traffic signals on the road ahead.
Along this line of thought, FIG. 2 shows a signaling flowchart of the traffic signal state recognition method provided by an embodiment of the present application. The signaling flow may be based on the case of multiple (at least two) image acquisition devices. Referring to FIG. 2, the flow may include:
Step S10: the in-vehicle control terminal locates the current position.
Optionally, a locator may be provided in the in-vehicle control terminal to locate the current position; positioning may be implemented using RTK (real-time kinematic) positioning, satellite (GPS, BeiDou, etc.) positioning, base station positioning, etc., and of course the form of the locator may be adjusted according to the positioning method used;
Optionally, the current position located by the in-vehicle control terminal can be regarded as the current position of the motor vehicle.
Step S11: the in-vehicle control terminal sends a query request to a map server according to the current position, the query request being used to query the distance of the current position from the nearest stop line ahead.
The nearest stop line ahead refers to the stop line of the nearest intersection in the driving direction of the motor vehicle. The role of the stop line is to indicate that, when the traffic signal at the intersection ahead is red, the motor vehicle can only wait behind the stop line until the signal turns green, and must not cross the line while the signal is red;
After locating the current position, the in-vehicle control terminal can send a query request to the map server; after receiving the query request, the map server can look up, in the map data, the position of the stop line of the nearest intersection in the driving direction according to the current position, and determine the distance between that stop line position and the current position as the distance of the motor vehicle's current position from the nearest stop line ahead.
Optionally, the in-vehicle control terminal can locate the current position in real time and send query requests accordingly in real time, so as to determine the distance from the nearest stop line ahead in real time; optionally, it can also locate the current position and send query requests periodically, e.g., with a positioning and query frequency of 10 Hz. Obviously, this value is only optional and can be set according to the timing and query requirements.
Step S12: the map server feeds the queried distance back to the in-vehicle control terminal.
Optionally, steps S11 to S12 show an optional way for the in-vehicle control terminal to determine the distance of the current position from the nearest stop line ahead. Besides querying the map server as shown in steps S11 to S12, the in-vehicle control terminal may also have preset map data, look up the position of the nearest stop line ahead in the preset map data according to the located current position, and determine the distance according to the stop line position and the current position; that is, with locally preset map data, the distance of the current position from the nearest stop line ahead is queried locally according to the located current position.
Optionally, the in-vehicle control terminal may also query the map server for the position of the nearest stop line ahead according to the current position, and then determine the distance of the current position from the nearest stop line ahead according to the current position and the position of the nearest stop line ahead.
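The locally-preset-map variant of this distance determination can be sketched as follows: given the vehicle's position and heading, find the nearest stop line in the driving direction and return its distance. The planar coordinates in metres and the sample stop-line positions are assumptions for illustration, not part of the embodiment's map format.

```python
import math

def distance_to_nearest_stop_line(position, heading_deg, stop_lines):
    """Distance (m) from position to the nearest stop line in the driving
    direction; stop lines behind the vehicle are ignored. Returns None
    when no stop line lies ahead."""
    hx = math.cos(math.radians(heading_deg))
    hy = math.sin(math.radians(heading_deg))
    best = None
    for sx, sy in stop_lines:
        dx, dy = sx - position[0], sy - position[1]
        if dx * hx + dy * hy <= 0.0:  # behind (or beside) the vehicle
            continue
        dist = math.hypot(dx, dy)
        if best is None or dist < best:
            best = dist
    return best
```

Run at the 10 Hz positioning rate mentioned above, this yields the continuously updated distance that drives the later camera-selection step.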
Step S13: the in-vehicle control terminal selects, according to the distance, a target image acquisition device from the preset multiple image acquisition devices.
Optionally, the preset multiple image acquisition devices may respectively correspond to different focal length levels, i.e., one image acquisition device may correspond to one focal length level, one focal length level may correspond to one focal length range, and the higher the focal length level, the larger the focal length values in the corresponding range and the farther the corresponding clear visible distance range; optionally, an image acquisition device may have focal length adjustment capability and may adjust its focal length within the focal length range of its focal length level;
Preferably, the focal length ranges of consecutive focal length levels may be contiguous, e.g., the focal length range of the first focal length level and that of the second focal length level may be contiguous; and the clear visible distance ranges of consecutive focal length levels may also be contiguous, e.g., the clear visible distance range of the first focal length level and that of the second focal length level may be contiguous.
After determining the distance of the current position from the nearest stop line ahead, the in-vehicle control terminal can determine, according to the preset distance ranges corresponding to the image acquisition devices, the image acquisition device whose distance range covers that distance, thereby selecting the target image acquisition device.
It can be seen that as the motor vehicle drives, the distance of the current position from the nearest stop line ahead will change dynamically, and the target image acquisition device selected from the multiple image acquisition devices will be adjusted accordingly, dynamically adjusting the selected target device and improving the likelihood that the image to be recognized used for processing is clear.
Optionally, the embodiments of the present application may also require that the distance of the current position from the nearest stop line ahead is less than a predetermined distance limit before step S13 is executed. If the distance is large, it may exceed the clear visible distance of the images collected by the multiple image acquisition devices, making the traffic signal in the collected image blurry and reducing the accuracy of the subsequent traffic signal state recognition; therefore, preferably, the embodiments of the present application may execute the step of "selecting, according to the distance, a target image acquisition device from the preset multiple image acquisition devices" (i.e., step S13) when the distance of the current position from the nearest stop line ahead is less than the predetermined distance limit;
Optionally, the predetermined distance limit may be set according to the highest image acquisition capability among the multiple image acquisition devices (e.g., according to the largest clear visible distance among them), for example 150 meters (this value is only an optional example).
Step S14: the in-vehicle control terminal acquires the image to be recognized collected by the target image acquisition device.
Optionally, the multiple image acquisition devices may be collecting images in real time, and the in-vehicle control terminal may, after selecting the target image acquisition device, acquire the image to be recognized currently collected by it;
Optionally, the in-vehicle control terminal may also control the multiple image acquisition devices to collect images when the distance of the current position from the nearest stop line ahead is less than the predetermined distance limit, and then, after selecting the target image acquisition device from them, acquire the image to be recognized currently collected by the target device.
Optionally, steps S10 to S14 are only one optional way for the in-vehicle control terminal to acquire the image to be recognized collected by the target image acquisition device when multiple image acquisition devices are used. The embodiments of the present application may also select the target image acquisition device from the multiple devices according to a preset device selection order and acquire the image it collects, rather than necessarily selecting the target device according to the distance of the current position from the nearest stop line ahead as shown in steps S10 to S14; selecting the target device according to that distance is only one optional implementation.
Step S15: the in-vehicle control terminal identifies the traffic signal image area in the image to be recognized.
Optionally, after acquiring the image to be recognized, the in-vehicle control terminal can locate the position of the traffic signal in the image to be recognized and, according to the located position, identify the traffic signal image area from the image to be recognized.
Optionally, the embodiments of the present application may pre-train a traffic signal recognition model and locate the traffic signal image area in the image to be recognized with it;
Optionally, the traffic signal recognition model may be obtained by training positive and negative samples with a machine learning method (such as a deep convolutional neural network method). The positive samples may be images with traffic signals annotated from street view images (e.g., traffic signal images of various traffic signal states annotated from street view images, such as images in the red, yellow and green lit states respectively), and the negative samples may be street view background images annotated from street view images (street view background images contain no traffic signal).
Optionally, the image to be recognized may or may not contain a traffic signal (i.e., the traffic signal recognition model may or may not identify a traffic signal image area in the image to be recognized); step S15 specifically refers to the case where the image to be recognized contains a traffic signal.
Step S16: the in-vehicle control terminal extracts the CNN features of the traffic signal image area.
CNN features can be regarded as a kind of image feature. The embodiments of the present application use CNN features for image feature extraction in the traffic signal image area, so that the extracted image features can resist the influences caused by scale transformation, color transformation, lighting transformation, etc.;
It should be noted that, compared with representing image features using HSV features (where H denotes hue, S denotes saturation and V denotes value) and the like, since CNN features are extracted on the basis of training with massive and rich samples and can resist multiple influences such as scale transformation, color transformation and lighting transformation, the image features represented by CNN features are less susceptible to environmental influences such as illumination and occlusion than HSV features and the like, which mainly represent image features by chrominance information; this gives the subsequent traffic signal state recognition result higher accuracy.
Step S17: the in-vehicle control terminal determines, according to the CNN features, the first traffic signal state represented by the traffic signal image area.
The traffic signal state represented by the traffic signal image area refers to the on/off states of the red, green and yellow lights in the area (generally, only one of the red, green and yellow lights of a traffic signal is on at a time while the others are off, and the traffic signal image area may contain a light group composed of multiple traffic signals);
The embodiments of the present application may classify in advance the traffic signal images representing each traffic signal state (one traffic signal state corresponds to multiple classified traffic signal images), perform CNN feature extraction on the traffic signal images of each state respectively, and train a traffic signal state classification model with the CNN features of the traffic signal images of each state; the traffic signal state classification model can represent the CNN features corresponding to the traffic signal images of each traffic signal state;
Optionally, if the traffic signal is a traffic signal group combining multiple signals (for example, the traffic signals at a crossroads generally consist of three signals indicating going straight, turning left and turning right respectively), the states of the signals in the group need to be classified (e.g., if the group has three signals, each distinct combined state of the three needs to be treated as its own class), and the CNN features of each class are extracted to train the traffic signal state classification model;
With the pre-trained traffic signal state classification model, the embodiments of the present application can identify the traffic signal state corresponding to the CNN features of the traffic signal image area, obtaining the traffic signal state represented by that area.
Optionally, the traffic signal state classification model can be represented by a Softmax classifier; Softmax can be cascaded with the CNN features, and the Softmax classifier is trained to classify traffic signal images according to the CNN features corresponding to the various traffic signal states.
Step S18: the in-vehicle control terminal determines the traffic signal state recognition result according to the first traffic signal state.
Optionally, in the embodiments of the present application, the first traffic signal state specifically refers to the traffic signal state represented by the traffic signal image area as currently determined according to the CNN features. The first traffic signal state may be any type of traffic signal state (such as green on, red on, yellow on); since the embodiments of the present application can determine the traffic signal state in real time from continuously collected images to be recognized, the first traffic signal state can be determined continuously over time, and the currently referred-to first traffic signal state may be the traffic signal state determined from the current image to be recognized;
It is worth noting that, for a single signal, the first traffic signal state may be the state of that signal; for a signal group composed of multiple signals (such as at a crossroads), the first traffic signal state may be the light states of the individual signals in the group, and determining those light states is also based on CNN features, on the same principle.
Optionally, the embodiments of the present application may directly take the first traffic signal state determined in step S17 as the traffic signal state recognition result; or may verify the first traffic signal state determined in step S17 and, after the verification passes, take it as the traffic signal state recognition result. Of course, if the verification of the first traffic signal state does not pass, it can be determined that the first traffic signal state is not the determined recognition result, and the traffic signal state recognition result may be empty or a recognition failure;
Optionally, the verification may combine the multiple traffic signal states continuously determined through step S17 to judge whether the transition logic of the traffic signal states is correct, judge whether the first traffic signal state determined in step S17 stays without jumping within a set time, and so on.
Optionally, the flow shown in FIG. 2 is implemented with multiple image acquisition devices; the embodiments of the present application may also implement the provided traffic signal state recognition method with a single image acquisition device. Of course, since the flow shown in FIG. 2 can adjust the target image acquisition device selected from the multiple devices as the motor vehicle drives, ensuring that the image currently collected from the target device has high clarity, the multi-device scheme is preferable, but this does not exclude the possibility of implementing CNN-feature-based traffic signal state recognition with a single image acquisition device.
Optionally, FIG. 3 shows a flowchart of the traffic signal state recognition method provided by an embodiment of the present application. The method can be applied to the in-vehicle control terminal. Referring to FIG. 3, the method may include:
Step S20: acquire the image to be recognized collected by the target image acquisition device.
The target image acquisition device may be a separately provided image acquisition device in the embodiments of the present application.
Optionally, in the embodiments of the present application, the target image acquisition device may be collecting images in real time, so that the in-vehicle control terminal can acquire the image to be recognized it collects in real time;
Optionally, the in-vehicle control terminal may also execute step S20 when the distance of the current position from the nearest stop line ahead is less than the predetermined distance limit, thereby acquiring the image to be recognized collected by the target image acquisition device. Optionally, the in-vehicle control terminal may trigger the target image acquisition device to start collecting images when the distance of the current position from the nearest stop line ahead is less than the predetermined distance limit, thereby acquiring the image to be recognized it collects; the target image acquisition device may also be collecting images in real time, so that the in-vehicle control terminal acquires the image it currently collects when the distance of the current position from the nearest stop line ahead is less than the predetermined distance limit.
Optionally, the distance of the current position from the nearest stop line ahead can be determined as in the corresponding flow shown in FIG. 2, i.e., by querying the map server or by querying locally preset map data.
Step S21: identify the traffic signal image area in the image to be recognized.
Step S22: extract the CNN features of the traffic signal image area.
Step S23: determine, according to the CNN features, the first traffic signal state represented by the traffic signal image area.
Step S24: determine the traffic signal state recognition result according to the first traffic signal state.
In the signal light state recognition method provided by the embodiments of the present application, the in-vehicle control terminal can acquire the image to be recognized collected by the target image acquisition device, identify the traffic signal image area in the image to be recognized, extract the CNN features of the traffic signal image area, determine, according to the CNN features, the first traffic signal state represented by the area, and determine the traffic signal state recognition result according to the first traffic signal state. Because CNN features are extracted on the basis of training with massive and rich samples and can resist multiple influences such as scale transformation, color transformation and lighting transformation, using CNN features for image feature extraction in the traffic signal image area and determining the traffic signal state represented by the area based on the CNN features can reduce the influence of complex environmental lighting changes on the accuracy of traffic signal state recognition and improve that accuracy.
On this basis, the embodiments of the present application preferably use multiple image acquisition devices, as shown in FIG. 2, to realize traffic signal state recognition and, as the motor vehicle drives, select the target image acquisition device from the multiple devices according to the distance of the current position from the nearest stop line ahead;
Preferably, the embodiments of the present application can set the distance range corresponding to each image acquisition device, so that as the motor vehicle drives, the target image acquisition device determined according to the distance of the current position from the nearest stop line ahead is continuously adjusted according to the preset distance ranges of the devices;
Optionally, the distance range corresponding to each image acquisition device can be determined according to each device's focal length, clear visible distance, and the number of pixels occupied by the traffic signal in the images collected by each device at each distance from the stop line, so as to ensure that within any set distance range, the corresponding device's image quality is the clearest and the traffic signal occupies enough pixels in the image (usually the short edge is greater than 30 pixels; the value of 30 here is only an optional example, and the pixel threshold can be set as needed), guaranteeing the stability and accuracy of subsequent traffic signal state recognition;
That is, the multiple image acquisition devices are composed of devices with different focal lengths and different view ranges (i.e., they belong to different focal length levels). The embodiments of the present application can determine the image acquisition device corresponding to each distance range according to lens focal length, clear visible distance, and the number of pixels the traffic signal occupies in the image, thereby ensuring that within each set distance range, the image collected by the corresponding device is the clearest and the traffic light group occupies more pixels than the pixel threshold (e.g., usually a short edge greater than 30 pixels), guaranteeing the accuracy and stability of subsequent traffic signal state recognition results. With the same recognition algorithm, the multi-device scheme can substantially improve the accuracy of traffic signal state recognition, and it also makes up for the limited view range and insufficient clear visible distance of a single-focal-length image acquisition device.
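The pixel criterion behind these distance ranges can be checked with a pinhole-camera estimate: an object of size H at distance d, seen through a lens of focal length f, spans f times H over d on the sensor. The 0.3 m lamp-housing short edge and 3 micrometre pixel pitch below are assumed values for illustration, not figures from the text.

```python
def pixels_on_sensor(focal_mm, object_m, distance_m, pixel_pitch_um=3.0):
    """Pixels spanned by the object's image under a pinhole camera model."""
    image_um = focal_mm * 1000.0 * object_m / distance_m  # f*H/d, micrometres
    return image_um / pixel_pitch_um

def max_clear_distance(focal_mm, object_m=0.3, min_pixels=30,
                       pixel_pitch_um=3.0):
    """Farthest distance (m) at which the object still spans min_pixels."""
    return focal_mm * 1000.0 * object_m / (min_pixels * pixel_pitch_um)
```

Under these assumptions a 50 mm lens keeps a 0.3 m lamp edge above 30 pixels out to roughly 167 m, consistent in spirit with the approximately 160 m far-range example given earlier.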
Optionally, in order to identify the traffic signal image area from the image to be recognized, the embodiments of the present application may pre-train a traffic signal recognition model. FIG. 4 shows a schematic diagram of training the traffic signal recognition model. Referring to FIG. 4, the embodiments of the present application may collect massive street view images and record them in a street view image database, determine images containing traffic signals from the street view image database and annotate them as positive samples, and determine street view background images from the street view image database and annotate them as negative samples;
That is, the positive samples may be images with traffic signals annotated from the street view image database, such as traffic signal images in the red, yellow and green lit states respectively (in the case of a traffic signal group, they may also be, for example, images in which one signal is green and the others are red, i.e., the lit-state images formed after combining the light states of the individual signals in the group), and the negative samples may be street view background images (containing no traffic signal) annotated from the street view image database; the positive and negative samples are then trained with a machine learning method such as a deep convolutional neural network to obtain the traffic signal recognition model;
Optionally, the street view image database may record a large number of street view images; the order of magnitude can be set as needed, e.g., hundreds of thousands, and the street view images in the database may cover the street views of multiple cities.
Optionally, annotating positive and negative samples in the street view image database may be done manually, or the street view images entered into the database may be annotated with positive and negative samples in advance when the database is established.
Optionally, after identifying the traffic signal image area from the image to be recognized and extracting the CNN features, the embodiments of the present application may use a Softmax classifier to identify the traffic signal state represented by the extracted CNN features. Optionally, the training process of the Softmax classifier may be as shown in FIG. 5. Taking a traffic signal with red, yellow and green lights as an example, the embodiments of the present application may classify traffic signal images into red-on, yellow-on and green-on, where the red-on traffic signal state corresponds to multiple traffic signal images, the green-on traffic signal state corresponds to multiple traffic signal images, and the yellow-on traffic signal state corresponds to multiple traffic signal images;
Thus the CNN features of the red-on traffic signal images, the green-on traffic signal images and the yellow-on traffic signal images can be extracted respectively. Obviously, in the case of a traffic signal group, each traffic signal state needs to be judged by combining the signals in the group: any distinct light state formed by combining the signals in the group can serve as one traffic signal state of the group, so that the traffic signal images of each traffic signal state are classified;
Based on Softmax, cascaded with the CNN features of the traffic signal images of each traffic signal state, the Softmax classifier (an optional form of the traffic signal state classification model) is trained.
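As a toy illustration of that cascade, the sketch below fits a Softmax classifier to labelled feature vectors by cross-entropy gradient descent. The one-hot 3-dimensional "features" stand in for real CNN features of red, green and yellow samples; nothing here is the embodiment's actual training setup.

```python
import math

def softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def train_softmax(samples, n_classes, dim, lr=0.5, epochs=100):
    """Fit one weight row per class by cross-entropy gradient descent."""
    weights = [[0.0] * dim for _ in range(n_classes)]
    for _ in range(epochs):
        for features, label in samples:
            probs = softmax([sum(w * f for w, f in zip(row, features))
                             for row in weights])
            for k in range(n_classes):
                grad = probs[k] - (1.0 if k == label else 0.0)
                for j in range(dim):
                    weights[k][j] -= lr * grad * features[j]
    return weights

def predict(weights, features):
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return scores.index(max(scores))

# Stand-in CNN features: one prototype per lit state
# (label 0 = red on, 1 = green on, 2 = yellow on).
SAMPLES = [([1.0, 0.0, 0.0], 0), ([0.0, 1.0, 0.0], 1), ([0.0, 0.0, 1.0], 2)]
W = train_softmax(SAMPLES, n_classes=3, dim=3)
```

For a light group, the label set would instead enumerate the distinct combined states of the signals in the group, as the text notes; the fitting procedure is unchanged.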
The traffic signal state determined according to the CNN features as described above can already reduce the influence of complex environmental lighting changes on the accuracy of traffic signal state recognition and improve that accuracy. On this basis, the embodiments of the present application can verify the first traffic signal state determined based on the CNN features, and take it as the traffic signal state recognition result only when the verification passes, so as to further improve the accuracy and stability of traffic signal state recognition.
Optionally, for each traffic signal, its light state transition logic is generally preset. For example, the transition logic of a traffic light (i.e., the red-green-yellow transition logic) generally cycles in the order red to green to yellow to red, and the durations for which the red, green and yellow lights are maintained at a given time are set. Based on this, the embodiments of the present application can verify the continuously determined traffic signal states against the light state transition logic of the traffic signals at the intersection ahead, so as to filter out erroneous determinations of the traffic signal state caused by weather, lighting, viewing angle, misclassification of the lights, etc., improving the accuracy of traffic signal state recognition;
可选的,图6示出了本申请实施例提供的确定信号灯状态识别结果的方法流程图,该方法可应用于车载控制终端,参照图6,该方法可以包括:
步骤S30、获取前方路口的交通信号灯的灯状态变换逻辑。
可选的,本申请实施例可定位车载控制终端的当前位置,通过查询地图数据,确定机动车驾驶前方与当前位置最近的路口,从数据库(网络数据库或本地数据库)中记录的各路口的交通信号灯的灯状态变换逻辑中,获取所确定的路口相应的交通信号灯的灯状态变换逻辑;
可选的,本申请实施例可为各个路口设置路口标记,并将路口的路口标记与所在位置相关联,且对于每一个路口,定义各可能行驶方向相应设置的交通信号灯的灯标记;如图7所示,对于每一个路口,设置路口标记并关联上路口位置后,对于路口各可能行驶方向上设置的交通信号灯,本申请实施例可以均设定交通信号灯的灯标记,从而对于每一个路口,确定出路口的路口标记,与各可能行驶方向相应设置的交通信号灯的灯标记的对应关系,并关联到各灯标记相应的灯状态变换逻辑;
从而车载控制终端确定机动车驾驶前方与当前位置最近的路口后,可根据前方最近路口的路口位置确定出相应的路口标记,以所确定的路口标记和机动车的驾驶方向,确定出前方路口设置的交通信号灯的灯标记,以所确定的灯标记获取到所关联的灯状态变换逻辑。
显然,上述描述的灯状态变换逻辑获取方式仅是可选的,在仅考虑灯状态变换顺序的情况下,各交通信号灯的灯状态变换逻辑可以是统一的。
步骤S31、获取连续确定的交通信号灯状态,所述连续确定的交通信号灯状态包括所述第一交通信号灯状态。
可选的,本申请实施例可在距前方停止线的距离处于设定验证距离范围时(例如设定验证距离范围,为距前方停止线的距离150米以内,此处的数值仅是示例,具体可根据实际需要设定),对处于设定验证距离范围内连续确定的交通信号灯状态进行收集,从而在确定出第一交通信号灯状态后,将处于设定验证距离范围内时,历史确定的交通信号灯状态与第一交通信号灯状态进行结合,获取到连续确定的交通信号灯状态;可选的,在该连续确定的交通信号灯状态中,第一交通信号灯状态可以处于末尾;
例如,本申请实施例可在距前方停止线的距离处于150米以内时,进行所确定的交通信号灯状态的收集,在确定当前的第一交通信号灯状态后(如当前距前方停止线的距离为50米),则可将距前方停止线的距离处于150米以内时,历史确定的交通信号灯状态(当前距前方停止线的距离为150米至50米时,所确定的交通信号灯状态),与当前确定的第一交通信号灯状态(当前距前方停止线的距离为50米时,所确定的交通信号灯状态)相结合,获取到连续确定的交通信号灯状态;值得注意的是,本段的数值内容仅是举例性说明,其不应成为本申请实施例保护范围的限制。
步骤S32、判断所述连续确定的交通信号灯状态的灯状态变换逻辑,是否与前方路口的交通信号灯的灯状态变换逻辑相匹配,若否,执行步骤S33,若是,执行步骤S34。
可选的,本申请实施例可以判断连续确定的交通信号灯状态的亮灯顺序,与前方路口的交通信号灯的亮灯顺序是否匹配;当然,除单纯的亮灯顺序的比对外,本申请实施例还可以加入亮灯维持时间,如在连续确定的交通信号灯状态中存在灯状态跳转时,判断跳转后的灯状态的维持时间,是否与前方路口的交通信号灯的该灯状态的维持时间相匹配等。
步骤S33、确定第一交通信号灯状态不为,所确定的交通信号灯状态识别结果。
结合第一交通信号灯状态后的连续确定的交通信号灯状态的灯状态变换逻辑,与前方路口的交通信号灯的灯状态变换逻辑不相匹配,说明由于天气、光线、视角、红绿灯误分类等影响,造成了交通信号灯识别结果的闪烁问题(闪烁指的是在短时间内,如2秒内(具体时间可视实际需求设定),交通信号灯识别结果输出不稳定的情况);
如图8所示,基于连续确定的交通信号灯状态的灯状态变换逻辑为绿灯至黄灯的顺序,而前方路口的交通信号灯的灯状态变换逻辑为红灯至绿灯至红灯至黄灯的循环顺序,两者不匹配,存在交通信号灯识别结果的闪烁问题。
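上述依据灯状态变换逻辑校验连续确定的交通信号灯状态的思路,可用如下示意性草图表达;此处以红灯至绿灯至黄灯至红灯的循环顺序为例,并假设状态在多帧间保持不变视为合法,具体变换逻辑以前方路口实际配置为准:

```python
# 示意性草图:校验连续确定的灯状态序列是否符合预置的循环变换逻辑。
CYCLE = ["red", "green", "yellow"]  # 红 -> 绿 -> 黄 -> 红 循环

def transitions_valid(states):
    """相邻两个不同状态之间,后者必须是前者在循环顺序中的下一个状态;
    状态保持不变(同一灯态持续多帧)视为合法。"""
    for prev, cur in zip(states, states[1:]):
        if cur == prev:
            continue
        nxt = CYCLE[(CYCLE.index(prev) + 1) % len(CYCLE)]
        if cur != nxt:
            return False
    return True
```

校验不通过时,即可判定存在识别结果闪烁问题,不将第一交通信号灯状态作为识别结果输出。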
步骤S34、确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
可选的,除利用交通信号灯的灯状态变换逻辑,进行第一交通信号灯状态的验证外,本申请实施例还可以具有其他的验证方式,如使用设定时间大小的滑动时间窗将所确定的第一交通信号灯状态进行延迟输出,如果在设定时间内,所确定的交通信号灯状态与第一交通信号灯状态相同,则可确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果;如果期间所确定的交通信号灯状态发生变化,则以变化后最新确定的交通信号灯状态(即最新确定的第一交通信号灯状态),重新进行滑动时间窗的延迟输出处理;
可选的,图9示出了本申请实施例提供的确定信号灯状态识别结果的另一方法流程图,该方法可应用于车载控制终端,参照图9,该方法可以包括:
步骤S40、将所述第一交通信号灯状态加入预置的滑动时间窗,所述滑动时间窗对应时间长短为预设时间。
可选的,预设时间可以小于交通信号灯状态切换后,人体的反应时间,即本申请实施例可统计交通信号灯状态切换后(如红灯切换至绿灯时),人体的反应时间,从而设置小于该人体反应时间的设定时间,作为滑动时间窗对应的时间长短;如交通信号灯状态切换后,人体的反应一般在500ms(毫秒),本申请实施例可采用300ms等作为设定时间;显然,此处的数值仅是举例说明,人体的反应时间也可能根据统计方法的不同而调整。
步骤S41、判断所述预设时间内所确定的新的交通信号灯状态是否与第一交通信号灯状态相应,若否,执行步骤S42,若是,执行步骤S43。
滑动时间窗可以随着时间推移而滑动,并记录不断确定的最新的交通信号灯状态,如果在一个滑动时间窗对应时间长短内,发现新确定的交通信号灯状态与所述第一交通信号灯状态不相应,则可确定存在交通信号灯识别结果闪烁问题;如图10所示,在一个滑动时间窗内,如果确定的新的交通信号灯状态与第一交通信号灯状态不相应,则可能出现了交通信号灯识别结果闪烁问题;如果在一个滑动时间窗对应时间长短内,新确定的交通信号灯状态与所述第一交通信号灯状态相应,则可确定所确定的第一交通信号灯状态稳定,可以进行输出。
步骤S42、确定第一交通信号灯状态不为,所确定的交通信号灯状态识别结果。
步骤S43、确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
通过滑动时间窗控制对所确定的第一交通信号灯状态进行延迟过滤,如滑动时间窗的时间长短为300ms(数值仅为举例说明,可以根据实际需要设定滑动时间窗的时间长短,即根据实际需要对设定时间进行设置),即状态发生变化300ms后相对稳定时再输出第一交通信号灯状态,来作为交通信号灯状态识别结果,避免300ms内状态反复跳跃,可以平滑由于雾霾、光线影响所造成的交通信号灯识别结果闪烁问题。
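上述滑动时间窗延迟输出的过滤逻辑,可用如下示意性草图表达;其中300ms仅为示例数值,SlidingWindowFilter等名称为假设的说明性命名,并非本申请实施例限定的实现:

```python
# 示意性草图:用设定时长的滑动时间窗对第一交通信号灯状态做延迟输出,
# 仅当状态在窗口时长内保持不变时才作为识别结果输出,以平滑闪烁。

class SlidingWindowFilter:
    def __init__(self, hold_ms=300):
        self.hold_ms = hold_ms     # 滑动时间窗对应的时间长短(毫秒),示例数值
        self.state = None          # 当前跟踪的灯状态
        self.since_ms = None       # 该状态最早出现的时间戳

    def update(self, state, now_ms):
        """输入当前帧确定的第一交通信号灯状态及时间戳(毫秒);
        状态稳定满 hold_ms 时返回该状态,否则返回 None(暂不输出)。"""
        if state != self.state:
            self.state, self.since_ms = state, now_ms  # 状态跳变,重新计时
            return None
        if now_ms - self.since_ms >= self.hold_ms:
            return state
        return None
```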
可选的,当相邻两个路口的距离小于一定距离时,图像采集装置采集的图像中可能会涵盖相邻两个路口的交通信号灯图像,此时需要过滤下一路口的交通信号灯信号,以降低识别干扰;针对此,图11示出了本申请实施例提供的确定信号灯状态识别结果的再一方法流程图,该方法可应用于车载控制终端,参照图11,该方法可以包括:
步骤S50、获取前方最近路口的交通信号灯的属性信息。
可选的,交通信号灯的属性信息包括静态和动态两类属性,静态属性指灯组的形状、排列方式(横向还是竖向),灯体的个数(常见的有1个或者3个,其中3个的情况可以认为是交通信号灯组的一种);动态属性指每个灯体的颜色,如竖向三个排列的灯体分别对应的颜色是红、黄、绿,以及每个灯体当前状态(亮或者暗)等。
本申请实施例可以根据车载控制终端的当前位置,查询地图数据,获取前方最近路口的位置,并根据预置的路口位置与交通信号灯的属性信息的对应关系,获取与前方最近路口的位置相应的交通信号灯的属性信息,以获取到前方最近路口的交通信号灯的属性信息。
步骤S51、根据所述交通信号灯图像区域,确定所述交通信号灯图像区域所表示的交通信号灯的属性信息。
可选的,本申请实施例可以采用图形识别等技术,对所述交通信号灯图像区域进行处理,确定出交通信号灯图像区域所表示的交通信号灯的排列,个数等属性信息。
步骤S52、判断所确定的属性信息,与所述前方最近路口的交通信号灯的属性信息是否匹配,若否,执行步骤S53,若是,执行步骤S54。
所确定的属性信息,与前方最近路口的交通信号灯的属性信息不相匹配,说明所述交通信号灯图像区域所表示的交通信号灯,可能不是前方最近路口的交通信号灯,需要进行过滤,相应的,不能将相应确定的第一交通信号灯状态,作为交通信号灯状态识别结果。
步骤S53、确定第一交通信号灯状态不为,所确定的交通信号灯状态识别结果。
步骤S54、确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
可选的,在基于CNN特征确定出第一交通信号灯状态后,本申请实施例还可采用基于色度的颜色识别方法(如基于HSV特征识别等),对所述交通信号灯图像区域进行二次处理,如果二次识别的结果与第一交通信号灯状态相应(一致),则可确定所确定的第一交通信号灯状态的结果较为稳定,可以作为交通信号灯状态识别结果使用;可选的,基于色度的颜色识别处理对象是已经从待识别图像中提取出的交通信号灯图像区域,对所述交通信号灯图像区域进行色度的颜色识别处理,得到所识别的交通信号灯状态,从而与第一交通信号灯状态进行匹配比对,判断所识别的交通信号灯状态与所述第一交通信号灯状态是否相应,若是,则验证通过,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果,若否,则验证不通过。
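上述基于色度的二次颜色识别,可用如下示意性草图说明;其中各颜色的色相(Hue)区间为常见经验数值,仅作假设示例,实际阈值可根据需要设定,并非本申请实施例限定的实现:

```python
import colorsys

# 示意性草图:对交通信号灯图像区域的像素做基于色度(HSV的H通道)的投票识别,
# 并与CNN确定的第一交通信号灯状态比对,作为一种验证方式。

def dominant_state_by_hue(rgb_pixels):
    """rgb_pixels: [(r, g, b), ...],取值0~255;按色相投票返回 red/yellow/green。"""
    votes = {"red": 0, "yellow": 0, "green": 0}
    for r, g, b in rgb_pixels:
        h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        deg = h * 360
        if deg < 20 or deg > 340:      # 红色色相区间(经验值)
            votes["red"] += 1
        elif 40 <= deg <= 70:          # 黄色色相区间(经验值)
            votes["yellow"] += 1
        elif 90 <= deg <= 150:         # 绿色色相区间(经验值)
            votes["green"] += 1
    return max(votes, key=votes.get)

def verify_first_state(first_state, rgb_pixels):
    """二次识别结果与第一交通信号灯状态一致时验证通过。"""
    return dominant_state_by_hue(rgb_pixels) == first_state
```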
上述列出的多种对第一交通信号灯状态的验证方式可以是并行的,择一使用;也可以是其中的至少一种结合使用,并在所结合的各种验证方式的验证结果均为通过时,才将第一交通信号灯状态确定为交通信号灯状态识别结果。
可选的,基于本申请实施例所确定的交通信号灯状态识别结果,可以实现前方路口通行状态的预测,以提升机动车自动驾驶,导航等领域的效果;可选的,在确定交通信号灯状态识别结果后,本申请实施例可结合图像采集装置的标定参数,当前位置及前方路口的交通信号灯的三维位置坐标,确定前方路口各方向的通行状态(如对于十字路口,前方路口可能会给出直行、右转、左转的提示,每一个提示对应的交通信号灯均具有红绿黄的灯状态变化,此时需要确定出前方路口各方向对应的交通信号灯,并将所确定的交通信号灯状态识别结果,与前方路口各方向对应的交通信号灯匹配对应上);
可选的,图12示出了本申请实施例提供的前方路口通行状态的预测方法流程图,该方法可应用于车载控制终端,参照图12,该方法可以包括:
步骤S60、获取当前位置,以及前方路口的交通信号灯组中指示各方向的交通信号灯的三维位置坐标。
可选的,当前位置可以是机动车的当前位置,由车载控制终端定位得出;本申请实施例可定义各个路口位置相应的交通信号灯的三维位置坐标,从而根据前方路口的路口位置匹配出相应的交通信号灯的三维位置坐标;该三维位置坐标可以包括前方路口的交通信号灯组中指示各方向的交通信号灯的三维位置;可选的,交通信号灯的三维位置坐标可涵盖在交通信号灯的属性信息中。
步骤S61、根据所述当前位置与所述三维位置坐标,确定机动车与指示各方向的交通信号灯的相对位置。
在确定机动车的当前位置,与前方路口指示各方向的交通信号灯的三维位置坐标后,可确定出机动车分别与指示各方向的交通信号灯的相对位置。
步骤S62、根据目标图像采集装置的标定参数,确定机动车与交通信号灯的相对位置,在待识别图像中位置的转换关系。
可选的,图像采集装置的标定参数指的是标定图像采集装置(如摄像机)的内外参数,根据此标定参数,可以换算出机动车与交通信号灯的相对位置,在待识别图像相应的坐标系中位置的转换关系。
步骤S63、根据所述转换关系,将机动车与指示各方向的交通信号灯的相对位置,转换为,指示各方向的交通信号灯在待识别图像中转换的位置。
根据步骤S62所确定的转换关系,可以根据实际世界坐标系中机动车与交通信号灯的相对位置,转换成在二维图像平面中对应的位置,得到在相应采集的待识别图像中的位置。
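上述利用标定参数将机动车与交通信号灯的相对位置换算到待识别图像中的过程,可按针孔相机模型用如下示意性草图表达;其中内参K、外参R与t的数值均为假设示例,实际取自目标图像采集装置的标定结果:

```python
import numpy as np

# 示意性草图:利用目标图像采集装置的标定参数(内参K、外参R/t),
# 将车体坐标系下交通信号灯的相对位置投影到待识别图像的像素坐标(针孔相机模型)。

def project_to_image(point_vehicle, K, R, t):
    """point_vehicle: 车体坐标系下的三维点 (3,);返回图像像素坐标 (u, v)。"""
    p_cam = R @ np.asarray(point_vehicle, dtype=float) + t  # 车体系 -> 相机系
    uvw = K @ p_cam                                          # 相机系 -> 像素齐次坐标
    return uvw[0] / uvw[2], uvw[1] / uvw[2]                  # 齐次归一化
```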
可选的,步骤S60至步骤S63可以认为是获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置的可选实现方式,本申请实施例并不排除其他的将前方路口指示各方向的交通信号灯的三维位置坐标,转换为在待识别图像中的位置的方式。
步骤S64、根据前方路口指示各方向的交通信号灯在待识别图像中转换的位置,与交通信号灯状态识别结果中各个交通信号灯的灯状态,确定指示前方路口各方向的交通信号灯的灯状态。
可选的,在得到前方路口指示各方向的交通信号灯在待识别图像中转换的位置后,可从交通信号灯状态识别结果中匹配出各位置相应的灯状态,确定出指示前方路口各方向的交通信号灯的灯状态。
步骤S65、根据指示前方路口各方向的交通信号灯的灯状态,确定前方路口各方向的通行状态预测结果。
相应的,本申请实施例可通过指示前方路口各方向的交通信号灯的灯状态,预测出前方路口各方向的通行状态;此处,本申请实施例可以预置前方路口各方向的通行状态模板,根据所确定的指示前方路口各方向的交通信号灯的灯状态,在通行状态模板中填充前方路口各方向相应的通行状态,实现前方路口各方向的通行状态的预测;一般而言,前方路口一个方向的交通信号灯的灯状态为绿灯亮,则前方路口该方向的通行状态为允许通行,前方路口一个方向的交通信号灯的灯状态为红灯亮,则前方路口该方向的通行状态为禁止通行。
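上述按通行状态模板填充各方向通行状态的逻辑,可用如下示意性草图表达;其中黄灯按禁止通行处理仅为保守的假设策略,方向名称亦为说明性命名:

```python
# 示意性草图:由指示前方路口各方向的交通信号灯灯状态,填充各方向的通行状态模板。

def predict_passage_states(direction_states):
    """direction_states: {方向: 灯状态},如 {"straight": "green", "left": "red"};
    绿灯亮 -> 允许通行,红灯亮 -> 禁止通行,黄灯亮按禁止通行(保守的假设策略)。"""
    mapping = {"green": "allowed", "red": "forbidden", "yellow": "forbidden"}
    return {d: mapping.get(s, "unknown") for d, s in direction_states.items()}
```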
值得注意的是,图12所示是针对十字路口等前方路口是多方向行驶路口的情况,给出前方路口各方向的通行状态的预测方案,是从所确定的交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态,从而根据前方路口各个方向的交通信号灯的灯状态,预测前方路口各个方向的通行状态的一种可选方式;
显然,针对单行方向的路口,前方路口仅有一个通行方向,此时可以直接根据所确定的交通信号灯状态识别结果,给出前方路口的通行状态的预测结果,而不需要确定所确定的交通信号灯状态识别结果中,前方路口各个方向的交通信号灯的灯状态的手段。
本申请实施例提供的交通信号灯状态识别方法的一种可选应用可以如图13所示,例如,本申请实施例可设置3个摄像头,且分属近焦、中焦、远焦等级,如摄像头1为近焦等级的摄像头,焦距在第一焦距范围(例如5mm至12.5mm(毫米)),清晰可视距离在第一距离范围(例如4m至30m(米)范围)内,摄像头2为中焦等级的摄像头,焦距在第二焦距范围(例如12.5mm至25mm),清晰可视距离在第二距离范围(例如30m至80m范围)内,摄像头3为远焦等级的摄像头,焦距在第三焦距范围(例如25mm至50mm),清晰可视距离在第三距离范围(例如80m至160m范围)内;值得注意的是,此处涉及的数值仅是可选示例,此处描述的近焦、中焦、远焦等级相应的焦距数值和清晰可视距离举例数值,仅是本申请实施例为预置的多个图像采集装置分别对应不同的焦距等级,且设置各焦距等级对应的焦距范围和清晰可视距离范围的一种可选方式;
从而在自动驾驶汽车距前方最近停止线的距离为第三距离范围的阶段时,启用摄像头3作为目标图像采集装置,获取其采集的待识别图像进行处理,在自动驾驶汽车距前方最近停止线的距离为第二距离范围的阶段时,启用摄像头2作为目标图像采集装置,获取其采集的待识别图像进行处理,在自动驾驶汽车距前方最近停止线的距离为第一距离范围的阶段时,启用摄像头1作为目标图像采集装置,获取其采集的待识别图像进行处理;
对于上文任一阶段获取到的各个待识别图像,本申请实施例可以采用预先训练的交通信号灯识别模型,识别待识别图像中的交通信号灯图像区域,并提取交通信号灯图像区域中的CNN特征,通过预训练的Softmax分类器识别出CNN特征表示的交通信号灯状态,得到当前确定的第一交通信号灯状态;此处的第一交通信号灯状态的可能形式是:单个交通信号灯情况下的灯状态,或者,多个交通信号灯组成的灯组情况下,各个交通信号灯的灯状态(此时还可确定各个交通信号灯在待识别图像中的位置);
从而通过前方路口的交通信号灯的灯状态变换逻辑,和/或,预置的滑动时间窗,和/或,前方最近路口的交通信号灯的属性信息,和/或,基于色度的颜色识别对交通信号灯图像区域的二次处理,对第一交通信号灯状态进行验证,并在验证通过后,将第一交通信号灯状态确定为交通信号灯状态识别结果;
如果前方路口为单行路口,则可将交通信号灯状态识别结果表示的灯状态,作为前方路口通行状态的预测依据(如绿灯前行,或者红灯禁行);
如果前方路口为十字路口等多方向行驶路口,则可从交通信号灯状态识别结果中,确定前方路口各方向的交通信号灯对应的灯状态,并给出前方路口各方向的通行状态预测;可选的,前方路口的路口类型是单行路口,还是多方向行驶路口可以预置,如设置路口的路口位置或路口标记,与路口类型的对应关系,根据前方路口的路口位置或路口标记,获取相应的路口类型。
本申请实施例提供的交通信号灯状态识别方法,可以降低环境复杂的光线变化对于交通信号灯状态识别的准确率的影响,提升交通信号灯状态识别的准确率;同时,可应用于机动车自动驾驶,导航等领域,实现前方路口通行状态的准确预测,提升机动车自动驾驶,导航的应用效果。
下面对本申请实施例提供的交通信号灯状态识别装置进行介绍,下文描述的交通信号灯状态识别装置可与,上文描述的交通信号灯状态识别方法相互对应参照。下文描述的交通信号灯状态识别装置可以认为是,车载控制终端为实现本申请实施例提供的交通信号灯状态识别方法,所需设置的程序模块,这些程序模块的功能可以通过车载控制终端装载的程序实现。
图14为本申请实施例提供的信号灯状态识别装置的结构框图,该装置可应用于车载控制终端,参照图14,该装置可以包括:
图像获取模块100,设置为获取目标图像采集装置采集的待识别图像;
区域识别模块200,设置为识别所述待识别图像中的交通信号灯图像区域;
特征提取模块300,设置为提取所述交通信号灯图像区域的卷积神经网络CNN特征;
第一灯状态确定模块400,设置为根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;
识别结果确定模块500,设置为根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
可选的,区域识别模块200,设置为识别所述待识别图像中的交通信号灯图像区域,具体包括:
根据预训练的交通信号灯识别模型,识别所述待识别图像中的交通信号灯图像区域;所述交通信号灯识别模型根据机器学习方法训练正样本和负样本得到,其中,正样本为从多个街景图像中标注具有交通信号灯的图像,负样本为从多个街景图像中标注的街景背景图像。
可选的,第一灯状态确定模块400,设置为根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态,具体包括:
根据预训练的交通信号灯状态分类模型,确定所述CNN特征相应的交通信号灯状态,得到所述交通信号灯图像区域表示的第一交通信号灯状态;其中,交通信号灯状态分类模型根据各交通信号灯状态的交通信号灯图像的CNN特征训练得到。
可选的,图像获取模块100,设置为获取目标图像采集装置采集的待识别图像,具体包括:
从预置的多个图像采集装置中选择目标图像采集装置,获取所述目标图像采集装置采集的待识别图像;其中,所述多个图像采集装置分别对应不同的焦距等级,一个焦距等级对应一个焦距范围,焦距等级越高,焦距范围对应的焦距数值越大,对应的清晰可视距离范围越高。
可选的,图像获取模块100,设置为从预置的多个图像采集装置中选择目标图像采集装置,具体包括:
确定当前位置距前方最近停止线的距离;
根据预置的各图像采集装置对应的距离范围,确定当前位置距前方最近停止线的距离所处距离范围对应的图像采集装置,选择出目标图像采集装置。
可选的,图15示出了本申请实施例提供的交通信号灯状态识别装置的另一结构框图,结合图14和图15所示,该装置还可以包括:
距离范围设置模块600,设置为根据各图像采集装置的焦距,各图像采集装置的清晰可视距离,以及距停止线的各距离下,交通信号灯在各图像采集装置所采集的图像内的像素数,确定各图像采集装置对应的距离范围。
可选的,图像获取模块100,设置为确定当前位置距前方最近停止线的距离,具体包括:
定位当前位置;
根据所述当前位置向地图服务器发送查询请求,所述查询请求用于请求查询当前位置距前方最近停止线的距离;
接收所述地图服务器反馈的所查询到的距离。
可选的,图像获取模块100可以是在当前位置距前方最近停止线的距离,小于预定距离限值时,触发执行获取目标图像采集装置采集的待识别图像的步骤。
可选的,识别结果确定模块500,设置为根据所述第一交通信号灯状态,确定交通信号灯状态识别结果,具体包括:
对所述第一交通信号灯状态进行验证,如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果。
可选的,识别结果确定模块500所采用的验证方式和过程可以选用如下至少一种:
一、获取前方路口的交通信号灯的灯状态变换逻辑;获取连续确定的交通信号灯状态,所述连续确定的交通信号灯状态包括所述第一交通信号灯状态;判断所述连续确定的交通信号灯状态的灯状态变换逻辑,是否与前方路口的交通信号灯的灯状态变换逻辑相匹配;若是,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果;
二、将所述第一交通信号灯状态加入预置的滑动时间窗,所述滑动时间窗对应时间长短为预设时间;判断所述预设时间内所确定的新的交通信号灯状态,是否与第一交通信号灯状态相应;若是,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果;
三、获取前方最近路口的交通信号灯的属性信息;根据所述交通信号灯图像区域,确定所述交通信号灯图像区域所表示的交通信号灯的属性信息;判断所确定的属性信息,与所述前方最近路口的交通信号灯的属性信息是否匹配;若是,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果;
四、对所述交通信号灯图像区域进行色度的颜色识别处理,得到所识别的交通信号灯状态;判断所识别的交通信号灯状态与所述第一交通信号灯状态是否相应;若是,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
可选的,图16示出了本申请实施例提供的信号灯状态识别装置的再一结构框图,结合图14和图16所示,该装置还可以包括:
通行状态预测模块700,设置为如果前方路口为多方向行驶路口,从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态;根据指示前方路口各方向的交通信号灯的灯状态,确定前方路口各方向的通行状态预测结果。
可选的,通行状态预测模块700,设置为从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态,具体包括:
获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置;
根据前方路口指示各方向的交通信号灯在待识别图像中转换的位置,与交通信号灯状态识别结果中各个交通信号灯的灯状态,确定指示前方路口各方向的交通信号灯的灯状态。
可选的,通行状态预测模块700,设置为获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置,具体包括:
获取当前位置,以及前方路口的交通信号灯组中指示各方向的交通信号灯的三维位置坐标;
根据所述当前位置与所述三维位置坐标,确定机动车与指示各方向的交通信号灯的相对位置;
根据目标图像采集装置的标定参数,确定机动车与交通信号灯的相对位置,在待识别图像中位置的转换关系;
根据所述转换关系,将机动车与指示各方向的交通信号灯的相对位置,转换为,指示各方向的交通信号灯在待识别图像中转换的位置。
上文描述的程序模块可以程序形式装载在车载控制终端中,可选的,图17示出了车载控制终端的硬件结构框图,参照图17,该车载控制终端至少可以包括:至少一个处理器1,至少一个通信接口2,至少一个存储器3和至少一个通信总线4;显然,车载控制终端还可以具有其他的硬件,如显示屏、蓝牙等通信模块、麦克风、摄像头等,具体可视车载控制终端的需求而扩展其硬件;
在本申请实施例中,处理器1、通信接口2、存储器3、通信总线4的数量为至少一个,且处理器1、通信接口2、存储器3通过通信总线4完成相互间的通信;
可选的,通信接口2可以为通信模块的接口,如GSM模块的接口;
处理器1可能是一个中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本申请实施例的一个或多个集成电路。
存储器3可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
其中,存储器3存储有程序,处理器1调用存储器3所存储的程序,该程序具体用于:
获取目标图像采集装置采集的待识别图像;
识别所述待识别图像中的交通信号灯图像区域;
提取所述交通信号灯图像区域的卷积神经网络CNN特征;
根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;
根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
可选的,程序的功能实现细节,和扩展程序功能可参照上文相应内容。
本申请实施例还提供一种机动车,该机动车的结构可以参照图1,在本申请实施例中,该机动车可以包括至少一个图像采集装置,车载控制终端;
其中,所述至少一个图像采集装置设置为采集车身前方的待识别图像;
所述车载控制终端,设置为获取目标图像采集装置采集的待识别图像,所述目标图像采集装置包含于所述至少一个图像采集装置中;识别所述待识别图像中的交通信号灯图像区域;提取所述交通信号灯图像区域的卷积神经网络CNN特征;根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
其中,车载控制终端的功能细节和扩展功能,可参照上文相应部分描述。
本申请实施例提供的机动车能够提升交通信号灯状态识别的准确率,为提升自动驾驶性能提供可能。
需要说明的是,本发明实施例中涉及的信号灯,可以包括交通信号灯,也可以包括其他类型的信号灯,本发明实施例对此不作限定。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中。
本申请实施例还提供了一种存储介质。该存储介质包括存储的程序,其中,程序运行时执行本申请的信号灯状态识别方法。可选地,在本实施例中,上述存储介质中存储有计算机程序,其中,所述计算机程序被设置为运行时可以用于执行信号灯状态识别方法。
可选地,在本实施例中,上述存储介质可以位于上述实施例所示的网络中的多个网络设备中的至少一个网络设备上。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
获取目标图像采集装置采集的待识别图像;
识别所述待识别图像中的交通信号灯图像区域;
提取所述交通信号灯图像区域的CNN特征;
根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;
根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述识别所述待识别图像中的交通信号灯图像区域包括:
根据预训练的交通信号灯识别模型,识别所述待识别图像中的交通信号灯图像区域;所述交通信号灯识别模型根据机器学习方法训练正样本和负样本得到,其中,正样本为从多个街景图像中标注的具有交通信号灯的图像,负样本为从多个街景图像中标注的街景背景图像。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态包括:
根据预训练的交通信号灯状态分类模型,确定所述CNN特征相应的交通信号灯状态,得到所述交通信号灯图像区域表示的第一交通信号灯状态;其中,交通信号灯状态分类模型根据各交通信号灯状态的交通信号灯图像的CNN特征训练得到。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述获取目标图像采集装置采集的待识别图像包括:
从预置的多个图像采集装置中选择目标图像采集装置,获取所述目标图像采集装置采集的待识别图像;其中,所述多个图像采集装置分别对应不同的焦距等级,一个焦距等级对应一个焦距范围,焦距等级越高,焦距范围对应的焦距数值越大,对应的清晰可视距离范围越高。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述从预置的多个图像采集装置中选择目标图像采集装置包括:
确定当前位置距前方最近停止线的距离;
根据预置的各图像采集装置对应的距离范围,确定当前位置距前方最近停止线的距离所处距离范围对应的图像采集装置,选择出目标图像采集装置。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述各图像采集装置对应的距离范围的预置过程包括:
根据各图像采集装置的焦距,各图像采集装置的清晰可视距离,以及距停止线的各距离下,交通信号灯在各图像采集装置所采集的图像内的像素数,确定各图像采集装置对应的距离范围。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述确定当前位置距前方最近停止线的距离包括:
定位当前位置;
根据所述当前位置向地图服务器发送查询请求,所述查询请求用于请求查询当前位置距前方最近停止线的距离;
接收所述地图服务器反馈的所查询到的距离。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,在当前位置距前方最近停止线的距离,小于预定距离限值时,触发执行获取目标图像采集装置采集的待识别图像的步骤。
所述根据所述第一交通信号灯状态,确定交通信号灯状态识别结果包括:
对所述第一交通信号灯状态进行验证,如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果。
所述对所述第一交通信号灯状态进行验证包括:
获取前方路口的交通信号灯的灯状态变换逻辑;
获取连续确定的交通信号灯状态,所述连续确定的交通信号灯状态包括所述第一交通信号灯状态;
判断所述连续确定的交通信号灯状态的灯状态变换逻辑,是否与前方路口的交通信号灯的灯状态变换逻辑相匹配;
所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
如果所述连续确定的交通信号灯状态的灯状态变换逻辑,与前方路口的交通信号灯的灯状态变换逻辑相匹配,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
所述对所述第一交通信号灯状态进行验证包括:
将所述第一交通信号灯状态加入预置的滑动时间窗,所述滑动时间窗对应时间长短为预设时间;
判断所述预设时间内所确定的新的交通信号灯状态,是否与第一交通信号灯状态相应;
所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
如果所述预设时间内所确定的新的交通信号灯状态,与所述第一交通信号灯状态相应,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
所述对所述第一交通信号灯状态进行验证包括:
获取前方最近路口的交通信号灯的属性信息;
根据所述交通信号灯图像区域,确定所述交通信号灯图像区域所表示的交通信号灯的属性信息;
判断所确定的属性信息,与所述前方最近路口的交通信号灯的属性信息是否匹配;
所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
如果所确定的属性信息,与所述前方最近路口的交通信号灯的属性信息相匹配,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
所述对所述第一交通信号灯状态进行验证包括:
对所述交通信号灯图像区域进行色度的颜色识别处理,得到所识别的交通信号灯状态;
判断所识别的交通信号灯状态与所述第一交通信号灯状态是否相应;
所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
如果所识别的交通信号灯状态与所述第一交通信号灯状态相应,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,如果前方路口为多方向行驶路口,从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态;
根据指示前方路口各方向的交通信号灯的灯状态,确定前方路口各方向的通行状态预测结果。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态包括:
获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置;
根据前方路口指示各方向的交通信号灯在待识别图像中转换的位置,与交通信号灯状态识别结果中各个交通信号灯的灯状态,确定指示前方路口各方向的交通信号灯的灯状态。
可选地,在上述被设置为存储用于执行上述步骤的程序代码中,所述获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置包括:
获取当前位置,以及前方路口的交通信号灯组中指示各方向的交通信号灯的三维位置坐标;
根据所述当前位置与所述三维位置坐标,确定机动车与指示各方向的交通信号灯的相对位置;
根据目标图像采集装置的标定参数,确定机动车与交通信号灯的相对位置,在待识别图像中位置的转换关系;
根据所述转换关系,将机动车与指示各方向的交通信号灯的相对位置,转换为,指示各方向的交通信号灯在待识别图像中转换的位置。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的核心思想或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。
工业实用性
基于上述技术方案,本申请实施例提供的交通信号灯状态识别方法中,车载控制终端可获取目标图像采集装置采集的待识别图像,识别所述待识别图像中的交通信号灯图像区域,从而提取出所述交通信号灯图像区域的CNN特征,根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态,并根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。由于CNN特征是基于海量丰富的样本训练提取得到,可抵抗尺度变换、颜色变换、光线变换等多种影响,因此使用CNN特征实现交通信号灯图像区域中的图像特征提取,并基于CNN特征确定交通信号灯图像区域表示的交通信号灯状态,可以降低环境复杂的光线变化对于交通信号灯状态识别的准确率的影响,提升交通信号灯状态识别的准确率。

Claims (23)

  1. 一种信号灯状态识别方法,包括:
    获取目标图像采集装置采集的待识别图像;
    识别所述待识别图像中的交通信号灯图像区域;
    提取所述交通信号灯图像区域的卷积神经网络CNN特征;
    根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;
    根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
  2. 根据权利要求1所述的信号灯状态识别方法,其中,所述识别所述待识别图像中的交通信号灯图像区域包括:
    根据预训练的交通信号灯识别模型,识别所述待识别图像中的交通信号灯图像区域;所述交通信号灯识别模型根据机器学习方法训练正样本和负样本得到,其中,正样本为从多个街景图像中标注的具有交通信号灯的图像,负样本为从多个街景图像中标注的街景背景图像。
  3. 根据权利要求1所述的信号灯状态识别方法,其中,所述根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态包括:
    根据预训练的交通信号灯状态分类模型,确定所述CNN特征相应的交通信号灯状态,得到所述交通信号灯图像区域表示的第一交通信号灯状态;其中,交通信号灯状态分类模型根据各交通信号灯状态的交通信号灯图像的CNN特征训练得到。
  4. 根据权利要求1所述的信号灯状态识别方法,其中,所述获取目标图像采集装置采集的待识别图像包括:
    从预置的多个图像采集装置中选择目标图像采集装置,获取所述目标图像采集装置采集的待识别图像;其中,所述多个图像采集装置分别对应不同的焦距等级,一个焦距等级对应一个焦距范围,焦距等级越高,焦距范围对应的焦距数值越大,对应的清晰可视距离范围越高。
  5. 根据权利要求4所述的信号灯状态识别方法,其中,所述从预置的多个图像采集装置中选择目标图像采集装置包括:
    确定当前位置距前方最近停止线的距离;
    根据预置的各图像采集装置对应的距离范围,确定当前位置距前方最近停止线的距离所处距离范围对应的图像采集装置,选择出目标图像采集装置。
  6. 根据权利要求5所述的信号灯状态识别方法,其中,所述各图像采集装置对应的距离范围的预置过程包括:
    根据各图像采集装置的焦距,各图像采集装置的清晰可视距离,以及距停止线的各距离下,交通信号灯在各图像采集装置所采集的图像内的像素数,确定各图像采集装置对应的距离范围。
  7. 根据权利要求5所述的信号灯状态识别方法,其中,所述确定当前位置距前方最近停止线的距离包括:
    定位当前位置;
    根据所述当前位置向地图服务器发送查询请求,所述查询请求用于请求查询当前位置距前方最近停止线的距离;
    接收所述地图服务器反馈的所查询到的距离。
  8. 根据权利要求1-7任一项所述的信号灯状态识别方法,其中,所述方法还包括:
    在当前位置距前方最近停止线的距离,小于预定距离限值时,触发执行获取目标图像采集装置采集的待识别图像的步骤。
  9. 根据权利要求1所述的信号灯状态识别方法,其中,所述根据所述第一交通信号灯状态,确定交通信号灯状态识别结果包括:
    对所述第一交通信号灯状态进行验证,如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果。
  10. 根据权利要求9所述的信号灯状态识别方法,其中,所述对所述第一交通信号灯状态进行验证包括:
    获取前方路口的交通信号灯的灯状态变换逻辑;
    获取连续确定的交通信号灯状态,所述连续确定的交通信号灯状态包括所述第一交通信号灯状态;
    判断所述连续确定的交通信号灯状态的灯状态变换逻辑,是否与前方路口的交通信号灯的灯状态变换逻辑相匹配;
    所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
    如果所述连续确定的交通信号灯状态的灯状态变换逻辑,与前方路口的交通信号灯的灯状态变换逻辑相匹配,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
  11. 根据权利要求9所述的信号灯状态识别方法,其中,所述对所述第一交通信号灯状态进行验证包括:
    将所述第一交通信号灯状态加入预置的滑动时间窗,所述滑动时间窗对应时间长短为预设时间;
    判断所述预设时间内所确定的新的交通信号灯状态,是否与第一交通信号灯状态相应;
    所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
    如果所述预设时间内所确定的新的交通信号灯状态,与所述第一交通信号灯状态相应,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
  12. 根据权利要求9所述的信号灯状态识别方法,其中,所述对所述第一交通信号灯状态进行验证包括:
    获取前方最近路口的交通信号灯的属性信息;
    根据所述交通信号灯图像区域,确定所述交通信号灯图像区域所表示的交通信号灯的属性信息;
    判断所确定的属性信息,与所述前方最近路口的交通信号灯的属性信息是否匹配;
    所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
    如果所确定的属性信息,与所述前方最近路口的交通信号灯的属性信息相匹配,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
  13. 根据权利要求9所述的信号灯状态识别方法,其中,所述对所述第一交通信号灯状态进行验证包括:
    对所述交通信号灯图像区域进行色度的颜色识别处理,得到所识别的交通信号灯状态;
    判断所识别的交通信号灯状态与所述第一交通信号灯状态是否相应;
    所述如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果包括:
    如果所识别的交通信号灯状态与所述第一交通信号灯状态相应,确定第一交通信号灯状态为,所确定的交通信号灯状态识别结果。
  14. 根据权利要求1所述的信号灯状态识别方法,其中,还包括:
    如果前方路口为多方向行驶路口,从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态;
    根据指示前方路口各方向的交通信号灯的灯状态,确定前方路口各方向的通行状态预测结果。
  15. 根据权利要求14所述的信号灯状态识别方法,其中,所述从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态包括:
    获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置;
    根据前方路口指示各方向的交通信号灯在待识别图像中转换的位置,与交通信号灯状态识别结果中各个交通信号灯的灯状态,确定指示前方路口各方向的交通信号灯的灯状态。
  16. 根据权利要求15所述的信号灯状态识别方法,其中,所述获取前方路口指示各方向的交通信号灯在待识别图像中转换的位置包括:
    获取当前位置,以及前方路口的交通信号灯组中指示各方向的交通信号灯的三维位置坐标;
    根据所述当前位置与所述三维位置坐标,确定机动车与指示各方向的交通信号灯的相对位置;
    根据目标图像采集装置的标定参数,确定机动车与交通信号灯的相对位置,在待识别图像中位置的转换关系;
    根据所述转换关系,将机动车与指示各方向的交通信号灯的相对位置,转换为,指示各方向的交通信号灯在待识别图像中转换的位置。
  17. 一种信号灯状态识别装置,包括:
    图像获取模块,设置为获取目标图像采集装置采集的待识别图像;
    区域识别模块,设置为识别所述待识别图像中的交通信号灯图像区域;
    特征提取模块,设置为提取所述交通信号灯图像区域的卷积神经网络CNN特征;
    第一灯状态确定模块,设置为根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;
    识别结果确定模块,设置为根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
  18. 根据权利要求17所述的信号灯状态识别装置,其中,所述图像获取模块,设置为获取目标图像采集装置采集的待识别图像,具体包括:
    从预置的多个图像采集装置中选择目标图像采集装置,获取所述目标图像采集装置采集的待识别图像;其中,所述多个图像采集装置分别对应不同的焦距等级,一个焦距等级对应一个焦距范围,焦距等级越高,焦距范围对应的焦距数值越大,对应的清晰可视距离范围越高。
  19. 根据权利要求17所述的信号灯状态识别装置,其中,所述识别结果确定模块,设置为根据所述第一交通信号灯状态,确定交通信号灯状态识别结果,具体包括:
    对所述第一交通信号灯状态进行验证,如果验证结果为通过,则将所述第一交通信号灯状态作为交通信号灯状态识别结果。
  20. 根据权利要求17所述的信号灯状态识别装置,其中,还包括:
    通行状态预测模块,设置为如果前方路口为多方向行驶路口,从所述交通信号灯状态识别结果中,确定指示前方路口各个方向的交通信号灯的灯状态;根据指示前方路口各方向的交通信号灯的灯状态,确定前方路口各方向的通行状态预测结果。
  21. 一种车载控制终端,包括:存储器和处理器;
    所述存储器存储有程序,所述处理器调用所述存储器存储的程序,所述程序设置为:
    获取目标图像采集装置采集的待识别图像;
    识别所述待识别图像中的交通信号灯图像区域;
    提取所述交通信号灯图像区域的卷积神经网络CNN特征;
    根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;
    根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
  22. 一种机动车,包括:至少一个图像采集装置,车载控制终端;
    其中,所述至少一个图像采集装置设置为采集车身前方的待识别图像;
    所述车载控制终端,设置为获取目标图像采集装置采集的待识别图像,所述目标图像采集装置包含于所述至少一个图像采集装置中;识别所述待识别图像中的交通信号灯图像区域;提取所述交通信号灯图像区域的卷积神经网络CNN特征;根据所述CNN特征确定所述交通信号灯图像区域表示的第一交通信号灯状态;根据所述第一交通信号灯状态,确定交通信号灯状态识别结果。
  23. 一种存储介质,所述存储介质中存储有计算机程序,其中,所述计算机程序被设置为运行时执行所述权利要求1至16任一项中所述的方法。
PCT/CN2018/081575 2017-05-03 2018-04-02 信号灯状态识别方法、装置、车载控制终端及机动车 WO2018201835A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710304207.1A CN108804983B (zh) 2017-05-03 2017-05-03 交通信号灯状态识别方法、装置、车载控制终端及机动车
CN201710304207.1 2017-05-03

Publications (1)

Publication Number Publication Date
WO2018201835A1 true WO2018201835A1 (zh) 2018-11-08

Family

ID=64016437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/081575 WO2018201835A1 (zh) 2017-05-03 2018-04-02 信号灯状态识别方法、装置、车载控制终端及机动车

Country Status (2)

Country Link
CN (1) CN108804983B (zh)
WO (1) WO2018201835A1 (zh)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635782A (zh) * 2018-12-31 2019-04-16 天合光能股份有限公司 一种获取无人驾驶所需静态交通信息的方法
CN111009003A (zh) * 2019-10-24 2020-04-14 合肥讯图信息科技有限公司 交通信号灯纠偏的方法、系统及存储介质
CN111114424A (zh) * 2019-12-19 2020-05-08 斑马网络技术有限公司 雾灯开启方法、装置、控制设备及存储介质
CN111340890A (zh) * 2020-02-20 2020-06-26 北京百度网讯科技有限公司 相机外参标定方法、装置、设备和可读存储介质
CN111612034A (zh) * 2020-04-15 2020-09-01 中国科学院上海微系统与信息技术研究所 一种对象识别模型的确定方法、装置、电子设备及存储介质
CN112327855A (zh) * 2020-11-11 2021-02-05 东软睿驰汽车技术(沈阳)有限公司 自动驾驶车辆的控制方法、装置和电子设备
CN112866382A (zh) * 2021-01-18 2021-05-28 湖南大学 一种交通信号灯状态发布方法和系统
CN112908006A (zh) * 2021-04-12 2021-06-04 吉林大学 一种识别道路交通信号灯状态和倒计时显示器时间的方法
CN113221878A (zh) * 2021-04-26 2021-08-06 阿波罗智联(北京)科技有限公司 应用于信号灯检测的检测框调整方法、装置及路侧设备
CN113538911A (zh) * 2020-02-11 2021-10-22 北京百度网讯科技有限公司 路口距离的检测方法、装置、电子设备和存储介质
US20210383692A1 (en) * 2020-12-23 2021-12-09 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Signal lamp recognition method, device, and storage medium
US11210571B2 (en) 2020-03-13 2021-12-28 Argo AI, LLC Using rasterization to identify traffic signal devices
CN114475429A (zh) * 2022-02-21 2022-05-13 重庆长安汽车股份有限公司 一种结合用户行驶意图的红绿灯提醒方法、系统及汽车
US11436842B2 (en) 2020-03-13 2022-09-06 Argo AI, LLC Bulb mask representation for traffic light classification
CN115507865A (zh) * 2022-08-31 2022-12-23 广州文远知行科技有限公司 一种标注三维地图交通灯的方法及装置
CN116071937A (zh) * 2023-03-07 2023-05-05 山东华夏高科信息股份有限公司 一种信号灯管控方法和系统
US11704912B2 (en) 2020-06-16 2023-07-18 Ford Global Technologies, Llc Label-free performance evaluator for traffic light classifier system

Families Citing this family (21)

Publication number Priority date Publication date Assignee Title
CN111294552A (zh) * 2018-12-07 2020-06-16 浙江宇视科技有限公司 图像采集设备确定方法及装置
CN110021176B (zh) * 2018-12-21 2021-06-15 文远知行有限公司 交通灯决策方法、装置、计算机设备和存储介质
CN109544955A (zh) * 2018-12-26 2019-03-29 广州小鹏汽车科技有限公司 一种交通信号灯的状态获取方法及系统
CN109856591A (zh) * 2019-01-24 2019-06-07 腾讯科技(深圳)有限公司 移动终端的定位方法、装置、计算机可读介质及电子设备
CN111923915B (zh) * 2019-05-13 2021-11-09 上海汽车集团股份有限公司 一种交通灯智能提醒方法、装置及系统
CN112149697A (zh) * 2019-06-27 2020-12-29 商汤集团有限公司 指示灯的指示信息识别方法及装置、电子设备和存储介质
CN110543814B (zh) * 2019-07-22 2022-05-10 华为技术有限公司 一种交通灯的识别方法及装置
CN112307840A (zh) * 2019-07-31 2021-02-02 浙江商汤科技开发有限公司 指示灯检测方法、装置、设备及计算机可读存储介质
CN110598563A (zh) * 2019-08-15 2019-12-20 北京致行慕远科技有限公司 可移动设备行进的处理方法、装置及存储介质
WO2021134348A1 (zh) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 交通灯状态识别方法、装置、计算机设备和存储介质
CN111428647B (zh) * 2020-03-25 2023-07-07 浙江中控信息产业股份有限公司 一种交通信号灯故障检测方法
CN111582189B (zh) * 2020-05-11 2023-06-23 腾讯科技(深圳)有限公司 交通信号灯识别方法、装置、车载控制终端及机动车
CN112133088A (zh) * 2020-08-25 2020-12-25 浙江零跑科技有限公司 一种车辆交通辅助指示方法及系统
CN111950535B (zh) * 2020-09-23 2022-07-12 苏州科达科技股份有限公司 交通信号灯灯色、颜色识别方法、电子设备及存储介质
CN112669624B (zh) * 2021-01-22 2022-08-12 胡渐佳 基于智能导航的交通路口信号控制方法和系统
CN113065466B (zh) * 2021-04-01 2024-06-04 安徽嘻哈网络技术有限公司 一种基于深度学习的驾培用红绿灯检测系统
CN113033464B (zh) * 2021-04-10 2023-11-21 阿波罗智联(北京)科技有限公司 信号灯检测方法、装置、设备以及存储介质
CN113343872B (zh) * 2021-06-17 2022-12-13 亿咖通(湖北)技术有限公司 交通灯识别方法、装置、设备、介质及产品
CN113781778B (zh) * 2021-09-03 2022-09-06 新奇点智能科技集团有限公司 数据处理方法、装置、电子设备及可读存储介质
CN113989774A (zh) * 2021-10-27 2022-01-28 广州小鹏自动驾驶科技有限公司 一种交通灯检测方法、装置、车辆和可读存储介质
CN116503832A (zh) * 2023-03-23 2023-07-28 合众新能源汽车股份有限公司 一种基于深度学习的信号灯识别方法及装置

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101275839A (zh) * 2007-03-30 2008-10-01 爱信艾达株式会社 地物信息收集装置与地物信息收集方法
CN103679194A (zh) * 2013-11-26 2014-03-26 西安交通大学 一种基于滤光片的红绿灯识别方法
CN106023605A (zh) * 2016-07-15 2016-10-12 汤平 一种基于深度卷积神经网络的交通信号灯控制方法
CN106295605A (zh) * 2016-08-18 2017-01-04 宁波傲视智绘光电科技有限公司 红绿灯检测与识别方法
CN106570494A (zh) * 2016-11-21 2017-04-19 北京智芯原动科技有限公司 基于卷积神经网络的交通信号灯识别方法及装置

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN102117546B (zh) * 2011-03-10 2013-05-01 上海交通大学 车载交通信号灯辅助装置
CN103473946B (zh) * 2013-06-25 2017-05-10 中国计量学院 一种基于坐标的路口信号灯状态即时提示方法及系统
CN103971525B (zh) * 2014-05-28 2016-04-13 河南师范大学 一种交通信号灯系统
US10121370B2 (en) * 2014-09-20 2018-11-06 Mohamed Roshdy Elsheemy Comprehensive traffic control system

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN101275839A (zh) * 2007-03-30 2008-10-01 爱信艾达株式会社 地物信息收集装置与地物信息收集方法
CN103679194A (zh) * 2013-11-26 2014-03-26 西安交通大学 一种基于滤光片的红绿灯识别方法
CN106023605A (zh) * 2016-07-15 2016-10-12 汤平 一种基于深度卷积神经网络的交通信号灯控制方法
CN106295605A (zh) * 2016-08-18 2017-01-04 宁波傲视智绘光电科技有限公司 红绿灯检测与识别方法
CN106570494A (zh) * 2016-11-21 2017-04-19 北京智芯原动科技有限公司 基于卷积神经网络的交通信号灯识别方法及装置

Non-Patent Citations (1)

Title
V. JOHN ET AL.: "Saliency Map Generation by the Convolutional Neural Network for Real-Time Traffic Light Detection Using Template Matching", IEEE TRANS .COMPUTATIONAL IMAGING, vol. 1, no. 3, 30 September 2015 (2015-09-30), pages 159 - 173, XP011588186 *

Cited By (24)

Publication number Priority date Publication date Assignee Title
CN109635782A (zh) * 2018-12-31 2019-04-16 天合光能股份有限公司 一种获取无人驾驶所需静态交通信息的方法
CN111009003A (zh) * 2019-10-24 2020-04-14 合肥讯图信息科技有限公司 交通信号灯纠偏的方法、系统及存储介质
CN111009003B (zh) * 2019-10-24 2023-04-28 合肥讯图信息科技有限公司 交通信号灯纠偏的方法、系统及存储介质
CN111114424A (zh) * 2019-12-19 2020-05-08 斑马网络技术有限公司 雾灯开启方法、装置、控制设备及存储介质
CN113538911A (zh) * 2020-02-11 2021-10-22 北京百度网讯科技有限公司 路口距离的检测方法、装置、电子设备和存储介质
CN111340890A (zh) * 2020-02-20 2020-06-26 北京百度网讯科技有限公司 相机外参标定方法、装置、设备和可读存储介质
CN111340890B (zh) * 2020-02-20 2023-08-04 阿波罗智联(北京)科技有限公司 相机外参标定方法、装置、设备和可读存储介质
US11210571B2 (en) 2020-03-13 2021-12-28 Argo AI, LLC Using rasterization to identify traffic signal devices
US11670094B2 (en) 2020-03-13 2023-06-06 Ford Global Technologies, Llc Using rasterization to identify traffic signal devices
US11436842B2 (en) 2020-03-13 2022-09-06 Argo AI, LLC Bulb mask representation for traffic light classification
CN111612034A (zh) * 2020-04-15 2020-09-01 中国科学院上海微系统与信息技术研究所 一种对象识别模型的确定方法、装置、电子设备及存储介质
CN111612034B (zh) * 2020-04-15 2024-04-12 中国科学院上海微系统与信息技术研究所 一种对象识别模型的确定方法、装置、电子设备及存储介质
EP4165526A4 (en) * 2020-06-16 2024-10-02 Argo Ai Llc LABELLESS PERFORMANCE EVALUATOR FOR TRAFFIC LIGHT CLASSIFIER SYSTEM
US11704912B2 (en) 2020-06-16 2023-07-18 Ford Global Technologies, Llc Label-free performance evaluator for traffic light classifier system
CN112327855A (zh) * 2020-11-11 2021-02-05 东软睿驰汽车技术(沈阳)有限公司 自动驾驶车辆的控制方法、装置和电子设备
US20210383692A1 (en) * 2020-12-23 2021-12-09 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Signal lamp recognition method, device, and storage medium
US11715372B2 (en) * 2020-12-23 2023-08-01 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Signal lamp recognition method, device, and storage medium
CN112866382A (zh) * 2021-01-18 2021-05-28 湖南大学 一种交通信号灯状态发布方法和系统
CN112908006A (zh) * 2021-04-12 2021-06-04 吉林大学 一种识别道路交通信号灯状态和倒计时显示器时间的方法
CN113221878A (zh) * 2021-04-26 2021-08-06 阿波罗智联(北京)科技有限公司 应用于信号灯检测的检测框调整方法、装置及路侧设备
CN114475429A (zh) * 2022-02-21 2022-05-13 重庆长安汽车股份有限公司 一种结合用户行驶意图的红绿灯提醒方法、系统及汽车
CN114475429B (zh) * 2022-02-21 2024-03-22 重庆长安汽车股份有限公司 一种结合用户行驶意图的红绿灯提醒方法、系统及汽车
CN115507865A (zh) * 2022-08-31 2022-12-23 广州文远知行科技有限公司 一种标注三维地图交通灯的方法及装置
CN116071937A (zh) * 2023-03-07 2023-05-05 山东华夏高科信息股份有限公司 一种信号灯管控方法和系统

Also Published As

Publication number Publication date
CN108804983A (zh) 2018-11-13
CN108804983B (zh) 2022-03-18

Similar Documents

Publication Publication Date Title
WO2018201835A1 (zh) 信号灯状态识别方法、装置、车载控制终端及机动车
CN107506760B (zh) 基于gps定位与视觉图像处理的交通信号检测方法及系统
Jensen et al. Vision for looking at traffic lights: Issues, survey, and perspectives
CN103345766A (zh) 一种信号灯识别方法及装置
EP1930863B1 (en) Detecting and recognizing traffic signs
TWI302879B (en) Real-time nighttime vehicle detection and recognition system based on computer vision
WO2017171659A1 (en) Signal light detection
CN111079586A (zh) 基于深度学习与双目摄像的自动驾驶目标检测系统及方法
JP2011216051A (ja) 信号灯識別プログラムおよび信号灯識別装置
CN102201059A (zh) 一种行人检测方法及装置
CN106778534B (zh) 一种车辆行驶中周围环境灯光识别方法
CN105023452B (zh) 一种多路交通信号灯信号采集的方法及装置
JP2018063680A (ja) 交通信号認識方法および交通信号認識装置
CN102902957A (zh) 一种基于视频流的自动车牌识别方法
CN107657832B (zh) 一种车位引导方法及系统
JP3621065B2 (ja) 画像検出装置、プログラムおよび記録媒体
Hakim et al. Implementation of an image processing based smart parking system using Haar-Cascade method
US10217240B2 (en) Method and system to determine distance to an object in an image
CN103324957A (zh) 信号灯状态的识别方法及识别装置
KR20180055083A (ko) 주차면 관리시스템
CN108229447B (zh) 一种基于视频流的远光灯检测方法
JP5200861B2 (ja) 標識判定装置および標識判定方法
CN113111682A (zh) 目标对象感知方法和装置、感知基站、感知系统
CN114582146A (zh) 红绿灯剩余时长智能提醒方法、系统、存储介质及汽车
CN105389993A (zh) 视觉交通信号的处理与识别方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18794342

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18794342

Country of ref document: EP

Kind code of ref document: A1