WO2021017341A1 - Method and apparatus for identifying the driving state of a smart driving device, and device (识别智能行驶设备的行驶状态的方法及装置、设备) - Google Patents

Method and apparatus for identifying the driving state of a smart driving device, and device (识别智能行驶设备的行驶状态的方法及装置、设备)

Info

Publication number
WO2021017341A1
WO2021017341A1 (PCT/CN2019/121057; CN2019121057W)
Authority
WO
WIPO (PCT)
Prior art keywords
state
driving
driving device
smart
image
Application number
PCT/CN2019/121057
Other languages
English (en)
French (fr)
Inventor
Chen Jinsheng (陈锦生)
Jiang Qinhong (蒋沁宏)
Original Assignee
Zhejiang SenseTime Technology Development Co., Ltd. (浙江商汤科技开发有限公司)
Application filed by Zhejiang SenseTime Technology Development Co., Ltd.
Priority to KR1020207036574A (publication KR20210015861A)
Priority to JP2020567963A (publication JP7074896B2)
Priority to SG11202013001RA (publication SG11202013001RA)
Priority to US17/124,940 (publication US20210103746A1)
Publication of WO2021017341A1
Priority to JP2022077972A (publication JP2022105569A)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/764: Arrangements using classification, e.g. of video objects
    • G06V 10/82: Arrangements using neural networks
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584: Recognition of vehicle lights or traffic lights

Definitions

  • The embodiments of the present application relate to the field of automatic driving technology, and in particular, but not exclusively, to methods, apparatuses, and devices for identifying the driving state of a smart driving device.
  • Vehicle-light state recognition is a component of automatic driving: by recognizing the state of a vehicle's lights, the possible state of surrounding smart driving devices, such as a left or right turn or braking, can be judged, which assists decision-making in autonomous driving.
  • The embodiments of the present application provide a method, apparatus, and device for identifying the driving state of a smart driving device.
  • An embodiment of the present application provides a method for identifying the driving state of a smart driving device.
  • The method includes: determining the body orientation of the smart driving device according to an image to be processed that contains the smart driving device; determining, according to the image to be processed, the state of a first driving state indicator light included on the smart driving device; and determining the driving state of the smart driving device according to the body orientation and the state of the first driving state indicator light.
  • An embodiment of the present application provides an apparatus for recognizing the driving state of a smart driving device, the apparatus including: a first determining module configured to determine the body orientation of the smart driving device according to an image to be processed that contains the smart driving device; a second determining module configured to determine, according to the image to be processed, the state of the first driving status indicator light included on the smart driving device; and a third determining module configured to determine the driving state of the smart driving device according to the body orientation and the state of the first driving status indicator light.
  • An embodiment of the present application provides a computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.
  • An embodiment of the present application provides a computer device including a memory and a processor, where the memory stores computer-executable instructions, and the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented when the processor runs the computer-executable instructions on the memory.
  • An embodiment of the present application provides a computer program product, which includes computer-executable instructions; after the computer-executable instructions are executed, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.
  • In the embodiments of the present application, the task of identifying the driving state of the smart driving device is subdivided into multiple subtasks: the body orientation of the smart driving device and the state of the first driving state indicator light on the smart driving device are identified separately, and the two recognition results are then combined to determine the driving state of the smart driving device. This reduces the difficulty of the recognition task and improves the accuracy of the driving state recognition.
  • FIG. 1A is a schematic diagram of an implementation process of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
  • FIG. 1B is a schematic diagram of another implementation process of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
  • FIG. 1C is a schematic diagram of another implementation process of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
  • FIG. 2A is a schematic diagram of another implementation process of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
  • FIG. 2B is a schematic diagram of another implementation process of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
  • FIG. 2C is a scene diagram of a smart driving device according to an embodiment of the present application;
  • FIG. 2D is a schematic diagram of an implementation process of the neural network training method according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of another implementation process of the neural network training method according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the composition structure of the apparatus for identifying the driving state of a smart driving device according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the composition structure of a computer device according to an embodiment of the present application.
  • This embodiment provides a method for identifying the driving state of a smart driving device, applied to a computer device. The computer device may be a smart driving device or a device without smart driving capability. The functions implemented by the method can be realized by a processor in the computer device calling program code, and the program code can be stored in a computer storage medium. It can be seen that the computer device includes at least a processor and a storage medium.
  • FIG. 1A is a schematic diagram of the implementation process of the method for identifying the driving state of a smart driving device according to an embodiment of the present application; the method is described below with reference to FIG. 1A:
  • Step S101 Determine the main body orientation of the smart driving device according to the image to be processed including the smart driving device.
  • The smart driving device includes, for example, smart driving devices with various functions, smart driving devices with various numbers of wheels, robots, aircraft, guide devices for the blind, smart home devices, smart toys, and the like.
  • The image to be processed may be a sequence of consecutive frames. For example, when the smart driving device is a vehicle, the images to be processed may be consecutive frames containing the vehicle that are collected within one second (s) while the vehicle is driving; they may also be non-consecutive frames containing the vehicle.
  • In the following, the smart driving device is a vehicle, which is taken as an example for description.
  • The body orientation of the smart driving device includes: the direction facing the acquisition device of the image to be processed, which can be understood as the image to be processed presenting the head of the vehicle, i.e., the user can see the head of the vehicle in the image to be processed; or the direction facing away from the acquisition device of the image to be processed, which can be understood as the image to be processed presenting the rear of the vehicle, i.e., the user can see the rear of the vehicle in the image to be processed.
  • Step S102 Determine the state of the first driving state indicator light included in the smart driving device according to the image to be processed.
  • In some embodiments, the body orientation of the vehicle is classified. The first driving state indicator light is used to indicate that the smart driving device is in one of the following states: a braking state, a turning state, a reversing state, an abnormal state, or the like.
  • When the first driving status indicator light is located at the front of the vehicle, it may be a turn signal or the like; when the turn signal is on, it is determined that the vehicle is about to turn or is turning.
  • When the first driving status indicator light is located at the rear of the vehicle, it may be a brake light, a reversing light, a turn signal, or the like.
  • The driving status of the vehicle can be determined according to the state of the vehicle lights. For example, when the reversing light is on, the vehicle is reversing; when the brake light is on, the vehicle is braking; and when the headlights or clearance lights are on, the vehicle is in a normal driving state.
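  • As an illustration of this lamp-to-state mapping, the following minimal Python sketch encodes the examples above; the lamp and state names are hypothetical, not taken from the patent:

```python
# Hypothetical mapping from a lamp observed to be on to the driving state it
# implies, following the examples above (reversing light -> reversing, etc.).
LAMP_TO_DRIVING_STATE = {
    "reversing_light": "reversing",
    "brake_light": "braking",
    "headlight": "driving",
    "clearance_light": "driving",
    "turn_signal": "turning",
}

def infer_state_from_lit_lamp(lamp: str) -> str:
    """Return the driving state implied by a single lamp that is lit."""
    return LAMP_TO_DRIVING_STATE.get(lamp, "unknown")
```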
  • Step S103 Determine the driving state of the smart driving device according to the orientation of the main body and the state of the first driving state indicator.
  • In some embodiments, step S103 includes the following two cases. First, in response to the body orientation facing the acquisition device of the image to be processed, the driving state of the smart driving device is determined according to the state of the first driving state indicator light provided at the front of the smart driving device.
  • When the body orientation is the direction facing the acquisition device of the image to be processed, the image to be processed presents the head of the smart driving device. Taking a vehicle as an example, what can be seen in the image to be processed are the lights at the head of the vehicle, such as the turn signals, position lights, or headlights. The driving state of the vehicle is then determined based on the lights at the head of the vehicle; for example, if the vehicle's left turn signal is dark and its right turn signal is bright, the vehicle is about to turn or is turning to the right.
  • Second, in response to the body orientation facing away from the acquisition device of the image to be processed, the driving state of the smart driving device is determined according to the state of the first driving status indicator light provided at the rear of the smart driving device. When the body orientation is the direction facing away from the acquisition device, it can be understood that the image to be processed presents the tail of the smart driving device. Taking a vehicle as an example, what can be seen in the image to be processed are the lights at the rear of the vehicle, such as the turn signals, brake lights, or reversing lights. The driving state of the vehicle is then determined based on the lights at the rear; for example, if the brake light of the vehicle is on, the vehicle is braking, i.e., the brake pedal of the vehicle is depressed.
  • In this way, the two recognition results are combined to determine the driving state of the smart driving device, thereby reducing the difficulty of identifying the driving state of the smart driving device and improving the accuracy of the driving state recognition.
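  • The combination logic can be pictured with the following sketch, which assumes simplified orientation and lamp-state labels; the names and the exact decision order are illustrative, not the patent's:

```python
def determine_driving_state(body_orientation: str, front_lamps: dict, rear_lamps: dict) -> str:
    """Combine the body orientation with the lamp group visible for it.

    body_orientation: "facing_camera" or "facing_away"
    front_lamps / rear_lamps: e.g. {"left_turn": True, "right_turn": False, "brake": False}
    """
    # Only the lamp group on the side presented to the camera is consulted.
    lamps = front_lamps if body_orientation == "facing_camera" else rear_lamps
    if lamps.get("brake"):
        return "braking"
    if lamps.get("left_turn") and lamps.get("right_turn"):
        return "hazard"  # both signals flashing, e.g. an abnormal state
    if lamps.get("left_turn"):
        return "turning_left"
    if lamps.get("right_turn"):
        return "turning_right"
    return "driving_straight"
```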
  • An embodiment of the present application provides a method for identifying the driving state of a smart driving device; the smart driving device is a vehicle, which is taken as an example for description. FIG. 1B is a schematic diagram of another implementation process of the method; the method is described below with reference to FIG. 1B:
  • Step S121 Determine the body orientation of the smart driving device according to the image to be processed containing the smart driving device. To determine the body orientation more quickly and accurately, step S121 may also be implemented through the following steps:
  • the first step is to determine the first image area occupied by the main body of the smart driving device in the image to be processed.
  • In some embodiments, step S121 can be implemented by a neural network: feature extraction is first performed on the image to be processed, a partial feature map containing the body of the smart driving device is then determined, and finally the body orientation of the smart driving device is judged based on the partial feature map.
  • the second step is to determine the subject orientation of the smart driving device according to the image in the first image area.
  • In this way, the body orientation of the smart driving device is determined from a partial feature map, i.e., only the part of the feature map containing the body of the smart driving device is used, which not only reduces the amount of computation but also allows the body orientation to be determined more accurately.
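  • A minimal sketch of this region-restricted classification, assuming the first image area is already available as a bounding box in feature-map coordinates (PyTorch-style, illustrative only):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def classify_orientation(feature_map: torch.Tensor, box: tuple, head: nn.Module) -> torch.Tensor:
    """Classify body orientation using only the part of the feature map inside `box`.

    feature_map: (C, H, W) features of the whole image to be processed
    box: (x1, y1, x2, y2) first image area, in feature-map coordinates
    head: small classifier producing orientation logits, e.g. nn.Linear(C, 2)
    """
    x1, y1, x2, y2 = box
    roi = feature_map[:, y1:y2, x1:x2]            # partial feature map only
    pooled = F.adaptive_avg_pool2d(roi, 1).flatten()
    return head(pooled)                           # logits: forward vs. backward
```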
  • Step S122 Determine the state of the second driving state indicator light according to the image to be processed.
  • the second driving state indicator light is used to indicate whether the smart driving device is in a braking state, such as a high-position brake light of a vehicle.
  • The state of the second driving status indicator light includes at least one of the following: bright, dark, or absent, where absent means that the second driving state indicator light is not detected in the image to be processed. In some embodiments, the dark and absent states of the second driving state indicator light are collectively referred to as dark.
  • In some embodiments, step S122 can be implemented by a neural network: feature extraction is first performed on the image to be processed to obtain a feature map, and the state of the second driving state indicator light is then classified based on the feature map.
  • In some embodiments, step S122 can also be implemented by the following steps: the first step is to determine the third image area occupied by the second driving state indicator light of the smart driving device in the image to be processed.
  • step S122 can be implemented by a neural network.
  • the second step is to determine the state of the second driving state indicator light based on the image in the third image area.
  • In this way, the state of the second driving status indicator light of the smart driving device is determined from a partial feature map, i.e., the judgment is made only on the part of the feature map containing the second driving status indicator light, which not only reduces the amount of calculation but also allows the state of the second driving state indicator light to be determined more accurately.
  • Step S123 In response to the state of the second driving state indicator light being dark, determine the state of the first driving state indicator light included on the smart driving device according to the image to be processed.
  • A dark state of the second driving status indicator covers two situations: the second driving status indicator is not detected, or it is detected but unlit. In either case, the method continues to determine the state of the first driving status indicator light and then determines the driving state of the smart driving device based on that state. For example, if the high-position brake light of the vehicle is not detected, the image to be processed may show the head of the vehicle, or the vehicle may have no high-position brake light; the method therefore continues to check the vehicle's first driving status indicator to determine whether the vehicle is turning, going straight, and so on (see the cascade sketch below).
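  • The cascade just described might look as follows; the two classifier callables and the state strings are assumptions for illustration:

```python
def recognize_driving_state(image, brake_light_state_fn, turn_signal_state_fn) -> str:
    """Check the high-position brake light first; only when it is dark or absent,
    fall back to classifying the first driving status indicator (turn signals)."""
    brake_state = brake_light_state_fn(image)   # "bright", "dark", or "absent"
    if brake_state == "bright":
        return "braking"                        # no further detection needed
    # "dark" and "absent" are treated alike: continue with the turn signals.
    return turn_signal_state_fn(image)          # e.g. "turning_left", "straight", ...
```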
  • In some embodiments, step S123 may also be implemented by the following steps: the first step is to determine the second image area occupied by the first driving state indicator light of the smart driving device in the image to be processed.
  • In some embodiments, step S123 can be implemented by a neural network: feature extraction is first performed on the image to be processed, a partial feature map containing the first driving status indicator light of the smart driving device is then determined, and finally the state of the first driving status indicator light is judged based on the partial feature map.
  • the second step is to determine the state of the first driving state indicator light according to the image in the second image area.
  • In this way, the state of the first driving status indicator light of the smart driving device is determined from a partial feature map, i.e., the judgment is made only on the part of the feature map containing the first driving status indicator light, which not only reduces the amount of calculation but also allows the state of the first driving state indicator light to be judged more accurately.
  • In some embodiments, in response to the body orientation being forward-facing, the image to be processed is input to the first branch of the neural network to obtain the state of the first driving status indicator light; in response to the body orientation being backward-facing, the image to be processed is input to the second branch of the neural network to obtain the state of the first driving status indicator light. For example, if the body faces forward, the two turn signals at the front of the vehicle need to be classified, so the image to be processed containing the front left and right turn signals is input to the first branch of the neural network (for example, a classifier), and that branch classifies the front left and right turn signals; if the body faces backward, the two turn signals at the rear of the vehicle need to be classified, so the image to be processed containing the rear left and right turn signals is input to the second branch of the neural network, and that branch classifies the rear left and right turn signals.
  • the turn signal includes the lights on the left and right sides of the front or rear of the vehicle.
  • In some embodiments, the lights on the left and right sides of the front or rear of the vehicle displayed in the same image to be processed are treated as one group; the state of the first driving state indicator light then includes the following combinations: (left turn signal bright, right turn signal bright), (left turn signal bright, right turn signal dark), (left turn signal dark, right turn signal bright), and (left turn signal dark, right turn signal dark).
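  • These four combinations naturally form a single four-way class label, as in this sketch (class names and order are illustrative):

```python
from enum import Enum

class TurnSignalState(Enum):
    """Joint state of the left/right turn signals in one image as one 4-way class."""
    LEFT_BRIGHT_RIGHT_BRIGHT = 0  # both flashing, e.g. hazard lights
    LEFT_BRIGHT_RIGHT_DARK = 1    # turning left
    LEFT_DARK_RIGHT_BRIGHT = 2    # turning right
    LEFT_DARK_RIGHT_DARK = 3      # no signal
```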
  • Step S124 Determine the driving state of the smart driving device according to the orientation of the main body and the state of the first driving state indicator.
  • Step S125 In response to the state of the second driving state indicator light being on, determine that the smart driving device is in a braking state.
  • For example, if the high-position brake light of the vehicle is on, the vehicle is braking, and there is no need to detect the vehicle's first driving status indicator. In this way, by detecting the second driving state indicator light of the smart driving device, it can be quickly determined whether the smart driving device is braking; if not, the first driving state indicator light of the smart driving device is then detected, so that the driving state of the vehicle is predicted accurately.
  • An embodiment of the present application provides a method for identifying the driving state of a smart driving device; as an example, the smart driving device is a vehicle and the image to be processed is a sequence of consecutive frames. FIG. 1C is a schematic diagram of another implementation process of the method; the method is described below with reference to FIG. 1C:
  • Step S131 Determine the subject orientation of the smart driving device according to each frame of the image to be processed in the continuous multiple frames of image to be processed.
  • step S131 can be implemented by a neural network.
  • In some embodiments, feature extraction is performed on each frame of the consecutive frames to be processed, and then, for each frame, the body orientation in that frame is determined based on its feature map.
  • Step S132 Determine the main body orientation of the smart driving device according to the main body orientation of the smart driving device determined from each frame of image to be processed.
  • For example, suppose the vehicle is making a U-turn: in the first few frames of the images to be processed, the body of the vehicle faces the acquisition device, but in the subsequent frames the U-turn has been completed and the body of the vehicle faces away from the acquisition device. The body orientation of the vehicle is therefore finally determined to be the direction facing away from the acquisition device of the image to be processed, so that misjudgment of the body orientation can be avoided.
  • Step S133 Determine the state of the first driving state indicator light according to each frame of the to-be-processed image in the continuous multiple frames of to-be-processed images.
  • For each frame of the images to be processed, the state of the first driving state indicator light in that frame is determined based on its feature map.
  • Step S134 Determine the state of the first driving state indicator light according to the state of the first driving state indicator light determined by each frame of the image to be processed.
  • For example, suppose the vehicle has broken down and its hazard lights (both turn signals) are flashing: the vehicle's first driving status indicator light may be bright in one frame but dark in the previous frame, so a judgment based on a single frame would be erroneous. By determining the state of the first driving status indicator light from each of the consecutive frames to be processed, such misjudgment can be avoided, and the state of the first driving status indicator light can be determined more accurately (one possible aggregation is sketched below).
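  • One way to realize this multi-frame judgment is to aggregate the per-frame predictions over the window; the voting and any-bright rules below are illustrative choices, not specified by the patent:

```python
from collections import Counter

def aggregate_orientation(per_frame_orientations: list) -> str:
    """Majority vote over the body orientations predicted for each frame."""
    return Counter(per_frame_orientations).most_common(1)[0][0]

def aggregate_lamp_state(per_frame_states: list) -> str:
    """A lamp that is bright in any frame of the window is treated as active,
    so a blinking turn signal caught in its dark phase is not misjudged."""
    return "bright" if "bright" in per_frame_states else "dark"
```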
  • Step S135 Determine the driving state of the smart driving device according to the orientation of the main body and the state of the first driving state indicator.
  • In this way, the body orientation of the smart driving device and the state of the first driving state indicator light are each determined over multiple frames, and the driving state of the smart driving device is then predicted on this basis, which avoids misjudging the body orientation or the state of the first driving state indicator light and improves the accuracy of predicting the driving state of the smart driving device.
  • An embodiment of the present application provides a method for identifying the driving state of a smart driving device, implemented by a neural network. FIG. 2A is a schematic diagram of another implementation process of the method; the method is described below with reference to FIG. 2A:
  • Step S201 Extract a feature map from the image to be processed by using the neural network.
  • the image to be processed is input into a residual network (ResNet network), and feature extraction is performed on the image to be processed to obtain a feature map of the image to be processed.
  • Step S202 The neural network determines the subject orientation of the smart driving device according to the extracted feature map.
  • In some embodiments, the feature maps of multiple images to be processed are input into the neural network to obtain the confidence of each candidate body orientation, and the orientation whose confidence is greater than a preset confidence threshold is taken as the body orientation of the smart driving device (see the sketch below).
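  • The confidence selection might be realized as below; the threshold value and label names are assumptions (the patent only speaks of a preset threshold):

```python
import torch
import torch.nn.functional as F

ORIENTATIONS = ["facing_camera", "facing_away"]
CONF_THRESHOLD = 0.5  # assumed value; the patent only requires a preset threshold

def pick_orientation(logits: torch.Tensor):
    """Softmax the network output and keep the orientation whose confidence
    exceeds the threshold; return None when no class is confident enough."""
    conf = F.softmax(logits, dim=-1)
    best = int(conf.argmax())
    return ORIENTATIONS[best] if conf[best] > CONF_THRESHOLD else None
```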
  • Step S203 In response to the body orientation facing the acquisition device of the image to be processed, use the first branch of the neural network to determine, according to the feature map, the state of the first driving state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving state indicator light provided at the front of the smart driving device.
  • the first branch of the neural network is used to classify the state of the first driving status indicator light on the front of the intelligent driving device.
  • In some embodiments, the feature maps of consecutive frames of the image to be processed are input into the first branch of the neural network to obtain the confidence of each possible state of the first driving state indicator light, for example, the confidence that the state of the first driving state indicator light is (left dark, right dark), (left bright, right dark), or (left dark, right bright).
  • the state of the first driving state indicator light whose confidence is greater than the preset confidence threshold is used as the state of the first driving state indicator light of the smart driving device.
  • A higher confidence indicates that the corresponding state is more likely to be the actual state of the first driving state indicator light; therefore, the state whose confidence is greater than the preset confidence threshold is selected as the target lamp state, which ensures the accuracy of the classification result obtained by the first branch.
  • Step S204 In response to the body orientation facing away from the acquisition device of the image to be processed, use the second branch of the neural network to determine, according to the feature map, the state of the first driving state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving state indicator light provided at the rear of the smart driving device.
  • the second branch of the neural network is used to classify the state of the first driving status indicator light at the rear of the intelligent driving device.
  • When the body orientation is the direction facing away from the acquisition device of the image to be processed, the rear of the smart driving device, for example the rear of the vehicle, is presented in the image to be processed; the first driving status indicator lights that can be acquired from the image are then the turn signals on the left and right sides of the rear of the vehicle.
  • the state of the first driving state indicator light whose confidence is greater than the preset confidence threshold is used as the state of the first driving state indicator light of the smart driving device.
  • In this way, the neural network is used to first perform feature extraction on the image to be processed; the neural network then determines, based on the feature map, the confidence of each candidate body orientation and of each possible state of the first driving state indicator light, takes the candidates with the greatest confidence as the body orientation of the smart driving device and the state of the first driving state indicator light, and finally determines the driving state of the smart driving device from these two results. By subdividing the task of identifying the driving state of the smart driving device into multiple subtasks, first identifying the body orientation and the state of the first driving status indicator light separately and then combining the two recognition results, the difficulty of the recognition task is reduced and the accuracy of the driving state recognition is improved.
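  • A sketch of such a shared-backbone, multi-branch network is given below; the ResNet variant, layer sizes, and the extra brake-light head (used in the FIG. 2B variant) are assumptions, since the patent only names a residual network and the branches:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class LightStateNet(nn.Module):
    """Shared ResNet backbone with an orientation head, a brake-light head,
    and two lamp-state branches selected by the predicted orientation."""
    def __init__(self, n_lamp_states: int = 4):
        super().__init__()
        backbone = models.resnet18(weights=None)          # residual feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        dim = backbone.fc.in_features                     # 512 for resnet18
        self.orientation_head = nn.Linear(dim, 2)         # forward / backward
        self.brake_head = nn.Linear(dim, 2)               # second indicator: bright / dark
        self.front_branch = nn.Linear(dim, n_lamp_states) # first branch: front lamps
        self.rear_branch = nn.Linear(dim, n_lamp_states)  # second branch: rear lamps

    def forward(self, x: torch.Tensor):
        f = self.features(x).flatten(1)                   # (N, 512) pooled features
        orient_logits = self.orientation_head(f)
        facing_camera = orient_logits.argmax(dim=1) == 0
        lamp_logits = torch.where(                        # route to the matching branch
            facing_camera.unsqueeze(1),
            self.front_branch(f),
            self.rear_branch(f),
        )
        return orient_logits, self.brake_head(f), lamp_logits
```

  • Routing through torch.where keeps the forward pass batchable; a system following the patent more closely could instead crop per-lamp regions using the key-point scheme described later.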
  • An embodiment of the present application provides a method for identifying the driving state of a smart driving device, implemented by a neural network. FIG. 2B is a schematic diagram of another implementation process of the method; the method is described below with reference to FIG. 2B:
  • Step S221 Extract a feature map from the image to be processed by using the neural network.
  • the image to be processed is input into a residual network (ResNet network), and feature extraction is performed on the image to be processed to obtain a feature map of the image to be processed.
  • Step S222 The neural network determines the subject orientation of the smart driving device according to the extracted feature map.
  • In some embodiments, the feature maps of multiple images to be processed are input into the neural network to obtain the confidence of each candidate body orientation, and the orientation whose confidence is greater than a preset confidence threshold is taken as the body orientation of the smart driving device.
  • For example, as shown in FIG. 2C, the image 21 presents the rear of the vehicle 22, so the body orientation of the vehicle 22 in the image 21 is determined to be backward, that is, facing away from the acquisition device of the image to be processed.
  • Step S223 The neural network determines the state of the second driving state indicator light according to the extracted feature map.
  • In some embodiments, the second driving state indicator light may be the high-position brake light of the smart driving device. The feature maps of consecutive frames of images to be processed are input into the neural network to obtain the confidence of each possible state of the second driving state indicator light, for example, the confidence that the state is bright or dark. The state whose confidence is greater than the preset confidence threshold is then taken as the state of the second driving state indicator light of the smart driving device, which ensures the accuracy of recognizing that state.
  • Step S224 In response to the body orientation facing the acquisition device of the image to be processed and the state of the second driving status indicator being dark, use the first branch of the neural network to determine, according to the feature map, the state of the first driving status indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving status indicator light provided at the front of the smart driving device.
  • That is, when the body faces the acquisition device of the image to be processed (the body faces forward) and the state of the second driving status indicator is dark, the feature map is input to the first branch of the neural network to obtain the confidence of each possible state of the first driving state indicator light at the head of the vehicle, and the state with the greatest confidence is taken as the state of the first driving state indicator light.
  • Step S225 In response to the body orientation being the direction facing away from the acquisition device of the image to be processed and the state of the second driving status indicator being dark, use the second branch of the neural network to determine, according to the feature map, the state of the first driving state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving state indicator light provided at the rear of the smart driving device.
  • That is, when the body faces away from the acquisition device of the image to be processed (the body faces backward) and the state of the second driving state indicator is dark, the feature map is input to the second branch of the neural network to obtain the confidence of each possible state of the first driving state indicator light at the rear of the vehicle, and the state with the greatest confidence is taken as the state of the first driving state indicator light.
  • Step S226 In response to the state of the second driving state indicator being on, it is determined that the smart driving device is in a braking state.
  • In this way, a neural network is used to classify the body orientation of the smart driving device and the states of multiple indicator lights in a fine-grained manner, which ensures the accuracy of recognizing the body orientation and the indicator-light states, and thereby ensures the accuracy of the driving state recognized on that basis.
  • In some embodiments, the neural network is obtained by training using the following steps, as shown in FIG. 2D; the description is given below with reference to FIG. 2D:
  • Step S231 Obtain a sample image containing the smart driving device.
  • Here, the smart driving device is again a vehicle. Multiple sample images containing vehicles are acquired, for example, sample images in which a vehicle appears.
  • Step S232 Determine the main body orientation of the smart driving device according to the sample image containing the smart driving device.
  • After the body orientation of the smart driving device is determined, the feature map is input into the branch of the neural network corresponding to that body orientation to obtain the state of the first driving state indicator light of the smart driving device. For example, when the body faces the acquisition device of the sample image, the feature map is input into the first branch to obtain the state of the first driving state indicator lights at the front of the smart driving device, for example, the state of the turn signals on the left and right sides of the front of the vehicle; when the body faces away from the acquisition device of the sample image, the feature map is input into the second branch to obtain the state of the first driving status indicator lights at the rear of the smart driving device, for example, the state of the turn signals on the left and right sides of the rear of the vehicle. In this way, the classification task is refined by training a different branch for each body orientation, which ensures the accuracy of classifying the state of the first driving state indicator light.
  • Step S233 In response to the body orientation facing the acquisition device of the sample image, use the first branch of the neural network to determine the state of the first driving status indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving state indicator light provided at the front of the smart driving device.
  • Step S234 In response to the body orientation facing away from the acquisition device of the sample image, use the second branch of the neural network to determine the state of the first driving status indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving state indicator light provided at the rear of the smart driving device.
  • Step S235 Adjust network parameter values of the neural network according to the determined main body orientation, the marked main body orientation, the determined state of the first driving state indicator, and the marked state of the first driving state indicator.
  • In some embodiments, a preset loss is determined from the predicted state of the first driving status indicator light at the front of the smart driving device and the labeled state of that indicator light, and this loss is used to adjust the network parameter values of the first branch of the neural network, so that the adjusted first branch can accurately predict the state of the first driving status indicator light at the front of the smart driving device. Similarly, a preset loss is determined from the predicted state of the first driving status indicator light at the rear of the smart driving device and the labeled state of that indicator light, and this loss is used to adjust the network parameter values of the second branch of the neural network, so that the adjusted second branch can accurately predict the state of the first driving status indicator light at the rear of the smart driving device.
  • An embodiment of the present application provides a method for recognizing the driving state of a smart driving device, taking a vehicle as an example of the smart driving device. First, a deep learning framework is used to recognize the attributes of the vehicle lights; then, a large amount of training data is used so that the trained neural network is more robust and performs well in a variety of application scenarios.
  • In the related art, vehicle-lamp attribute recognition generally classifies whole images over all categories, divided into brake-light recognition and turn-signal recognition. The embodiment of this application instead subdivides the task into smaller tasks: the attributes of the smart driving device are identified first, and different branches are trained so that fine-grained classification and recognition of the lamp attributes can be realized. In addition, the positions of the lamps are judged using key points, and the visibility information of the key points is used to locate the lamps more accurately, so that the accuracy of the lamp-attribute judgment is higher.
  • FIG. 3 is a schematic diagram of another implementation process of the neural network training method according to the embodiment of the application. As shown in FIG. 3, the following description will be made in conjunction with FIG. 3:
  • Step S301 Input a sample image containing the intelligent driving device into the neural network to obtain a feature map of the sample image.
  • Step S302 Input the feature map into the neural network to obtain, respectively, the body orientation and the state of the second driving state indicator light.
  • In some embodiments, the key-point information of the vehicle body is used to obtain the position of the vehicle body in the feature map (corresponding to the first image area occupied by the vehicle body in the sample image), and this part of the feature map is input into the neural network to obtain the body orientation; the key-point information of the vehicle's second driving state indicator light is used to obtain the position of the second driving state indicator light in the feature map (the third image area occupied by the second driving state indicator light in the sample image), and this part of the feature map is input into the neural network to obtain the state of the second driving state indicator light.
  • Step S303 Determine the loss corresponding to the subject orientation output by the neural network and the loss corresponding to the state of the second driving state indicator according to the marked subject orientation and the marked state of the second driving state indicator light.
  • the loss corresponding to the subject orientation is a two-class cross-entropy loss.
  • The second driving state indicator light has two states, for example, bright and dark (where dark includes both the case where the second driving state indicator light is off and the case where there is no second driving state indicator light), so the loss corresponding to the state of the second driving state indicator light is also a two-class cross-entropy loss.
  • Step S304 Use the loss corresponding to the body orientation and the loss corresponding to the state of the second driving state indicator light to adjust the network parameters of the neural network.
  • Step S305 In response to the body orientation facing the acquisition device of the sample image and the second driving status indicator being dark, the feature map is input to the first branch of the neural network to obtain the state of the first driving status indicator light at the front of the vehicle.
  • In some embodiments, the key-point information of the first driving state indicator light at the front of the vehicle is used to obtain the location of that indicator light in the feature map (that is, the second image area occupied by the first driving state indicator light at the front of the vehicle in the sample image), and this part of the feature map is input into the neural network to obtain the state of the first driving state indicator light at the front of the vehicle.
  • Step S306 Adjust the parameters of the first branch network based on the loss corresponding to the state of the first driving state indicator light at the front.
  • Step S307 In response to the body orientation facing away from the acquisition device of the sample image and the second driving status indicator being dark, the feature map is input to the second branch of the neural network to obtain the state of the first driving status indicator light at the rear of the vehicle.
  • In some embodiments, the key-point information of the first driving status indicator light at the rear of the vehicle is used to obtain the possible location of that indicator light in the feature map (that is, the second image area occupied by the first driving state indicator light at the rear of the vehicle in the sample image), and this part of the feature map is input into the neural network to obtain the state of the first driving state indicator light at the rear of the vehicle.
  • Step S308 Adjust the network parameters of the second branch based on the loss corresponding to the state of the first driving status indicator light at the rear.
  • The loss corresponding to the state of the first driving status indicator light is a multi-class cross-entropy loss. Based on this loss, the network parameters (such as the weights) of the first branch and the second branch of the neural network are adjusted respectively, so that the adjusted first and second branches classify the vehicle indicator lights with higher accuracy.
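  • A sketch of one training step with these losses (two-class cross-entropy for orientation and the high-position brake light, multi-class cross-entropy for the lamp branch chosen by the labeled orientation), reusing the hypothetical LightStateNet above; the optimizer and routing details are assumptions:

```python
import torch.nn.functional as F

def training_step(model, optimizer, images, orient_labels, brake_labels, lamp_labels):
    """One update combining the per-head cross-entropy losses described above."""
    f = model.features(images).flatten(1)
    loss = F.cross_entropy(model.orientation_head(f), orient_labels)  # 2-class CE
    loss = loss + F.cross_entropy(model.brake_head(f), brake_labels)  # 2-class CE
    front = orient_labels == 0            # body facing camera -> front-lamp branch
    if front.any():                       # multi-class CE, first branch
        loss = loss + F.cross_entropy(model.front_branch(f[front]), lamp_labels[front])
    if (~front).any():                    # multi-class CE, second branch
        loss = loss + F.cross_entropy(model.rear_branch(f[~front]), lamp_labels[~front])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```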
  • the vehicle direction classifier is combined with the lamp attribute classifier to further subdivide the attributes of the vehicle itself to assist in the identification of the lamp attributes.
  • The attribute recognition of the tail lights and turn signals is divided into single-frame lamp recognition and multi-frame joint attribute discrimination; by improving the recognition accuracy on a single frame, the overall process of vehicle attribute recognition is simplified. Key points and their visibility information are incorporated as auxiliary judgments to locate the positions of the vehicle lights more accurately, thereby making the classification more accurate.
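  • Keypoint-plus-visibility localization might look like the following sketch; the (x, y, visibility) keypoint format and the threshold are assumptions:

```python
import numpy as np

def lamp_roi_from_keypoints(keypoints: np.ndarray, vis_threshold: float = 0.5):
    """Locate a lamp from its predicted key points, ignoring invisible ones.

    keypoints: (K, 3) array of (x, y, visibility) for one lamp.
    Returns an (x1, y1, x2, y2) box around the visible key points, or None
    when every key point is below the visibility threshold (lamp occluded).
    """
    visible = keypoints[keypoints[:, 2] > vis_threshold]
    if len(visible) == 0:
        return None
    x1, y1 = visible[:, :2].min(axis=0)
    x2, y2 = visible[:, :2].max(axis=0)
    return int(x1), int(y1), int(x2) + 1, int(y2) + 1
```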
  • FIG. 4 is a schematic diagram of the composition structure of the apparatus for identifying the driving state of a smart driving device according to an embodiment of the application. As shown in FIG. 4, the apparatus 400 includes: a first determining module 401, configured to determine the body orientation of the smart driving device according to the image to be processed; a second determining module 402, configured to determine, according to the image to be processed, the state of the first driving state indicator light included on the smart driving device; and a third determining module 403, configured to determine the driving state of the smart driving device according to the body orientation and the state of the first driving state indicator light.
  • In some embodiments, the third determining module 403 includes: a first determining sub-module, configured to determine the driving state of the smart driving device according to the state of the first driving state indicator light provided at the front of the smart driving device, in response to the body orientation facing the acquisition device of the image to be processed; and a second determining sub-module, configured to determine the driving state of the smart driving device according to the state of the first driving state indicator light provided at the rear of the smart driving device, in response to the body orientation facing away from the acquisition device of the image to be processed.
  • In some embodiments, the smart driving device further includes a second driving state indicator light, which is used to indicate whether the smart driving device is in a braking state. The apparatus further includes: a fourth determining module, configured to determine the state of the second driving state indicator light according to the image to be processed before the state of the first driving state indicator light included on the smart driving device is determined according to the image to be processed. The second determining module 402 includes: a third determining sub-module, configured to determine, in response to the state of the second driving state indicator light being dark, the state of the first driving state indicator light included on the smart driving device according to the image to be processed. The apparatus further includes: a fifth determining module, configured to determine, after the state of the second driving state indicator light is determined according to the image to be processed, that the smart driving device is in a braking state in response to the state of the second driving state indicator light being on.
  • In some embodiments, the image to be processed is a sequence of consecutive frames. The first determining module 401 includes: a fourth determining sub-module, configured to determine the body orientation of the smart driving device according to each frame of the consecutive frames of images to be processed; and a fifth determining sub-module, configured to determine the body orientation of the smart driving device according to the body orientation determined from each frame of the images to be processed. The second determining module 402 includes: a sixth determining sub-module, configured to determine the state of the first driving status indicator light according to each frame of the consecutive frames of images to be processed; and a seventh determining sub-module, configured to determine the state of the first driving state indicator light based on the state of the first driving state indicator light determined from each frame of the images to be processed.
  • In some embodiments, the first determining module 401 includes: an eighth determining sub-module, configured to determine the first image area occupied by the body of the smart driving device in the image to be processed; and a ninth determining sub-module, configured to determine the body orientation of the smart driving device according to the image in the first image area.
  • the second determining module 402 includes: a tenth determining sub-module configured to determine the second image area occupied by the first driving status indicator light of the smart driving device in the image to be processed;
  • the eleventh determining sub-module is configured to determine the state of the first driving state indicator light according to the image in the second image area.
  • the fourth determining module includes: a twelfth determining sub-module configured to determine the third image area occupied by the second driving status indicator light of the smart driving device in the image to be processed;
  • the thirteenth determining sub-module is configured to determine the state of the second driving state indicator light according to the image in the third image area.
  • In some embodiments, the method for recognizing the driving state of the smart driving device is implemented by a neural network. The first determining module includes: a first extraction sub-module, configured to use the neural network to extract a feature map from the image to be processed; and a fourteenth determining sub-module, configured to use the neural network to determine the body orientation of the smart driving device according to the extracted feature map. The third determining module 403 includes: a fifteenth determining sub-module, configured to, in response to the body orientation facing the acquisition device of the image to be processed, use the first branch of the neural network to determine, according to the feature map, the state of the first driving state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state; and a sixteenth determining sub-module, configured to, in response to the body orientation facing away from the acquisition device of the image to be processed, use the second branch of the neural network to determine, according to the feature map, the state of the first driving state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state.
  • In some embodiments, the fourth determining module includes: a seventeenth determining sub-module, configured to use the neural network to determine the state of the second driving status indicator light according to the extracted feature map; and an eighteenth determining sub-module, configured to determine that the smart driving device is in a braking state in response to the state of the second driving status indicator light being on. The fifteenth determining sub-module includes: a first determining unit, configured to, in response to the body orientation facing the acquisition device of the image to be processed and the state of the second driving status indicator being dark, use the first branch of the neural network to determine the state of the first driving state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state. The sixteenth determining sub-module includes: a second determining unit, configured to, in response to the body orientation facing away from the acquisition device of the image to be processed and the state of the second driving status indicator being dark, use the second branch of the neural network to determine the state of the first driving state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state.
  • the device further includes a training module configured to train the neural network
  • In some embodiments, the training module includes: a nineteenth determining sub-module, configured to determine the body orientation of the smart driving device according to the sample image containing the smart driving device; a twentieth determining sub-module, configured to, in response to the body orientation facing the acquisition device of the sample image, use the first branch of the neural network to determine the state of the first driving state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state; a twenty-first determining sub-module, configured to, in response to the body orientation facing away from the acquisition device of the sample image, use the second branch of the neural network to determine the state of the first driving state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state; and a first adjustment sub-module, configured to adjust the network parameter values of the neural network according to the determined body orientation, the labeled body orientation, the determined state of the first driving state indicator light, and the labeled state of the first driving state indicator light.
• the computer software product is stored in a storage medium and includes several instructions for enabling a device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the various embodiments of the present application.
• the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
• an embodiment of the present application further provides a computer storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.
• an embodiment of the present application further provides a computer device, which includes a memory and a processor; the memory stores computer-executable instructions, and when the processor runs the computer-executable instructions on the memory, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.
  • an embodiment of the present application provides a computer device.
• FIG. 5 is a schematic diagram of the composition structure of the computer device in an embodiment of the application. As shown in FIG. 5, the hardware entity of the computer device 500 includes: a processor 501, a communication interface 502, and a memory 503, where the processor 501 generally controls the overall operation of the computer device 500.
  • the communication interface 502 can enable the computer device to communicate with other terminals or servers via a network.
• the memory 503 is configured to store instructions and applications executable by the processor 501, and can also cache data to be processed or already processed by the processor 501 and by each module in the computer device 500 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by a flash memory (FLASH) or a random access memory (RAM).
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
• the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
• the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, or indirect coupling or communication connection between devices or units, and may be electrical, mechanical, or take other forms.
• the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • the functional units in the embodiments of the present application can all be integrated into one processing unit, or each unit can be individually used as a unit, or two or more units can be integrated into one unit;
• the integrated unit can be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
• the foregoing program can be stored in a computer-readable storage medium.
• when executed, the program performs the steps of the foregoing method embodiments; and the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
• if the above-mentioned integrated unit of the present application is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
• the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the various embodiments of the present application.
• the aforementioned storage media include various media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical disks.

Abstract

A method, apparatus, and device for identifying the driving state of a smart driving device. The method includes: determining the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device (S101); determining, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device (S102); and determining the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light (S103).

Description

Method and Apparatus for Identifying the Driving State of a Smart Driving Device, and Device
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese Patent Application No. 201910702893.7, filed on July 31, 2019, the entire contents of which are incorporated herein by reference in their entirety.
Technical Field
The embodiments of the present application relate to the technical field of autonomous driving, and relate to, but are not limited to, a method and apparatus for identifying the driving state of a smart driving device, and a device.
Background
Vehicle-lamp state recognition is one component of autonomous driving. By recognizing the states of vehicle lamps, the possible states of surrounding smart driving devices, such as turning left or right or braking, can be inferred, which plays an auxiliary role in autonomous-driving decision-making.
Summary
In view of this, the embodiments of the present application provide a method and apparatus for identifying the driving state of a smart driving device, and a device.
The technical solutions of the embodiments of the present application are implemented as follows:
An embodiment of the present application provides a method for identifying the driving state of a smart driving device. The method includes: determining the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device; determining, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device; and determining the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light.
An embodiment of the present application provides an apparatus for identifying the driving state of a smart driving device. The apparatus includes: a first determining module, configured to determine the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device; a second determining module, configured to determine, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device; and a third determining module, configured to determine the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light.
An embodiment of the present application provides a computer storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented. An embodiment of the present application provides a computer device, which includes a memory and a processor; the memory stores computer-executable instructions, and when the processor runs the computer-executable instructions on the memory, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.
An embodiment of the present application provides a computer program product, which includes computer-executable instructions; when the computer-executable instructions are executed, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.
In the embodiments of the present application, the task of identifying the driving state of a smart driving device is subdivided into multiple subtasks: the subject orientation of the smart driving device and the state of the first driving-state indicator light on the smart driving device are first recognized, and the two recognition results are then combined to determine the driving state of the smart driving device, thereby reducing the difficulty of the identification task and improving the accuracy of driving-state recognition.
Brief Description of the Drawings
FIG. 1A is a schematic flowchart of an implementation of a method for identifying the driving state of a smart driving device according to an embodiment of the present application;
FIG. 1B is a schematic flowchart of another implementation of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
FIG. 1C is a schematic flowchart of yet another implementation of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
FIG. 2A is a schematic flowchart of yet another implementation of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
FIG. 2B is a schematic flowchart of a further implementation of the method for identifying the driving state of a smart driving device according to an embodiment of the present application;
FIG. 2C is a scene diagram of a smart driving device according to an embodiment of the present application;
FIG. 2D is a schematic flowchart of another implementation of a neural network training method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of yet another implementation of a neural network training method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the composition structure of an apparatus for identifying the driving state of a smart driving device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the composition structure of a computer device according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the invention are described in further detail below with reference to the accompanying drawings of the embodiments. The following embodiments are intended to illustrate the present application, not to limit its scope.
This embodiment provides a method for identifying the driving state of a smart driving device, applied to a computer device. The computer device may be a smart driving device or a device other than a smart driving device. The functions implemented by the method can be realized by a processor in the computer device calling program code, and the program code can be stored in a computer storage medium; the computer device thus includes at least a processor and a storage medium.
FIG. 1A is a schematic flowchart of an implementation of the method for identifying the driving state of a smart driving device according to an embodiment of the present application. The method is described below with reference to FIG. 1A:
Step S101: determine the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device. In some possible implementations, the smart driving device includes smart driving devices with various functions and various numbers of wheels, robots, aircraft, blind-guiding devices, smart home devices, smart toys, and the like. The image to be processed may be multiple consecutive frames; for example, if the smart driving device is a vehicle, the image to be processed may be consecutive frames containing the vehicle acquired within one second (s) of the vehicle's travel, or non-consecutive frames containing the vehicle. In the embodiments of the present application, the smart driving device being a vehicle is taken as an example. The subject orientation of the smart driving device includes: the direction facing the acquisition device of the image to be processed, which can be understood as the image presenting the head of the vehicle, i.e., the user can see the vehicle's front in the image; or the direction facing away from the acquisition device of the image to be processed, which can be understood as the image presenting the tail of the vehicle, i.e., the user can see the vehicle's rear in the image.
Step S102: determine, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device, and classify the subject orientation of the vehicle. The first driving-state indicator light is used to indicate that the smart driving device is in one of the following states: a braking state, a turning state, a reversing state, or an abnormal state. In a specific example, when the first driving-state indicator light is located at the front of the vehicle, it may be a turn signal; when the turn signal is on, it is determined that the vehicle is about to turn or is in the process of turning. When the first driving-state indicator light is located at the rear of the vehicle, it may be a brake light, a reversing light, or a turn signal, and the driving state can be determined from which lamp is on: for example, if the reversing light is on, the vehicle is reversing; if the brake light is on, the vehicle is braking; and if the headlights or clearance lamps are on, the vehicle is in a driving state.
Step S103: determine the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light. In some possible implementations, step S103 covers the following two cases. First, in response to the subject orientation being the direction facing the acquisition device of the image to be processed, the driving state of the smart driving device is determined according to the state of the first driving-state indicator light provided at the front of the smart driving device. In a specific example, the subject orientation facing the acquisition device means the image presents the head of the smart driving device; taking a vehicle as an example, the lamps visible in the image are those at the front of the vehicle, such as turn signals, clearance lamps, or headlights. The driving state is determined from the front lamps; for example, if the left turn signal is dark and the right one is lit, the vehicle is about to turn, or is turning, right. Second, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, the driving state of the smart driving device is determined according to the state of the first driving-state indicator light provided at the rear of the smart driving device. In a specific example, the subject orientation facing away from the acquisition device means the image presents the tail of the smart driving device; taking a vehicle as an example, the lamps visible in the image are those at the rear, such as turn signals, brake lights, or reversing lights. The driving state is determined from the rear lamps; for example, if the brake light is on, the vehicle is braking, i.e., the brake pedal is depressed.
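As an editorial illustration only, the decision logic of this step can be written as a small rule table. Everything in the sketch below (the state names, the LampStates structure, the function name) is an assumption introduced for clarity, not part of the claimed method:

```python
# Minimal sketch of the orientation + lamp-state decision logic of step S103.
# All names (FRONT, REAR, LampStates, infer_driving_state) are illustrative
# assumptions, not identifiers from the patent.
from dataclasses import dataclass

FRONT = "front"   # subject faces the camera: image shows the vehicle head
REAR = "rear"     # subject faces away: image shows the vehicle tail

@dataclass
class LampStates:
    left_turn_on: bool
    right_turn_on: bool
    brake_on: bool = False      # only observable when orientation == REAR
    reversing_on: bool = False  # only observable when orientation == REAR

def infer_driving_state(orientation: str, lamps: LampStates) -> str:
    if orientation == REAR:
        if lamps.brake_on:
            return "braking"
        if lamps.reversing_on:
            return "reversing"
    if lamps.left_turn_on and lamps.right_turn_on:
        return "hazard"          # both signals lit, e.g. a fault
    if lamps.left_turn_on:
        return "turning_left"
    if lamps.right_turn_on:
        return "turning_right"
    return "driving_straight"

# Example from the text: left signal dark, right signal lit, head visible.
print(infer_driving_state(FRONT, LampStates(False, True)))  # turning_right
```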
In the embodiments of the present application, the task of identifying the driving state of a smart driving device is subdivided into multiple subtasks: the subject orientation of the smart driving device and the state of the first driving-state indicator light on it are first recognized, and the two recognition results are then combined to determine the driving state. This reduces the difficulty of the identification task and improves the accuracy of driving-state recognition.
An embodiment of the present application provides a method for identifying the driving state of a smart driving device; in this embodiment, the smart driving device being a vehicle is taken as an example. FIG. 1B is a schematic flowchart of another implementation of the method according to an embodiment of the present application, described below with reference to FIG. 1B:
Step S121: determine the subject orientation of the smart driving device according to an image to be processed that contains the smart driving device. To determine the subject orientation more quickly and accurately, step S121 may also be implemented through the following steps:
First, determine a first image region occupied by the subject of the smart driving device in the image to be processed. In some possible implementations, step S121 may be implemented by a neural network: feature extraction is first performed on the image to be processed, a partial feature map containing the subject of the smart driving device is then determined, and the subject orientation of the smart driving device is finally judged based on the partial feature map.
Second, determine the subject orientation of the smart driving device according to the image in the first image region. In some possible implementations, the subject orientation is determined within the partial feature map; judging the orientation only from the partial feature map containing the subject both reduces the amount of computation and makes the judgment of the subject orientation more accurate.
Step S122: determine the state of the second driving-state indicator light according to the image to be processed. In some possible implementations, the second driving-state indicator light is used to indicate whether the smart driving device is braking, for example a vehicle's high-mounted brake light. The state of the second driving-state indicator light includes at least one of: on, dark, or none, where "none" means that no second driving-state indicator light is detected in the image to be processed. In the embodiments of the present application, "dark" and "none" are collectively referred to as dark. In some possible implementations, step S122 may be implemented by a neural network: feature extraction is first performed on the image to be processed to obtain the feature map, and the state of the second driving-state indicator light is then classified based on the feature map. There is no required order between steps S121 and S122. After step S122, if the state of the second driving-state indicator light is dark, the method proceeds to step S123; if it is on, the method proceeds to step S125. To determine the state of the second driving-state indicator light more quickly and accurately, step S122 may also be implemented as follows. First, determine a third image region occupied by the second driving-state indicator light of the smart driving device in the image to be processed; in some possible implementations, a neural network first extracts features from the image and then determines a partial feature map containing the second driving-state indicator light. Second, determine the state of the second driving-state indicator light according to the image in the third image region; judging the state only from the partial feature map containing the second driving-state indicator light both reduces the amount of computation and makes the judgment more accurate.
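Purely as an illustration of the label convention just described ("none" merged into "dark"), a hypothetical one-line helper:

```python
# Illustrative label collapse for the second driving-state indicator light
# (e.g. a high-mounted brake light): "none" means no such lamp was detected
# in the image and is merged with "dark", as described above.
def collapse_second_light_label(raw: str) -> str:
    return "on" if raw == "on" else "dark"   # raw is one of "on", "dark", "none"
```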
Step S123: in response to the state of the second driving-state indicator light being dark, determine the state of the first driving-state indicator light included in the smart driving device according to the image to be processed. In some possible implementations, the second driving-state indicator light being dark covers two cases: no second driving-state indicator light is detected, or the light is dark. In either case, the state of the first driving-state indicator light is then determined, and the driving state of the smart driving device is determined based on it. For example, if the vehicle's high-mounted brake light is not detected, the image presents the head of the vehicle or the vehicle has no high-mounted brake light, so the vehicle's first driving-state indicator light is detected next to determine whether the vehicle is turning, going straight, and so on. To determine the state of the first driving-state indicator light more quickly and accurately, step S123 may also be implemented as follows. First, determine a second image region occupied by the first driving-state indicator light of the smart driving device in the image to be processed; in some possible implementations, this is done by a neural network, which first extracts features from the image and then determines a partial feature map containing the first driving-state indicator light. Second, determine the state of the first driving-state indicator light according to the image in the second image region; judging the state only from the partial feature map containing the first driving-state indicator light both reduces the amount of computation and makes the judgment more accurate. In the case where the state of the second driving-state indicator light is dark, in a specific example: in response to the subject orientation being forward, the image to be processed is input into the first branch of the neural network to obtain the state of the first driving-state indicator light; in response to the subject orientation being backward, the image to be processed is input into the second branch of the neural network to obtain the state of the first driving-state indicator light. For example, if the subject orientation is forward, the left and right turn signals at the front of the vehicle need to be classified, and the image containing them is input into the first branch of the neural network (for example, a classifier), i.e., the first branch classifies the two front turn signals; if the subject orientation is backward, the left and right turn signals at the rear of the vehicle need to be classified, and the image containing them is input into the second branch, i.e., the second branch classifies the two rear turn signals. The turn signals include the lamps on the left and right sides of the vehicle's head or tail. In the embodiments of the present application, the lamps on the left and right sides of the head or tail shown in the same image to be processed are taken as one group, so the state of the first driving-state indicator light includes the following combinations: (left turn signal on, right turn signal on), (left turn signal on, right turn signal dark), (left turn signal dark, right turn signal on), and (left turn signal dark, right turn signal dark).
Step S124: determine the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light.
Step S125: in response to the state of the second driving-state indicator light being on, determine that the smart driving device is in a braking state. In a specific example, if the vehicle's high-mounted brake light is on, the vehicle is braking, and there is no need to further detect the first driving-state indicator light.
In the embodiments of the present application, by detecting the second driving-state indicator light of the smart driving device, it can quickly be determined whether the device is braking; if not, the first driving-state indicator light is then detected, so that the driving state of the vehicle can be predicted accurately.
An embodiment of the present application provides a method for identifying the driving state of a smart driving device; in this embodiment, the smart driving device being a vehicle is taken as an example, and the image to be processed consists of multiple consecutive frames. FIG. 1C is a schematic flowchart of another implementation of the method according to an embodiment of the present application, described below with reference to FIG. 1C:
Step S131: determine the subject orientation of the smart driving device according to each frame of the multiple consecutive frames to be processed. In some possible implementations, step S131 may be implemented by a neural network: feature extraction is performed on each frame, and the subject orientation in that frame is determined based on its feature map.
Step S132: determine the subject orientation of the smart driving device according to the subject orientations determined from the individual frames. In a specific example, suppose the vehicle is making a U-turn: in an earlier frame the subject orientation is the direction facing the acquisition device of the image to be processed, but once the U-turn is completed, the subject orientation in all subsequent frames is the direction facing away from the acquisition device, so the final subject orientation is determined to be the direction facing away from the acquisition device. Misjudgment of the subject orientation can thus be avoided.
Step S133: determine the state of the first driving-state indicator light according to each frame of the multiple consecutive frames to be processed. In some possible implementations, for each frame, the state of the first driving-state indicator light in that frame is determined based on its feature map.
Step S134: determine the state of the first driving-state indicator light according to the states determined from the individual frames. In a specific example, suppose the vehicle has broken down and its hazard lights are flashing: if the state of the first driving-state indicator light were judged from a single earlier frame alone, a misjudgment could occur. Basing the decision on the per-frame states of multiple consecutive frames avoids such misjudgment, so the state of the first driving-state indicator light can be determined more accurately.
Step S135: determine the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light. In the embodiments of the present application, the subject orientation and the state of the first driving-state indicator light are judged from multiple consecutive frames to be processed, and the driving state of the smart driving device is predicted on that basis; this avoids misjudging the subject orientation or the indicator state and improves the accuracy of the predicted driving state.
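The per-frame-then-aggregate scheme of steps S131 to S135 leaves the aggregation rule open; a majority vote is one plausible choice. The sketch below is an assumption, not the patent's prescribed rule:

```python
# Illustrative aggregation of per-frame predictions (majority vote).
# The patent only requires that the final orientation / lamp state be
# determined from the per-frame results; majority voting is one simple choice.
from collections import Counter
from typing import Sequence

def aggregate_predictions(per_frame: Sequence[str]) -> str:
    """Return the most frequent per-frame label, favoring later frames on ties."""
    counts = Counter(per_frame)
    best = max(counts.values())
    # Walk frames from newest to oldest so a U-turn (orientation flips
    # mid-sequence) resolves to the most recent majority label.
    for label in reversed(per_frame):
        if counts[label] == best:
            return label

orientations = ["front", "rear", "rear", "rear"]  # vehicle completed a U-turn
print(aggregate_predictions(orientations))        # "rear"
```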
An embodiment of the present application provides a method for identifying the driving state of a smart driving device, implemented by a neural network. FIG. 2A is a schematic flowchart of another implementation of the method according to an embodiment of the present application, described below with reference to FIG. 2A:
Step S201: extract a feature map from the image to be processed using the neural network. In a specific example, the image to be processed is input into a residual network (ResNet) for feature extraction, yielding the feature map of the image.
Step S202: the neural network determines the subject orientation of the smart driving device according to the extracted feature map. In a specific example, the feature maps of multiple images to be processed are input into the neural network's orientation-classification branch to obtain a confidence for each candidate subject orientation, and the subject orientation whose confidence exceeds a preset confidence threshold is taken as the subject orientation of the smart driving device.
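For illustration, the confidence thresholding over an orientation head could look as follows; the threshold value, class order, and tensor shapes are assumptions of this sketch:

```python
# Illustrative confidence thresholding for the orientation head (step S202).
# Threshold, class order, and shapes are assumptions for the sketch.
import torch

def pick_orientation(logits: torch.Tensor, threshold: float = 0.5) -> str:
    """logits: shape (2,), classes assumed to be (front, rear)."""
    probs = torch.softmax(logits, dim=0)
    conf, idx = probs.max(dim=0)
    if conf.item() < threshold:
        return "unknown"  # confidence below the preset threshold
    return ("front", "rear")[idx.item()]

print(pick_orientation(torch.tensor([0.2, 2.3])))  # "rear"
```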
Step S203: in response to the subject orientation being the direction facing the acquisition device of the image to be processed, use the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state. In some possible implementations, the first branch of the neural network classifies the state of the first driving-state indicator light at the front of the smart driving device. When the subject orientation faces the acquisition device, the feature maps of multiple consecutive frames are input into the first branch to obtain a confidence for each possible state of the first driving-state indicator light, for example the confidences that the state is (left dark, right dark), (left on, right dark), or (left dark, right on). The state whose confidence exceeds a preset confidence threshold is then taken as the state of the first driving-state indicator light of the smart driving device. In a specific example, a higher confidence indicates a higher probability that the candidate state is the true state; selecting the state whose confidence exceeds the preset threshold as the target lamp state ensures the accuracy of the classification result produced by the first branch.
Step S204: in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, use the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state. In some possible implementations, the second branch of the neural network classifies the state of the first driving-state indicator light at the rear of the smart driving device. The subject orientation facing away from the acquisition device means the image presents the tail of the smart driving device, for example the tail of a vehicle, so the first driving-state indicator lights at the rear, i.e., the turn signals on the left and right sides of the vehicle's tail, can be observed in the image. The feature maps of multiple consecutive frames are input into the second branch to obtain a confidence for each possible state of the first driving-state indicator light, and the state whose confidence exceeds a preset confidence threshold is taken as the state of the first driving-state indicator light of the smart driving device.
In the embodiments of the present application, the neural network first extracts features from the image to be processed; based on the feature map, it then determines a confidence for each candidate subject orientation and each candidate state of the first driving-state indicator light, taking the high-confidence candidates as the subject orientation and the indicator state of the smart driving device; finally, the driving state of the smart driving device is identified from these high-confidence results. By subdividing the identification task into subtasks, recognizing the subject orientation and the indicator state first and then combining the two results, the difficulty of the task is reduced and the accuracy of driving-state recognition is improved.
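A compact PyTorch sketch of the structure this embodiment describes: a shared ResNet backbone, an orientation head, a head for the second indicator light, and two lamp-classification branches. The torchvision backbone, layer widths, and class counts are assumptions; the patent fixes only the overall shared-features-plus-branches layout:

```python
# Minimal two-branch network sketch in PyTorch. The resnet18 backbone, head
# widths, and class counts are illustrative assumptions; the patent specifies
# only: shared features -> orientation head + front/rear lamp branches.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DrivingStateNet(nn.Module):
    def __init__(self, num_lamp_classes: int = 4):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep everything up to (and including) global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # (B, 512, 1, 1)
        self.orientation_head = nn.Linear(512, 2)   # front / rear
        self.brake_head = nn.Linear(512, 2)         # high brake light: on / dark
        self.front_branch = nn.Linear(512, num_lamp_classes)  # front turn-signal combos
        self.rear_branch = nn.Linear(512, num_lamp_classes)   # rear turn-signal combos

    def forward(self, x: torch.Tensor):
        f = self.features(x).flatten(1)
        return {
            "orientation": self.orientation_head(f),
            "brake": self.brake_head(f),
            "front_lamps": self.front_branch(f),
            "rear_lamps": self.rear_branch(f),
        }

model = DrivingStateNet()
out = model(torch.randn(1, 3, 224, 224))
print({k: v.shape for k, v in out.items()})
```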
An embodiment of the present application provides a method for identifying the driving state of a smart driving device, implemented by a neural network. FIG. 2B is a schematic flowchart of yet another implementation of the method according to an embodiment of the present application, described below with reference to FIG. 2B:
Step S221: extract a feature map from the image to be processed using the neural network. In a specific example, the image to be processed is input into a residual network (ResNet) for feature extraction, yielding the feature map of the image.
Step S222: the neural network determines the subject orientation of the smart driving device according to the extracted feature map. In a specific example, the feature maps of multiple images to be processed are input into the neural network's orientation-classification branch to obtain a confidence for each candidate subject orientation, and the orientation whose confidence exceeds a preset confidence threshold is taken as the subject orientation of the smart driving device. As shown in FIG. 2C, image 21 presents the tail of vehicle 22, so the subject orientation of vehicle 22 in image 21 is determined to be backward, i.e., facing away from the acquisition device of the image to be processed.
Step S223: the neural network determines the state of the second driving-state indicator light according to the extracted feature map. In some possible implementations, the second driving-state indicator light may be the high-mounted brake light of the smart driving device. The feature maps of multiple consecutive frames are input into the neural network to obtain a confidence for each possible state of the second driving-state indicator light, for example the confidence that the state is on or dark; the state whose confidence exceeds a preset confidence threshold is then taken as the state of the second driving-state indicator light, which ensures the accuracy of this recognition.
Step S224: in response to the subject orientation being the direction facing the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, use the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state. In some possible implementations, when the subject orientation faces the acquisition device (i.e., the subject faces forward) and the second driving-state indicator light is dark, the feature map is input into the first branch of the neural network to obtain confidences for the possible states of the first driving-state indicator lights at the vehicle's head, and the state with the higher confidence is taken as the state of the first driving-state indicator light.
Step S225: in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, use the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state. In some possible implementations, when the subject orientation faces away from the acquisition device (i.e., the subject faces backward) and the second driving-state indicator light is dark, the feature map is input into the second branch of the neural network to obtain confidences for the possible states of the first driving-state indicator lights at the vehicle's rear, and the state with the higher confidence is taken as the state of the first driving-state indicator light.
Step S226: in response to the state of the second driving-state indicator light being on, determine that the smart driving device is in a braking state. In the embodiments of the present application, the neural network performs a detailed classification of the subject orientation of the smart driving device and the states of multiple indicator lights, which ensures the accuracy of recognizing the orientation and the lamp states and, in turn, the accuracy of the driving state identified from them.
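Ordering the checks of steps S222 to S226, inference might be routed as in the following sketch, which reuses the hypothetical DrivingStateNet above; the class-index conventions are assumptions:

```python
# Illustrative inference routing (steps S222-S226): check the high brake
# light first; only if it is dark, classify the turn signals with the branch
# that matches the subject orientation. Class indices are assumed.
import torch

@torch.no_grad()
def classify(model, image: torch.Tensor) -> dict:
    out = model(image.unsqueeze(0))                      # add batch dimension
    brake_on = out["brake"].argmax(1).item() == 0        # class 0 = "on" (assumed)
    if brake_on:
        return {"state": "braking"}                      # step S226: stop here
    front = out["orientation"].argmax(1).item() == 0     # class 0 = "front" (assumed)
    branch = "front_lamps" if front else "rear_lamps"    # steps S224 / S225
    lamp_class = out[branch].argmax(1).item()            # one of 4 signal combos
    return {"orientation": "front" if front else "rear", "lamp_class": lamp_class}
```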
In connection with the above steps, the neural network is trained through the following steps, as shown in FIG. 2D and described with reference to FIG. 2D:
Step S231: acquire sample images containing a smart driving device. In some possible implementations, the smart driving device being a vehicle is taken as an example, and multiple sample images containing a vehicle, for example sample images containing vehicle patterns, are acquired.
Step S232: determine the subject orientation of the smart driving device according to the sample image containing the smart driving device. In some possible implementations, the subject orientation is determined according to label information in the sample image that indicates the subject orientation, and the feature map is input into the branch of the neural network corresponding to that orientation to obtain the state of the first driving-state indicator light. For example, if the subject orientation is the direction facing the acquisition device of the sample image, the feature map is input into the first branch to obtain the state of the first driving-state indicator light at the front of the smart driving device, for example the states of the turn signals on the left and right sides of the vehicle's front; if the subject orientation is the direction facing away from the acquisition device of the sample image, the feature map is input into the second branch to obtain the state of the first driving-state indicator light at the rear, for example the states of the turn signals on the left and right sides of the vehicle's rear. In this way, different branches are trained for different subject orientations, further refining the classification task and ensuring the accuracy of classifying the state of the first driving-state indicator light.
Step S233: in response to the subject orientation being the direction facing the acquisition device of the sample image, use the first branch in the neural network to determine the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state.
Step S234: in response to the subject orientation being the direction facing away from the acquisition device of the sample image, use the second branch in the neural network to determine the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state.
Step S235: adjust network parameter values of the neural network according to the determined subject orientation, the labeled subject orientation, the determined state of the first driving-state indicator light, and the labeled state of the first driving-state indicator light. In some possible implementations, in response to the subject orientation being the direction facing the acquisition device of the sample image, a preset loss function for the driving state is determined from the determined and labeled states of the first driving-state indicator light at the front of the smart driving device, and this loss function is used to adjust the network parameter values of the first branch of the neural network, so that the adjusted first branch can accurately predict the state of the front first driving-state indicator light. In response to the subject orientation being the direction facing away from the acquisition device of the sample image, a preset loss function for the driving state is determined from the determined and labeled states of the first driving-state indicator light at the rear of the smart driving device, and this loss function is used to adjust the network parameter values of the second branch, so that the adjusted second branch can accurately predict the state of the rear first driving-state indicator light.
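One way to realize the branch-specific adjustment of step S235 is to route each sample's lamp loss through the branch matching its labeled orientation, so only that branch (plus the shared backbone) receives gradient for the lamp task. A hedged sketch, with the optimizer and loss weighting assumed:

```python
# Illustrative training step for step S235: per-sample branch routing.
# torch.where selects, per sample, the branch matching the labeled
# orientation, so gradients flow only into that branch for the lamp loss.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, image, orient_label, lamp_label):
    out = model(image)
    loss_orient = F.cross_entropy(out["orientation"], orient_label)
    front_mask = orient_label == 0                     # 0 = "front" (assumed)
    lamp_logits = torch.where(front_mask.unsqueeze(1),
                              out["front_lamps"], out["rear_lamps"])
    loss_lamps = F.cross_entropy(lamp_logits, lamp_label)
    loss = loss_orient + loss_lamps                    # equal weighting assumed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```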
An embodiment of the present application provides a method for identifying the driving state of a smart driving device, taking a vehicle as an example. First, a deep-learning framework is used to recognize vehicle-lamp attributes; then, by training with a large amount of data, the resulting neural network is more robust and performs well in a variety of application scenarios. In the related art, vehicle-lamp attribute recognition classifies images of all categories rather coarsely, dividing the task into brake-light recognition and turn-signal recognition. The embodiments of the present application instead subdivide the task and handle the subtasks separately: the attributes of the smart driving device are recognized first, and by training different branches, fine-grained classification of lamp attributes is achieved. In addition, lamp positions are judged from keypoints, and keypoint-visibility information is used to locate the lamps more precisely, so that lamp-attribute judgment achieves higher accuracy.
FIG. 3 is a schematic flowchart of yet another implementation of a neural network training method according to an embodiment of the present application, described below with reference to FIG. 3:
Step S301: input a sample image containing a smart driving device into the neural network to obtain the feature map of the sample image.
Step S302: input the feature map into the neural network to obtain the subject orientation and the state of the second driving-state indicator light. In some possible implementations, keypoint information of the vehicle body is used to locate the vehicle body in the feature map (the first image region occupied by the vehicle body in the sample image), and this part of the feature map is input into the neural network to obtain the subject orientation; likewise, keypoint information of the vehicle's second driving-state indicator light is used to locate that light in the feature map (the third image region occupied by it in the sample image), and this part of the feature map is input into the neural network to obtain the state of the second driving-state indicator light.
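The keypoint-guided localization could, for instance, crop the feature map to the region spanned by the labeled keypoints before classification; the keypoint format and feature stride below are assumptions of this sketch:

```python
# Illustrative keypoint-guided cropping (step S302): cut the region spanned
# by the labeled keypoints (vehicle body or lamp) out of the feature map
# before classification. Keypoint format and stride are assumptions.
import torch

def crop_by_keypoints(feature_map: torch.Tensor, keypoints: torch.Tensor,
                      stride: int = 32) -> torch.Tensor:
    """feature_map: (C, H, W); keypoints: (N, 2) image-space (x, y) points."""
    xy = (keypoints / stride).long()              # map image coords to feature grid
    x0, y0 = xy.min(dim=0).values.clamp(min=0)    # top-left of the keypoint bbox
    x1, y1 = xy.max(dim=0).values + 1             # bottom-right (exclusive)
    return feature_map[:, y0:y1, x0:x1]
```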
Step S303: determine, according to the labeled subject orientation and the labeled state of the second driving-state indicator light, the loss corresponding to the subject orientation output by the neural network and the loss corresponding to the state of the second driving-state indicator light. In some possible implementations, since there are two subject orientations, the loss corresponding to the subject orientation is a binary cross-entropy loss. The second driving-state indicator light likewise has two states, on and dark (where dark covers both the light being off and there being no such light), so the loss corresponding to its state is also a binary cross-entropy loss.
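For the two binary tasks, a binary cross-entropy over a single logit is one natural encoding (with two-logit heads, the standard cross-entropy used earlier is equivalent); the shapes and label encodings below are assumptions:

```python
# Illustrative binary cross-entropy losses for the two binary tasks of
# step S303 (orientation: front/rear; second indicator light: on/dark).
import torch
import torch.nn.functional as F

orient_logit = torch.tensor([[0.8]])   # one logit per sample (assumed head shape)
orient_target = torch.tensor([[1.0]])  # 1.0 = "rear" (assumed encoding)
loss_orient = F.binary_cross_entropy_with_logits(orient_logit, orient_target)

brake_logit = torch.tensor([[-1.2]])
brake_target = torch.tensor([[0.0]])   # 0.0 = "dark"
loss_brake = F.binary_cross_entropy_with_logits(brake_logit, brake_target)
print(loss_orient.item(), loss_brake.item())
```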
Step S304: adjust the network parameters of the neural network using the loss corresponding to the subject orientation and the loss corresponding to the state of the second driving-state indicator light.
Step S305: in response to the subject orientation being the direction facing the acquisition device of the sample image and the state of the second driving-state indicator light being dark, input the feature map into the first branch of the neural network to obtain the state of the first driving-state indicator light at the front of the vehicle. In some possible implementations, keypoint information of the front first driving-state indicator light is used to locate it in the feature map (i.e., the second image region occupied by it in the sample image), and this part of the feature map is input into the neural network to obtain its state.
Step S306: adjust the network parameters of the first branch based on the loss corresponding to the state of the front first driving-state indicator light.
Step S307: in response to the subject orientation being the direction facing away from the acquisition device of the sample image and the state of the second driving-state indicator light being dark, input the feature map into the second branch of the neural network to obtain the state of the first driving-state indicator light at the rear of the vehicle. In some possible implementations, keypoint information of the rear first driving-state indicator light is used to locate its probable position in the feature map (i.e., the second image region occupied by it in the sample image), and this part of the feature map is input into the neural network to obtain its state.
Step S308: adjust the network parameters of the second branch based on the loss corresponding to the state of the rear first driving-state indicator light. In some possible implementations, since the first driving-state indicator light has multiple possible states, such as (left turn signal on, right turn signal on), (left on, right dark), (left dark, right on), and (left dark, right dark), the loss corresponding to its state is a multi-class cross-entropy loss. Based on this loss, the network parameters, such as the weight values, of the first and second branches of the neural network are adjusted respectively, so that the adjusted branches classify the vehicle's indicator lights with higher accuracy.
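As an illustration of the four-way classification named here, with the class order assumed:

```python
# Illustrative multi-class cross-entropy over the four turn-signal
# combinations of step S308; the class order is an assumption of this sketch.
import torch
import torch.nn.functional as F

SIGNAL_CLASSES = [
    ("left_on", "right_on"),     # both lit: hazard / fault
    ("left_on", "right_dark"),   # turning left
    ("left_dark", "right_on"),   # turning right
    ("left_dark", "right_dark"), # no turn signalled
]

logits = torch.randn(8, len(SIGNAL_CLASSES))             # a batch of branch outputs
targets = torch.randint(0, len(SIGNAL_CLASSES), (8,))    # labeled combinations
loss = F.cross_entropy(logits, targets)
print(loss.item())
```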
In the embodiments of the present application, a vehicle-orientation classifier is combined with lamp-attribute classifiers, subdividing the vehicle's own attributes more finely to assist lamp-attribute recognition. The attribute recognition of tail lights and turn signals is split into single-frame lamp recognition and joint multi-frame attribute discrimination; by improving the single-frame recognition accuracy, the vehicle-attribute recognition pipeline is simplified. In addition, keypoints and their visibility information are incorporated as auxiliary cues to locate lamp positions more precisely, which makes the classification more accurate.
An embodiment of the present application provides an apparatus for identifying the driving state of a smart driving device. FIG. 4 is a schematic diagram of the composition structure of the apparatus according to an embodiment of the present application. As shown in FIG. 4, the apparatus 400 includes: a first determining module 401, configured to determine the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device; a second determining module 402, configured to determine, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device; and a third determining module 403, configured to determine the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light.
In the above apparatus, the third determining module 403 includes: a first determining sub-module, configured to, in response to the subject orientation being the direction facing the acquisition device of the image to be processed, determine the driving state of the smart driving device according to the state of the first driving-state indicator light provided at the front of the smart driving device.
In the above apparatus, the third determining module 403 includes: a second determining sub-module, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, determine the driving state of the smart driving device according to the state of the first driving-state indicator light provided at the rear of the smart driving device.
In the above apparatus, the smart driving device further includes a second driving-state indicator light, which is used to indicate whether the smart driving device is in a braking state. The apparatus further includes: a fourth determining module, configured to determine the state of the second driving-state indicator light according to the image to be processed before the state of the first driving-state indicator light included in the smart driving device is determined according to the image to be processed. The second determining module 402 includes: a third determining sub-module, configured to, in response to the state of the second driving-state indicator light being dark, determine, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device.
In the above apparatus, the apparatus further includes: a fifth determining module, configured to, after the state of the second driving-state indicator light is determined according to the image to be processed, determine that the smart driving device is in a braking state in response to the state of the second driving-state indicator light being on.
In the above apparatus, the image to be processed consists of multiple consecutive frames. The first determining module 401 includes: a fourth determining sub-module, configured to determine the subject orientation of the smart driving device according to each frame of the multiple consecutive frames; and a fifth determining sub-module, configured to determine the subject orientation of the smart driving device according to the subject orientations determined from the individual frames. The second determining module 402 includes: a sixth determining sub-module, configured to determine the state of the first driving-state indicator light according to each frame of the multiple consecutive frames; and a seventh determining sub-module, configured to determine the state of the first driving-state indicator light according to the states determined from the individual frames.
In the above apparatus, the first determining module 401 includes: an eighth determining sub-module, configured to determine a first image region occupied by the subject of the smart driving device in the image to be processed; and a ninth determining sub-module, configured to determine the subject orientation of the smart driving device according to the image in the first image region.
In the above apparatus, the second determining module 402 includes: a tenth determining sub-module, configured to determine a second image region occupied by the first driving-state indicator light of the smart driving device in the image to be processed; and an eleventh determining sub-module, configured to determine the state of the first driving-state indicator light according to the image in the second image region.
In the above apparatus, the fourth determining module includes: a twelfth determining sub-module, configured to determine a third image region occupied by the second driving-state indicator light of the smart driving device in the image to be processed; and a thirteenth determining sub-module, configured to determine the state of the second driving-state indicator light according to the image in the third image region.
In the above apparatus, the method for identifying the driving state of a smart driving device is implemented by a neural network. The first determining module includes: a first extraction sub-module, configured to use the neural network to extract a feature map from the image to be processed; and a fourteenth determining sub-module, configured to use the neural network to determine the subject orientation of the smart driving device according to the extracted feature map. The third determining module 403 includes: a fifteenth determining sub-module, configured to, in response to the subject orientation being the direction facing the acquisition device of the image to be processed, use the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state; and a sixteenth determining sub-module, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, use the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state.
In the above apparatus, the fourth determining module includes: a seventeenth determining sub-module, configured to use the neural network to determine the state of the second driving-state indicator light according to the extracted feature map; and an eighteenth determining sub-module, configured to determine that the smart driving device is in a braking state in response to the state of the second driving-state indicator light being on. The fifteenth determining sub-module includes: a first determining unit, configured to, in response to the subject orientation being the direction facing the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, use the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state. The sixteenth determining sub-module includes: a second determining unit, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, use the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state.
In the above apparatus, the apparatus further includes a training module configured to train the neural network. The training module includes: a nineteenth determining sub-module, configured to determine the subject orientation of the smart driving device according to a sample image containing the smart driving device; a twentieth determining sub-module, configured to, in response to the subject orientation being the direction facing the acquisition device of the sample image, use the first branch in the neural network to determine the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state; a twenty-first determining sub-module, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the sample image, use the second branch in the neural network to determine the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state; and a first adjustment sub-module, configured to adjust network parameter values of the neural network according to the determined subject orientation, the labeled subject orientation, the determined state of the first driving-state indicator light, and the labeled state of the first driving-state indicator light.
It should be noted that the description of the above apparatus embodiments is similar to that of the method embodiments and yields similar beneficial effects; for technical details not disclosed in the apparatus embodiments of the present application, refer to the description of the method embodiments. It should also be noted that in the embodiments of the present application, if the above method is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application further provides a computer storage medium having computer-executable instructions stored thereon; when the computer-executable instructions are executed, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented. Correspondingly, an embodiment of the present application further provides a computer device, which includes a memory and a processor; the memory stores computer-executable instructions, and when the processor runs the computer-executable instructions on the memory, the steps in the method for identifying the driving state of a smart driving device provided by the embodiments of the present application can be implemented.

Correspondingly, an embodiment of the present application provides a computer device. FIG. 5 is a schematic diagram of the composition structure of the computer device according to an embodiment of the present application. As shown in FIG. 5, the hardware entity of the computer device 500 includes: a processor 501, a communication interface 502, and a memory 503, where the processor 501 generally controls the overall operation of the computer device 500. The communication interface 502 enables the computer device to communicate with other terminals or servers via a network. The memory 503 is configured to store instructions and applications executable by the processor 501, and can also cache data to be processed or already processed by the processor 501 and by each module in the computer device 500 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by a flash memory (FLASH) or a random access memory (RAM).

The description of the above apparatus, computer device, and storage medium embodiments is similar to that of the method embodiments, with similar technical descriptions and beneficial effects; owing to space limitations, reference may be made to the descriptions of the foregoing method embodiments, which are not repeated here. For technical details not disclosed in the apparatus, computer device, and storage medium embodiments of the present application, refer to the description of the method embodiments.

It should be understood that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application; thus, the appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The above sequence numbers of the embodiments of the present application are for description only and do not represent the relative merits of the embodiments.

It should be noted that, herein, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, or indirect coupling or communication connection between devices or units, and may be electrical, mechanical, or take other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment. In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may individually serve as one unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical disks.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

  1. A method for identifying the driving state of a smart driving device, comprising:
    determining the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device;
    determining, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device; and
    determining the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light.
  2. The method according to claim 1, wherein determining the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light comprises:
    in response to the subject orientation being the direction facing the acquisition device of the image to be processed, determining the driving state of the smart driving device according to the state of the first driving-state indicator light provided at the front of the smart driving device.
  3. The method according to claim 1 or 2, wherein determining the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light comprises:
    in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, determining the driving state of the smart driving device according to the state of the first driving-state indicator light provided at the rear of the smart driving device.
  4. The method according to any one of claims 1 to 3, wherein the smart driving device further comprises a second driving-state indicator light, the second driving-state indicator light being used to indicate whether the smart driving device is in a braking state;
    before determining, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device, the method further comprises:
    determining the state of the second driving-state indicator light according to the image to be processed;
    wherein determining, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device comprises:
    in response to the state of the second driving-state indicator light being dark, determining, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device.
  5. The method according to claim 4, wherein after determining the state of the second driving-state indicator light according to the image to be processed, the method further comprises:
    in response to the state of the second driving-state indicator light being on, determining that the smart driving device is in a braking state.
  6. The method according to any one of claims 1 to 5, wherein the image to be processed comprises multiple consecutive frames to be processed;
    determining the subject orientation of the smart driving device according to the image to be processed comprises:
    determining the subject orientation of the smart driving device according to each frame of the multiple consecutive frames; and
    determining the subject orientation of the smart driving device according to the subject orientations determined from the individual frames;
    determining, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device comprises:
    determining the state of the first driving-state indicator light according to each frame of the multiple consecutive frames; and
    determining the state of the first driving-state indicator light according to the states determined from the individual frames.
  7. The method according to any one of claims 1 to 6, wherein determining the subject orientation of the smart driving device according to the image to be processed comprises:
    determining a first image region occupied by the subject of the smart driving device in the image to be processed; and
    determining the subject orientation of the smart driving device according to the image in the first image region.
  8. The method according to any one of claims 1 to 7, wherein determining, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device comprises:
    determining a second image region occupied by the first driving-state indicator light of the smart driving device in the image to be processed; and
    determining the state of the first driving-state indicator light according to the image in the second image region.
  9. The method according to any one of claims 4 to 8, wherein determining the state of the second driving-state indicator light according to the image to be processed comprises:
    determining a third image region occupied by the second driving-state indicator light of the smart driving device in the image to be processed; and
    determining the state of the second driving-state indicator light according to the image in the third image region.
  10. The method according to claim 5, wherein the method for identifying the driving state of the smart driving device is implemented by a neural network, and determining the subject orientation of the smart driving device according to the image to be processed containing the smart driving device comprises:
    extracting a feature map from the image to be processed using the neural network; and
    determining, by the neural network, the subject orientation of the smart driving device according to the extracted feature map;
    wherein determining the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light comprises:
    in response to the subject orientation being the direction facing the acquisition device of the image to be processed, using the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determining the driving state of the smart driving device according to the determined state of the first driving-state indicator light provided at the front of the smart driving device; and
    in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, using the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determining the driving state of the smart driving device according to the determined state of the first driving-state indicator light provided at the rear of the smart driving device.
  11. The method according to claim 10, wherein determining the state of the second driving-state indicator light according to the image to be processed comprises:
    determining, by the neural network, the state of the second driving-state indicator light according to the extracted feature map; and
    in response to the state of the second driving-state indicator light being on, determining that the smart driving device is in a braking state;
    wherein the step of, in response to the subject orientation being the direction facing the acquisition device of the image to be processed, using the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device and determining the driving state accordingly comprises:
    in response to the subject orientation being the direction facing the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, using the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determining the driving state of the smart driving device according to that determined state;
    and the step of, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, using the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device and determining the driving state accordingly comprises:
    in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, using the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determining the driving state of the smart driving device according to that determined state.
  12. The method according to claim 10 or 11, wherein the neural network is trained through the following steps:
    determining the subject orientation of the smart driving device according to a sample image containing the smart driving device;
    in response to the subject orientation being the direction facing the acquisition device of the sample image, using the first branch in the neural network to determine the state of the first driving-state indicator light provided at the front of the smart driving device, and determining the driving state of the smart driving device according to that determined state;
    in response to the subject orientation being the direction facing away from the acquisition device of the sample image, using the second branch in the neural network to determine the state of the first driving-state indicator light provided at the rear of the smart driving device, and determining the driving state of the smart driving device according to that determined state; and
    adjusting network parameter values of the neural network according to the determined subject orientation, the labeled subject orientation, the determined state of the first driving-state indicator light, and the labeled state of the first driving-state indicator light.
  13. An apparatus for identifying the driving state of a smart driving device, wherein the apparatus comprises:
    a first determining module, configured to determine the subject orientation of a smart driving device according to an image to be processed that contains the smart driving device;
    a second determining module, configured to determine, according to the image to be processed, the state of a first driving-state indicator light included in the smart driving device; and
    a third determining module, configured to determine the driving state of the smart driving device according to the subject orientation and the state of the first driving-state indicator light.
  14. The apparatus according to claim 13, wherein the third determining module comprises:
    a first determining sub-module, configured to, in response to the subject orientation being the direction facing the acquisition device of the image to be processed, determine the driving state of the smart driving device according to the state of the first driving-state indicator light provided at the front of the smart driving device.
  15. The apparatus according to claim 13 or 14, wherein the third determining module comprises:
    a second determining sub-module, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, determine the driving state of the smart driving device according to the state of the first driving-state indicator light provided at the rear of the smart driving device.
  16. The apparatus according to any one of claims 13 to 15, wherein the smart driving device further comprises a second driving-state indicator light, the second driving-state indicator light being used to indicate whether the smart driving device is in a braking state;
    the apparatus further comprises:
    a fourth determining module, configured to determine the state of the second driving-state indicator light according to the image to be processed before the state of the first driving-state indicator light included in the smart driving device is determined according to the image to be processed;
    the second determining module comprises:
    a third determining sub-module, configured to, in response to the state of the second driving-state indicator light being dark, determine, according to the image to be processed, the state of the first driving-state indicator light included in the smart driving device.
  17. The apparatus according to claim 16, wherein the apparatus further comprises:
    a fifth determining module, configured to, after the state of the second driving-state indicator light is determined according to the image to be processed, determine that the smart driving device is in a braking state in response to the state of the second driving-state indicator light being on.
  18. The apparatus according to any one of claims 13 to 17, wherein the image to be processed comprises multiple consecutive frames to be processed;
    the first determining module comprises:
    a fourth determining sub-module, configured to determine the subject orientation of the smart driving device according to each frame of the multiple consecutive frames;
    a fifth determining sub-module, configured to determine the subject orientation of the smart driving device according to the subject orientations determined from the individual frames;
    the second determining module comprises:
    a sixth determining sub-module, configured to determine the state of the first driving-state indicator light according to each frame of the multiple consecutive frames;
    a seventh determining sub-module, configured to determine the state of the first driving-state indicator light according to the states determined from the individual frames.
  19. The apparatus according to any one of claims 13 to 18, wherein the first determining module comprises:
    an eighth determining sub-module, configured to determine a first image region occupied by the subject of the smart driving device in the image to be processed;
    a ninth determining sub-module, configured to determine the subject orientation of the smart driving device according to the image in the first image region.
  20. The apparatus according to any one of claims 13 to 19, wherein the second determining module comprises:
    a tenth determining sub-module, configured to determine a second image region occupied by the first driving-state indicator light of the smart driving device in the image to be processed;
    an eleventh determining sub-module, configured to determine the state of the first driving-state indicator light according to the image in the second image region.
  21. The apparatus according to any one of claims 16 to 20, wherein the fourth determining module comprises:
    a twelfth determining sub-module, configured to determine a third image region occupied by the second driving-state indicator light of the smart driving device in the image to be processed;
    a thirteenth determining sub-module, configured to determine the state of the second driving-state indicator light according to the image in the third image region.
  22. The apparatus according to claim 17, wherein the method for identifying the driving state of the smart driving device is implemented by a neural network; the first determining module comprises:
    a first extraction sub-module, configured to use the neural network to extract a feature map from the image to be processed;
    a fourteenth determining sub-module, configured to use the neural network to determine the subject orientation of the smart driving device according to the extracted feature map;
    the third determining module comprises:
    a fifteenth determining sub-module, configured to, in response to the subject orientation being the direction facing the acquisition device of the image to be processed, use the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving-state indicator light provided at the front of the smart driving device;
    a sixteenth determining sub-module, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed, use the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to the determined state of the first driving-state indicator light provided at the rear of the smart driving device.
  23. The apparatus according to claim 22, wherein the fourth determining module comprises:
    a seventeenth determining sub-module, configured to use the neural network to determine the state of the second driving-state indicator light according to the extracted feature map;
    an eighteenth determining sub-module, configured to determine that the smart driving device is in a braking state in response to the state of the second driving-state indicator light being on;
    the fifteenth determining sub-module comprises:
    a first determining unit, configured to, in response to the subject orientation being the direction facing the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, use the first branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state;
    the sixteenth determining sub-module comprises:
    a second determining unit, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the image to be processed and the state of the second driving-state indicator light being dark, use the second branch in the neural network to determine, according to the feature map, the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state.
  24. The apparatus according to claim 22 or 23, wherein the apparatus further comprises a training module configured to train the neural network, the training module comprising:
    a nineteenth determining sub-module, configured to determine the subject orientation of the smart driving device according to a sample image containing the smart driving device;
    a twentieth determining sub-module, configured to, in response to the subject orientation being the direction facing the acquisition device of the sample image, use the first branch in the neural network to determine the state of the first driving-state indicator light provided at the front of the smart driving device, and determine the driving state of the smart driving device according to that determined state;
    a twenty-first determining sub-module, configured to, in response to the subject orientation being the direction facing away from the acquisition device of the sample image, use the second branch in the neural network to determine the state of the first driving-state indicator light provided at the rear of the smart driving device, and determine the driving state of the smart driving device according to that determined state;
    a first adjustment sub-module, configured to adjust network parameter values of the neural network according to the determined subject orientation, the labeled subject orientation, the determined state of the first driving-state indicator light, and the labeled state of the first driving-state indicator light.
  25. A computer storage medium having computer-executable instructions stored thereon, wherein, when the computer-executable instructions are executed, the method steps of any one of claims 1 to 12 can be implemented.
  26. A computer device, comprising a memory and a processor, wherein the memory stores computer-executable instructions, and the processor, when running the computer-executable instructions on the memory, can implement the method steps of any one of claims 1 to 12.
  27. A computer program product, comprising computer-executable instructions, wherein, when the computer-executable instructions are executed, the method steps of any one of claims 1 to 12 can be implemented.
PCT/CN2019/121057 2019-07-31 2019-11-26 Method and apparatus for identifying travelling state of intelligent driving device, and device WO2021017341A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020207036574A KR20210015861A (ko) 2019-07-31 2019-11-26 스마트 주행 기기의 주행 상태 인식 방법 및 장치, 기기
JP2020567963A JP7074896B2 (ja) 2019-07-31 2019-11-26 スマート運転機器の走行状態を認識する方法及び装置、機器
SG11202013001RA SG11202013001RA (en) 2019-07-31 2019-11-26 Method and apparatus for identifying travelling state of intelligent driving device, and device
US17/124,940 US20210103746A1 (en) 2019-07-31 2020-12-17 Method and apparatus for identifying travelling state of intelligent driving device, and device
JP2022077972A JP2022105569A (ja) 2019-07-31 2022-05-11 スマート運転機器の走行状態を認識する方法及び装置、機器

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910702893.7 2019-07-31
CN201910702893.7A CN112307833A (zh) 2019-07-31 2019-07-31 识别智能行驶设备的行驶状态的方法及装置、设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/124,940 Continuation US20210103746A1 (en) 2019-07-31 2020-12-17 Method and apparatus for identifying travelling state of intelligent driving device, and device

Publications (1)

Publication Number Publication Date
WO2021017341A1 true WO2021017341A1 (zh) 2021-02-04

Family

ID=74229679

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121057 WO2021017341A1 (zh) 2019-07-31 2019-11-26 识别智能行驶设备的行驶状态的方法及装置、设备

Country Status (6)

Country Link
US (1) US20210103746A1 (zh)
JP (2) JP7074896B2 (zh)
KR (1) KR20210015861A (zh)
CN (1) CN112307833A (zh)
SG (1) SG11202013001RA (zh)
WO (1) WO2021017341A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7115502B2 (ja) 2020-03-23 2022-08-09 トヨタ自動車株式会社 物体状態識別装置、物体状態識別方法及び物体状態識別用コンピュータプログラムならびに制御装置
JP7388971B2 (ja) 2020-04-06 2023-11-29 トヨタ自動車株式会社 車両制御装置、車両制御方法及び車両制御用コンピュータプログラム
JP7359735B2 (ja) * 2020-04-06 2023-10-11 トヨタ自動車株式会社 物体状態識別装置、物体状態識別方法及び物体状態識別用コンピュータプログラムならびに制御装置
CN114519848A (zh) * 2022-02-09 2022-05-20 商汤集团有限公司 一种运动意图确定方法、装置、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975915A (zh) * 2016-04-28 2016-09-28 大连理工大学 一种基于多任务卷积神经网络的前方车辆参数识别方法
CN106094809A (zh) * 2015-04-30 2016-11-09 Lg电子株式会社 车辆驾驶辅助装置
CN106874858A (zh) * 2017-01-19 2017-06-20 博康智能信息技术有限公司北京海淀分公司 一种车辆信息识别方法及装置和一种车辆
CN109345512A (zh) * 2018-09-12 2019-02-15 百度在线网络技术(北京)有限公司 汽车图像的处理方法、装置及可读存储介质

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4151890B2 (ja) * 2002-11-22 2008-09-17 富士重工業株式会社 車両監視装置および車両監視方法
JP2004341801A (ja) * 2003-05-15 2004-12-02 Nissan Motor Co Ltd 追跡車両ランプ検出システムおよび追跡車両ランプ検出方法
JP4830621B2 (ja) * 2006-05-12 2011-12-07 日産自動車株式会社 合流支援装置及び合流支援方法
JP2010249768A (ja) * 2009-04-20 2010-11-04 Toyota Motor Corp 車両検出装置
JP5499011B2 (ja) * 2011-11-17 2014-05-21 富士重工業株式会社 車外環境認識装置および車外環境認識方法
CN102897086B (zh) * 2012-10-12 2017-04-12 福尔达车联网(深圳)有限公司 一种后车行驶信息检测及提示方法及系统
JP6354356B2 (ja) * 2014-06-10 2018-07-11 株式会社デンソー 前方状況判定装置
JP6335037B2 (ja) * 2014-06-19 2018-05-30 株式会社Subaru 車外環境認識装置
US9701239B2 (en) * 2015-11-04 2017-07-11 Zoox, Inc. System of configuring active lighting to indicate directionality of an autonomous vehicle
JP6649178B2 (ja) * 2016-05-24 2020-02-19 株式会社東芝 情報処理装置、および、情報処理方法
US10248874B2 (en) * 2016-11-22 2019-04-02 Ford Global Technologies, Llc Brake light detection
US10614326B2 (en) * 2017-03-06 2020-04-07 Honda Motor Co., Ltd. System and method for vehicle control based on object and color detection
US10061322B1 (en) * 2017-04-06 2018-08-28 GM Global Technology Operations LLC Systems and methods for determining the lighting state of a vehicle
CN107316010A (zh) * 2017-06-13 2017-11-03 武汉理工大学 一种识别前方车辆尾灯及判断其状态的方法
CN108229468B (zh) * 2017-06-28 2020-02-21 北京市商汤科技开发有限公司 车辆外观特征识别及车辆检索方法、装置、存储介质、电子设备
US10474908B2 (en) * 2017-07-06 2019-11-12 GM Global Technology Operations LLC Unified deep convolutional neural net for free-space estimation, object detection and object pose estimation
US10691962B2 (en) * 2017-09-22 2020-06-23 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for rear signal identification using machine learning
CN108357418B (zh) * 2018-01-26 2021-04-02 河北科技大学 一种基于尾灯识别的前车驾驶意图分析方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106094809A (zh) * 2015-04-30 2016-11-09 Lg电子株式会社 车辆驾驶辅助装置
CN105975915A (zh) * 2016-04-28 2016-09-28 大连理工大学 一种基于多任务卷积神经网络的前方车辆参数识别方法
CN106874858A (zh) * 2017-01-19 2017-06-20 博康智能信息技术有限公司北京海淀分公司 一种车辆信息识别方法及装置和一种车辆
CN109345512A (zh) * 2018-09-12 2019-02-15 百度在线网络技术(北京)有限公司 汽车图像的处理方法、装置及可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUO, WEI: "Research on Driving Intention of Preceding Vehicle Based on Machine Vision", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, 1 December 2018 (2018-12-01), pages 1 - 63, XP055777603, [retrieved on 20210218] *

Also Published As

Publication number Publication date
CN112307833A (zh) 2021-02-02
US20210103746A1 (en) 2021-04-08
SG11202013001RA (en) 2021-03-30
KR20210015861A (ko) 2021-02-10
JP2021534472A (ja) 2021-12-09
JP2022105569A (ja) 2022-07-14
JP7074896B2 (ja) 2022-05-24

Similar Documents

Publication Publication Date Title
US11694430B2 (en) Brake light detection
WO2021017341A1 (zh) 识别智能行驶设备的行驶状态的方法及装置、设备
CN109460699B (zh) 一种基于深度学习的驾驶员安全带佩戴识别方法
WO2020098004A1 (zh) 车道通行状态提醒方法及设备
WO2020253965A1 (en) Control device, system and method for determining perceptual load of a visual and dynamic driving scene in real time
KR20210080459A (ko) 차선 검출방법, 장치, 전자장치 및 가독 저장 매체
WO2019047597A1 (zh) 一种识别光照驾驶场景的方法和装置
CN113022578B (zh) 基于车辆运动信息乘客提醒方法、系统、车辆及存储介质
CN112200142A (zh) 一种识别车道线的方法、装置、设备及存储介质
CN114049677A (zh) 基于驾驶员情绪指数的车辆adas控制方法及系统
US11157754B2 (en) Road marking determining apparatus for automated driving
CN113989772A (zh) 一种交通灯检测方法、装置、车辆和可读存储介质
WO2023151241A1 (zh) 一种运动意图确定方法、装置、设备及存储介质
CN115631482B (zh) 驾驶感知信息采集方法、装置、电子设备和可读介质
CN114419603A (zh) 一种自动驾驶车辆控制方法、系统和自动驾驶车辆
Nine et al. Traffic Light and Back-light Recognition using Deep Learning and Image Processing with Raspberry Pi
CN114842432A (zh) 一种基于深度学习汽车灯光控制方法及系统
US20240043027A1 (en) Adaptive driving style
CN113428176B (zh) 无人车驾驶策略的调整方法、装置、设备和存储介质
KR102485099B1 (ko) 메타 데이터를 이용한 데이터 정제 방법 및 이를 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램
US20230391366A1 (en) System and method for detecting a perceived level of driver discomfort in an automated vehicle
CN117984894A (zh) 驾驶预警方法、装置、设备和存储介质
CN115187951A (zh) 信号灯识别方法、装置、电子设备及存储介质
CN117372989A (zh) 一种交通指示灯识别方法及相关装置
CN117830718A (zh) 基于深度学习的车辆颜色识别方法、系统、介质及设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020567963

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20207036574

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939916

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939916

Country of ref document: EP

Kind code of ref document: A1