WO2019047656A1 - Method and device for controlling an unmanned vehicle - Google Patents

Method and device for controlling an unmanned vehicle

Info

Publication number
WO2019047656A1
WO2019047656A1 · PCT/CN2018/099170 (CN2018099170W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene type
feature extraction
model
feature vector
Prior art date
Application number
PCT/CN2018/099170
Other languages
English (en)
French (fr)
Inventor
唐坤
郁浩
闫泳杉
郑超
张云飞
姜雨
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司 (Baidu Online Network Technology (Beijing) Co., Ltd.)
Publication of WO2019047656A1 publication Critical patent/WO2019047656A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition

Definitions

  • the present application relates to the technical field of motor vehicles, and in particular to the technical field of unmanned vehicles, and more particularly to a method and apparatus for controlling an unmanned vehicle.
  • unmanned vehicles controlled by automatic control systems can bring convenience to people's travel and improve people's quality of life.
  • an embodiment of the present application provides a method for controlling an unmanned vehicle, the method including: acquiring a to-be-recognized environment image of an unmanned vehicle; importing the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and selecting and executing a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
  • an embodiment of the present application provides an apparatus for controlling an unmanned vehicle, the apparatus including: an acquiring unit configured to acquire a to-be-recognized environment image of an unmanned vehicle; a determining unit configured to import the to-be-recognized environment image into a scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and an execution unit configured to select and execute a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
  • an embodiment of the present application provides an unmanned vehicle, including: one or more processors; an image collection device configured to collect a to-be-recognized image; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
  • an embodiment of the present application provides a computer readable storage medium having a computer program stored thereon, the program, when executed by a processor, implementing the method of the first aspect.
  • the method and apparatus for controlling an unmanned vehicle provided by the embodiments of the present application process the to-be-recognized environment image with a pre-established scene recognition model to quickly obtain the current scene type of the unmanned vehicle, and then quickly select and execute a control instruction according to the association between scene types and control instructions, which improves the control efficiency of unmanned vehicles. A minimal control-loop sketch follows.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flow chart of one embodiment of a method for controlling an unmanned vehicle in accordance with the present application
  • FIG. 3 is a schematic diagram of an application scenario of a method for controlling an unmanned vehicle according to the present application
  • FIG. 4 is a flow chart of still another embodiment of a method for controlling an unmanned vehicle in accordance with the present application.
  • FIG. 5 is a schematic structural view of an embodiment of an apparatus for controlling an unmanned vehicle according to the present application.
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device of an embodiment of the present application.
  • FIG. 1 illustrates an exemplary system architecture 100 of an embodiment of a method for controlling an unmanned vehicle, or of a device for controlling an unmanned vehicle, to which the present application may be applied.
  • system architecture 100 can include driverless vehicle 101.
  • a driving control device 1011, a network 1012, and an image capturing device 1013 may be mounted on the driverless vehicle 101.
  • Network 1012 is used to provide a medium for the communication link between driving control device 1011 and image acquisition device 1013.
  • Network 1012 can include various types of connections, such as wired, wireless communication links, fiber optic cables, and the like.
  • the driving control device 1011 (also known as the on-board brain) is responsible for the intelligent control of the unmanned vehicle.
  • the driving control device 1011 may be a separately arranged controller, such as a programmable logic controller (PLC), a single-chip microcomputer, or an industrial control computer; it may also be a device composed of other electronic components that have input/output ports and arithmetic control functions; it may further be a computer device installed with a vehicle driving control application.
  • the method for controlling an unmanned vehicle provided by the embodiments of the present application is generally performed by the driving control device 1011. Accordingly, the device for controlling the unmanned vehicle is generally disposed in the driving control device 1011.
  • the numbers of driving control devices and image capture devices in FIG. 1 are merely illustrative. There may be any number of driving control devices and image capture devices, depending on the needs of the implementation. It should be noted that the image capture device may also be omitted from the system architecture.
  • a flow 200 of one embodiment of a method for controlling an unmanned vehicle in accordance with the present application is illustrated.
  • the above method for controlling an unmanned vehicle includes the following steps:
  • Step 201: Acquire a to-be-recognized environment image of the unmanned vehicle.
  • an electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling an unmanned vehicle runs can acquire the to-be-recognized environment image of the unmanned vehicle.
  • the electronic device may acquire an environment image of the unmanned vehicle collected by the image capturing device from the image capturing device in real time through a wired connection manner or a wireless connection manner.
  • the image capture device can be a camera, a video camera, or the like. It should be noted that the image capture device may be disposed on the unmanned vehicle, or may be disposed elsewhere; for example, a roadside image capture device may be installed beside the road, and the electronic device may acquire the to-be-recognized environment image from that roadside device.
  • the image to be recognized may be an image of an environment surrounding the unmanned vehicle.
  • the image capture device may transmit the image to be recognized to the electronic device in the form of a single frame image.
  • Step 202 Import the image to be identified into the scene recognition model to obtain a scene type corresponding to the image to be identified.
  • an electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling an unmanned vehicle runs can import the to-be-recognized environment image into the scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image.
  • the scene recognition model is used to represent a correspondence between an environment image to be recognized and a scene type.
  • the scenes may be common scenes or uncommon scenes.
  • a common scenario may be encountering a traffic light, encountering a traffic intersection, and the like.
  • an uncommon scene may be a pedestrian crossing the road, a rear-end collision with the vehicle ahead, or the like.
  • the scene type may be various forms of identification information, such as a scene name, a scene number, and the like.
  • the scene recognition model may be stored locally on the electronic device. It should be noted that the scene recognition model may also be established by another electronic device.
  • the scene recognition model in step 202 may be obtained as follows: a training set is acquired that includes training environment images set in association with scene types; using this training set, an initial convolutional neural network or recurrent neural network is trained to obtain the scene recognition model. A hedged training sketch follows.
  • Step 203: Select and execute a control instruction according to the preset association between scene types and control instructions, to control the unmanned vehicle.
  • the electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling the driverless vehicle runs may select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.
  • multiple scene types and multiple control instructions may be pre-stored in the electronic device, together with association information indicating which control instructions are associated with which scene types.
  • a scene type may be associated with a single control instruction; for example, for the scene type of a pedestrian crossing the road, the associated control instruction may command the unmanned vehicle to brake urgently.
  • the scene type may be associated with a plurality of control instructions.
  • in that case, the control instruction may be selected according to preset control strategy information corresponding to that scene.
  • for example, in the scene of encountering a traffic light, the control strategy information may indicate "stop on red, go on green": if a red light is currently encountered, an instruction indicating braking is selected; if a green light is currently encountered, an instruction indicating driving is selected. A minimal selection sketch follows.
  • FIG. 3 is a schematic diagram of an application scenario of a method for controlling an unmanned vehicle according to the present embodiment.
  • the unmanned vehicle 301 is driving on the road when the pedestrian 302 suddenly crosses the road.
  • the camera of the driverless vehicle can capture to-be-recognized environment images, and an image captured by the camera may include the hurrying pedestrian.
  • the camera can transmit the captured to-be-recognized environment image 303 to the driving control device 304 of the driverless vehicle.
  • for ease of illustration, the driving control device is shown twice in FIG. 3.
  • the driving control device 304 can acquire the to-be-identified environment image 303 of the driverless vehicle.
  • the driving control device may import the image to be recognized into the scene recognition model to obtain the scene type 305 corresponding to the environment image to be identified.
  • the driving control device can select and execute the control instruction 306 according to the preset association between scene types and control instructions to control the unmanned vehicle, for example, commanding the unmanned vehicle to brake urgently.
  • the method provided by the foregoing embodiment of the present application processes the to-be-recognized environment image with the pre-established scene recognition model to quickly obtain the current scene type of the unmanned vehicle, and then quickly selects and executes a control instruction according to the association between scene types and control instructions, improving the control efficiency of unmanned vehicles.
  • the process 400 of the method for controlling an unmanned vehicle includes the following steps:
  • Step 401: Acquire a to-be-recognized environment image of the unmanned vehicle.
  • an electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling an unmanned vehicle runs can acquire the to-be-recognized environment image of the driverless vehicle.
  • Step 402 Import the acquired environment image to be identified into the pre-trained first feature extraction model to obtain a feature vector to be identified corresponding to the environment image to be identified.
  • the electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling the driverless vehicle runs can import the acquired to-be-recognized environment image into the pre-trained first feature extraction model to obtain the to-be-recognized feature vector corresponding to the to-be-recognized environment image.
  • the first feature extraction model is used to represent a correspondence between the environment image to be identified and the feature vector.
  • the first feature extraction model may be established by the above electronic device or by another electronic device. If established by another electronic device, it may be sent to the above electronic device after being established.
  • Step 403 Acquire at least two reference feature vectors.
  • an electronic device (such as the driving control device shown in FIG. 1) on which the method for controlling an unmanned vehicle operates may acquire at least two reference feature vectors.
  • the reference feature vectors are set in association with scene types.
  • the reference feature vector may be previously stored locally by the electronic device. It should be noted that the reference feature vector may be determined by other electronic devices and sent to the electronic device.
  • the reference feature vectors among the at least two reference feature vectors may be obtained from another electronic device, or may be entered manually by a technician.
  • alternatively, the reference feature vectors among the at least two reference feature vectors may be obtained through the following steps: at least two reference environment images are acquired, the reference environment images being set in association with scene types; each of the at least two reference environment images is imported into a pre-trained second feature extraction model to obtain a reference feature vector corresponding to that reference environment image, wherein the second feature extraction model is used to characterize the correspondence between reference environment images and reference feature vectors.
  • the second feature extraction model may be the same as or different from the first feature extraction model.
  • Step 404: Determine the similarity between the feature vector and each of the reference feature vectors.
  • an electronic device (such as the driving control device shown in FIG. 1) on which the method for controlling an unmanned vehicle operates may determine the similarity of the above-described feature vector to each of the reference feature vectors.
  • Step 405 Determine, according to the determined similarity, a scene type corresponding to the environment image.
  • an electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling an unmanned vehicle runs may determine the scene type corresponding to the environment image according to the determined similarities.
  • the maximum similarity may be selected, and the scene type associated with the reference feature vector corresponding to the maximum similarity is determined as the scene type associated with the environment image.
  • the number of distinct scene types among the reference feature vectors may be smaller than the number of reference feature vectors; that is, there may be three reference feature vectors of which two share the same scene type.
  • for reference feature vectors sharing a scene type, a weighted average of their similarities to the to-be-recognized feature vector may be computed, and the weighted-average result taken as the probability that the to-be-recognized image belongs to that scene type; the scene type with the highest probability is then selected as the scene type of the to-be-recognized image. A sketch of this computation follows.
  • Step 406: Select and execute a control instruction according to the preset association between scene types and control instructions, to control the unmanned vehicle.
  • the electronic device (for example, the driving control device shown in FIG. 1) on which the method for controlling the driverless vehicle runs may select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.
  • compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for controlling an unmanned vehicle in this embodiment highlights the step of determining the scene type using the similarity between the feature vector of the environment image and the reference feature vectors.
  • the solution described in this embodiment thus introduces reference samples to determine the scene type, which improves the accuracy of scene recognition and, in turn, the efficiency of controlling the driverless car.
  • the second feature extraction model may be established as follows: an initial long short-term memory network model and a training set are acquired, the training set including training environment images set in association with scene types; using the training set, the initial long short-term memory network model is trained to obtain the second feature extraction model.
  • the second feature extraction model may likewise be established by the above electronic device or by another electronic device; if established by another electronic device, it may be sent to the above electronic device after being established.
  • a long short-term memory (LSTM) network is a type of recurrent neural network. LSTMs are suited to processing and predicting important events separated by very long intervals and delays in a time series.
  • in a scene such as a pedestrian crossing the road, a general model may need to process a sequence of images to determine the scene; if the training set consists of assorted individual pictures, a general model may recognize the scene poorly.
  • the advantages of a long short-term memory network therefore show in the practical context of controlling unmanned vehicles: the network is established on a test set in which a single scene type may correspond to multiple images. Because the second feature extraction model is built on a long short-term memory network, when extracting feature vectors it can, from images of pedestrians at different stages of crossing the road, preferentially extract the principal features of a pedestrian crossing the road; when extracting reference feature vectors from the at least two reference environment images, it can focus on extracting these preferred principal features. A sketch of such an extractor follows.
  • the first feature extraction model may be established as follows: a test set is acquired, the test set including test environment images set in association with scene types; each test environment image is imported into the second feature extraction model to obtain a second image feature vector of the test environment image; and an initial long short-term memory network model is trained using the test set and the obtained second image feature vectors to obtain the first feature extraction model.
  • in this way, the trained second feature extraction model is introduced into the first feature extraction model: because the second feature extraction model focuses on extracting the principal features, the feature extraction bias of the first feature extraction model is influenced by the second, and more accurate feature vectors characterizing the scene type can be obtained when extracting features from the to-be-recognized image.
  • training the initial long short-term memory network model using the test set and the obtained second image feature vectors to obtain the first feature extraction model may include: importing a test environment image into the first feature extraction model to obtain a first image feature vector of the test environment image; determining a model error of the first feature extraction model according to the obtained first image feature vector and the scene type associated with the test environment image; and updating the second feature extraction model and the first feature extraction model according to the model error.
  • because the training process of the first feature extraction model introduces the second feature extraction model, the first and second feature extraction models can be updated simultaneously; after the second feature extraction model is updated, more accurate reference feature vectors can be obtained, and a model trained in this way can extract features accurately even with few training samples.
  • in practice, samples of some scenes may be very scarce, for example, pedestrians crossing the road; introducing this training approach into the unmanned vehicle field can therefore solve the problem of recognizing minority scenes for unmanned vehicles. A sketch of the joint update follows.
  • as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for controlling an unmanned vehicle; this apparatus embodiment corresponds to the method embodiment illustrated in FIG. 2.
  • the device can be specifically applied to various electronic devices.
  • the apparatus 500 for controlling an unmanned vehicle described above in this embodiment includes an acquisition unit 501, a determination unit 502, and an execution unit 503.
  • the acquiring unit is configured to acquire a to-be-recognized environment image of an unmanned vehicle; the determining unit is configured to import the to-be-recognized environment image into the scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and the execution unit is configured to select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.
  • for the specific processing of the acquiring unit 501, the determining unit 502, and the execution unit 503 and the technical effects thereof, reference may be made to the descriptions of step 201, step 202, and step 203 in the embodiment corresponding to FIG. 2, which are not repeated here.
  • in some optional implementations, the determining unit is further configured to: import the acquired to-be-recognized environment image into the pre-trained first feature extraction model to obtain the to-be-recognized feature vector corresponding to the to-be-recognized environment image, wherein the first feature extraction model is used to characterize the correspondence between to-be-recognized environment images and feature vectors; acquire at least two reference feature vectors, wherein the reference feature vectors are set in association with scene types; determine the similarity between the feature vector and each reference feature vector; and determine, according to the determined similarities, the scene type corresponding to the to-be-recognized environment image.
  • the reference feature vectors among the at least two reference feature vectors are obtained through the following steps: at least two reference environment images are acquired, the reference environment images being set in association with scene types; each of the at least two reference environment images is imported into the pre-trained second feature extraction model to obtain a reference feature vector corresponding to that reference environment image, wherein the second feature extraction model is used to characterize the correspondence between reference environment images and reference feature vectors.
  • the second feature extraction model is obtained as follows: an initial long short-term memory network model and a training set are acquired, the training set including training environment images set in association with scene types; using the training set, the initial long short-term memory network model is trained to obtain the second feature extraction model.
  • the first feature extraction model is obtained as follows: a test set is acquired, the test set including test environment images set in association with scene types; each test environment image is imported into the second feature extraction model to obtain a second image feature vector of the test environment image; and an initial long short-term memory network model is trained using the test set and the second image feature vectors to obtain the first feature extraction model.
  • training the initial long short-term memory network model using the test set and the second image feature vectors to obtain the first feature extraction model includes: importing a test environment image into the first feature extraction model to obtain a first image feature vector of the test environment image; determining a model error of the first feature extraction model according to the first image feature vector and the scene type associated with the test environment image; and updating the second feature extraction model and the first feature extraction model according to the model error.
  • FIG. 6 a block diagram of a computer system 600 suitable for use in implementing the electronic device of the embodiments of the present application is shown.
  • the electronic device shown in FIG. 6 is merely an example, and should not impose any limitation on the function and scope of use of the embodiments of the present application.
  • the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from the storage portion 606 into a random access memory (RAM) 603.
  • in the RAM 603, various programs and data required for the operation of the system 600 are also stored.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O, Input/Output) interface 605 is also coupled to bus 604.
  • the following components are connected to the I/O interface 605: a storage portion 606 including a hard disk or the like; and a communication portion 607 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like.
  • the communication section 607 performs communication processing via a network such as the Internet.
  • a drive 608 is also coupled to the I/O interface 605 as needed.
  • a removable medium 609 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like, is mounted on the drive 608 as needed so that a computer program read therefrom is installed into the storage portion 606 as needed.
  • an embodiment of the present disclosure includes a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from the network via communication portion 607, and/or installed from removable media 609.
  • when the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are performed.
  • the computer readable medium described above may be a computer readable signal medium or a computer readable storage medium or any combination of the two.
  • the computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program, which can be used by or in connection with an instruction execution system, apparatus or device.
  • a computer readable signal medium may include a data signal that is propagated in the baseband or as part of a carrier, carrying computer readable program code. Such propagated data signals can take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer readable signal medium can also be any computer readable medium other than a computer readable storage medium, which can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium can be transmitted by any suitable medium, including but not limited to wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
  • each block of the flowcharts or block diagrams can represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • it should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented by software or by hardware.
  • the described unit may also be provided in the processor, for example, as a processor including an acquisition unit, a determination unit, and an execution unit.
  • the names of these units do not constitute a limitation on the unit itself in some cases, for example, the acquisition unit may also be described as "a unit for acquiring an image of an environment to be recognized of an unmanned vehicle".
  • the present application also provides a computer readable medium, which may be included in the apparatus described in the above embodiments, or may be separately present and not incorporated into the apparatus.
  • the computer readable medium carries one or more programs which, when executed by the device, cause the device to: acquire a to-be-recognized environment image of the driverless vehicle; import the to-be-recognized environment image into the scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application disclose a method and device for controlling an unmanned vehicle. A specific implementation of the method comprises: acquiring a to-be-recognized environment image of an unmanned vehicle; importing the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and selecting and executing a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle. This implementation improves the control efficiency of unmanned vehicles.

Description

Method and device for controlling an unmanned vehicle
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims priority to Chinese Patent Application No. 201710792595.2, filed on September 5, 2017 by the applicant 百度在线网络技术(北京)有限公司 (Baidu Online Network Technology (Beijing) Co., Ltd.) under the title "Method and Device for Controlling an Unmanned Vehicle", the entirety of which is incorporated into the present application by reference.
TECHNICAL FIELD
The present application relates to the field of motor vehicle technology, specifically to the field of unmanned vehicle technology, and in particular to a method and device for controlling an unmanned vehicle.
BACKGROUND
With the development and progress of science and technology, unmanned vehicles controlled by automatic control systems can bring convenience to people's travel and improve people's quality of life.
However, existing ways of controlling unmanned vehicles generally suffer from low control efficiency.
SUMMARY
An objective of the embodiments of the present application is to provide an improved method and device for controlling an unmanned vehicle, so as to solve the technical problems mentioned in the Background section above.
In a first aspect, an embodiment of the present application provides a method for controlling an unmanned vehicle, the method comprising: acquiring a to-be-recognized environment image of an unmanned vehicle; importing the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and selecting and executing a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
In a second aspect, an embodiment of the present application provides a device for controlling an unmanned vehicle, the device comprising: an acquiring unit configured to acquire a to-be-recognized environment image of an unmanned vehicle; a determining unit configured to import the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and an executing unit configured to select and execute a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
In a third aspect, an embodiment of the present application provides an unmanned vehicle, comprising: one or more processors; an image capture device configured to capture a to-be-recognized image; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method of the first aspect.
With the method and device for controlling an unmanned vehicle provided by the embodiments of the present application, the current scene type of the unmanned vehicle can be obtained quickly by processing the to-be-recognized environment image with a pre-established scene recognition model, and a control instruction can then be quickly selected and executed according to the association between scene types and control instructions, which improves the control efficiency of unmanned vehicles.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features, objectives, and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
FIG. 2 is a flowchart of an embodiment of a method for controlling an unmanned vehicle according to the present application;
FIG. 3 is a schematic diagram of an application scenario of the method for controlling an unmanned vehicle according to the present application;
FIG. 4 is a flowchart of another embodiment of the method for controlling an unmanned vehicle according to the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a device for controlling an unmanned vehicle according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device of the embodiments of the present application.
DETAILED DESCRIPTION
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely serve to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the method for controlling an unmanned vehicle or the device for controlling an unmanned vehicle of the present application may be applied.
As shown in FIG. 1, the system architecture 100 may include an unmanned vehicle 101. A driving control device 1011, a network 1012, and an image capture device 1013 may be installed on the unmanned vehicle 101. The network 1012 serves as the medium providing a communication link between the driving control device 1011 and the image capture device 1013. The network 1012 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
The driving control device 1011 (also known as the on-board brain) is responsible for the intelligent control of the unmanned vehicle. The driving control device 1011 may be a separately arranged controller, such as a programmable logic controller (PLC), a single-chip microcomputer, or an industrial control computer; it may also be a device composed of other electronic components that have input/output ports and arithmetic control functions; it may further be a computer device installed with a vehicle driving control application.
It should be noted that the method for controlling an unmanned vehicle provided by the embodiments of the present application is generally executed by the driving control device 1011; accordingly, the device for controlling an unmanned vehicle is generally arranged in the driving control device 1011.
It should be understood that the numbers of driving control devices and image capture devices in FIG. 1 are merely illustrative. There may be any number of driving control devices and image capture devices as required by the implementation. It should also be noted that the system architecture may omit the image capture device.
With continued reference to FIG. 2, a flow 200 of an embodiment of the method for controlling an unmanned vehicle according to the present application is shown. The method for controlling an unmanned vehicle comprises the following steps:
Step 201: acquire a to-be-recognized environment image of an unmanned vehicle.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may acquire the to-be-recognized environment image of the unmanned vehicle.
In this embodiment, the electronic device may acquire, in real time via a wired or wireless connection, environment images of the unmanned vehicle captured by an image capture device. As an example, the image capture device may be a camera, a video camera, or the like. It should be noted that the image capture device may be arranged on the unmanned vehicle or elsewhere; for example, a roadside image capture device may be installed beside the road, and the electronic device may acquire the to-be-recognized environment image from that roadside device.
In this embodiment, the to-be-recognized environment image may be an image of the environment surrounding the unmanned vehicle. The image capture device may send the to-be-recognized environment image to the electronic device in the form of single-frame images.
Step 202: import the to-be-recognized environment image into a scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may import the to-be-recognized environment image into the scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image.
In this embodiment, the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types.
In this embodiment, the scenes may be common or uncommon scenes. As an example, a common scene may be encountering a traffic light, arriving at an intersection, and the like. As an example, an uncommon scene may be a pedestrian crossing the road, a rear-end collision with the vehicle ahead, and the like. The scene type may be identification information in various forms, for example, a scene name, a scene number, and the like.
In some optional implementations of this embodiment, the scene recognition model may be stored locally on the electronic device. It should be noted that the scene recognition model may also be established by another electronic device.
In some optional implementations of this embodiment, the scene recognition model of step 202 may be obtained as follows: a training set is acquired, the training set including training environment images set in association with scene types; using the training set, an initial convolutional neural network or recurrent neural network is trained to obtain the scene recognition model.
Step 203: select and execute a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.
In this embodiment, multiple scene types and multiple control instructions may be pre-stored in the electronic device, together with association information indicating the association between scene types and control instructions.
As an example, a scene type may be associated with a single control instruction; for example, for the scene type of a pedestrian crossing the road, the associated control instruction may command the unmanned vehicle to brake urgently.
As an example, a scene type may be associated with multiple control instructions, in which case the control instruction may be selected according to preset control strategy information corresponding to that scene. For example, for the scene of encountering a traffic light, the control strategy information may indicate "stop on red, go on green": if a red light is currently encountered, an instruction indicating braking is selected; if a green light is currently encountered, an instruction indicating driving is selected.
With continued reference to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the method for controlling an unmanned vehicle according to this embodiment. In the application scenario of FIG. 3, the unmanned vehicle 301 is driving on the road when the pedestrian 302 suddenly crosses the road. The camera of the unmanned vehicle can capture to-be-recognized environment images, and an image captured by the camera may include the hurrying pedestrian. The camera can transmit the captured to-be-recognized environment image 303 to the driving control device 304 of the unmanned vehicle. Here, for ease of illustration, the driving control device is shown twice in FIG. 3. The driving control device 304 can acquire the to-be-recognized environment image 303 of the unmanned vehicle. The driving control device can import the to-be-recognized environment image into the scene recognition model to obtain the scene type 305 corresponding to the to-be-recognized environment image. The driving control device can select and execute the control instruction 306 according to the preset association between scene types and control instructions, so as to control the unmanned vehicle, for example, commanding the unmanned vehicle to brake urgently.
With the method provided by the above embodiment of the present application, the current scene type of the unmanned vehicle can be obtained quickly by processing the to-be-recognized environment image with the pre-established scene recognition model, and a control instruction can then be quickly selected and executed according to the correspondence between scene types and control instructions, which improves the control efficiency of unmanned vehicles.
With further reference to FIG. 4, a flow 400 of another embodiment of the method for controlling an unmanned vehicle is shown. The flow 400 of the method for controlling an unmanned vehicle comprises the following steps:
Step 401: acquire a to-be-recognized environment image of an unmanned vehicle.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may acquire the to-be-recognized environment image of the unmanned vehicle.
Step 402: import the acquired to-be-recognized environment image into a pre-trained first feature extraction model to obtain a to-be-recognized feature vector corresponding to the to-be-recognized environment image.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may import the acquired to-be-recognized environment image into the pre-trained first feature extraction model to obtain the to-be-recognized feature vector corresponding to the to-be-recognized environment image.
In this embodiment, the first feature extraction model is used to characterize the correspondence between to-be-recognized environment images and feature vectors.
It should be noted that the first feature extraction model may be established by the above electronic device or by another electronic device. If established by another electronic device, it may be sent to the above electronic device after being established.
Step 403: acquire at least two reference feature vectors.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may acquire at least two reference feature vectors.
In this embodiment, the reference feature vectors are set in association with scene types.
In this embodiment, the reference feature vectors may be pre-stored locally on the electronic device. It should be noted that the reference feature vectors may also be determined by another electronic device and sent to the electronic device.
In some optional implementations of this embodiment, the reference feature vectors among the at least two reference feature vectors may be acquired from another electronic device, or may be entered manually by a technician.
In some optional implementations of this embodiment, the reference feature vectors among the at least two reference feature vectors may be obtained through the following steps: at least two reference environment images are acquired, the reference environment images being set in association with scene types; for each of the at least two reference environment images, the reference environment image is imported into a pre-trained second feature extraction model to obtain a reference feature vector corresponding to that reference environment image, wherein the second feature extraction model is used to characterize the correspondence between reference environment images and reference feature vectors.
It should be noted that the second feature extraction model may be the same as or different from the first feature extraction model.
Step 404: determine the similarity between the feature vector and each reference feature vector.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may determine the similarity between the feature vector and each reference feature vector.
It should be noted that how to compute the similarity between vectors is well known to those skilled in the art and is not described again here.
Step 405: determine, according to the determined similarities, the scene type corresponding to the environment image.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may determine, according to the determined similarities, the scene type corresponding to the environment image.
In some optional implementations of this embodiment, the maximum similarity may be selected, and the scene type associated with the reference feature vector corresponding to the maximum similarity is determined as the scene type associated with the environment image.
In some optional implementations of this embodiment, the number of distinct scene types among the reference feature vectors may be smaller than the number of reference feature vectors; that is, there may be three reference feature vectors of which two share the same scene type. For reference feature vectors of the same scene type, a weighted average of the similarities between those reference feature vectors and the to-be-recognized feature vector may also be computed, and the weighted-average result taken as the probability that the to-be-recognized image belongs to that scene type; the scene type with the highest probability is then selected as the scene type to which the to-be-recognized image belongs.
Step 406: select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.
In this embodiment, the electronic device on which the method for controlling an unmanned vehicle runs (for example, the driving control device shown in FIG. 1) may select and execute a control instruction according to the preset association between scene types and control instructions, so as to control the unmanned vehicle.
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for controlling an unmanned vehicle in this embodiment highlights the step of determining the scene type using the similarity between the feature vector of the environment image and the reference feature vectors. The solution described in this embodiment can thus introduce reference samples to determine the scene type, which improves the accuracy of controlling the unmanned vehicle and, in turn, the efficiency of controlling the driverless car.
In some optional implementations of this embodiment, the second feature extraction model may be established as follows: an initial long short-term memory network model and a training set are acquired, the training set including training environment images set in association with scene types; using the training set, the initial long short-term memory network model is trained to obtain the second feature extraction model.
It should be noted that the second feature extraction model may be established by the above electronic device or by another electronic device. If established by another electronic device, it may be sent to the above electronic device after being established.
A long short-term memory (LSTM) network is a type of recurrent neural network. LSTMs are suited to processing and predicting important events separated by very long intervals and delays in a time series. Applied to the practical settings of the present application, for example the scene of a pedestrian crossing the road, a general model may need to process a sequence of images to determine the scene; however, if the training set consists of assorted pictures, a general model may recognize the scene poorly.
Here, the advantages of the long short-term memory network show in the practical context of controlling an unmanned vehicle: the network is established on a test set, and in the test set a single scene type may correspond to multiple images. Since the second feature extraction model is established on a long short-term memory network, when extracting feature vectors it can, from images of pedestrians at different stages of crossing the road, preferentially extract the principal features of a pedestrian crossing the road. When extracting reference feature vectors from the at least two reference environment images, the second feature extraction model can focus on extracting these preferred principal features.
In some optional implementations of this embodiment, the first feature extraction model may be established as follows: a test set is acquired, the test set including test environment images set in association with scene types; each test environment image is imported into the second feature extraction model to obtain a second image feature vector of the test environment image; and, using the test set and the obtained second image feature vectors, an initial long short-term memory network model is trained to obtain the first feature extraction model.
It should be noted that, by using the test set and the obtained second image feature vectors, the trained second feature extraction model can be introduced into the first feature extraction model. The second feature extraction model can focus on extracting the principal features, so the feature extraction bias of the first feature extraction model is influenced by the second feature extraction model; consequently, in the process of extracting features from the to-be-recognized image, more accurate feature vectors that can characterize the scene type can be obtained.
It should be noted that using the test set and the obtained second image feature vectors can be implemented in multiple ways, for example by combining the second image feature vectors with the state variables in the model training process. The state variables of a long short-term memory network model are well known to those skilled in the art, and how to combine them can be chosen flexibly; this is not described again here.
In some optional implementations of this embodiment, training the initial long short-term memory network model using the test set and the obtained second image feature vectors to obtain the first feature extraction model may include: importing a test environment image into the first feature extraction model to obtain a first image feature vector of the test environment image; determining a model error of the first feature extraction model according to the obtained first image feature vector and the scene type associated with the test environment image; and updating the second feature extraction model and the first feature extraction model according to the model error.
It should be noted that how to determine the model error itself is not described again here.
It should be noted that, through the above update step, because the training process of the first feature extraction model introduces the second feature extraction model, the first feature extraction model and the second feature extraction model can be updated simultaneously, and after the second feature extraction model is updated, more accurate reference feature vectors can be obtained. Furthermore, for scene types with few training samples, a model trained in this way can still extract features accurately. In the unmanned vehicle field, samples of some scenes may be very scarce in practice, for example, pedestrians crossing the road. Introducing this model training approach into the unmanned vehicle field can therefore solve the problem of recognizing minority scenes for unmanned vehicles.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a device for controlling an unmanned vehicle. This device embodiment corresponds to the method embodiment shown in FIG. 2, and the device may be applied to various electronic devices.
As shown in FIG. 5, the device 500 for controlling an unmanned vehicle of this embodiment comprises: an acquiring unit 501, a determining unit 502, and an executing unit 503. The acquiring unit is configured to acquire a to-be-recognized environment image of an unmanned vehicle; the determining unit is configured to import the to-be-recognized environment image into a scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and the executing unit is configured to select and execute a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
In this embodiment, for the specific processing of the acquiring unit 501, the determining unit 502, and the executing unit 503 and the technical effects thereof, reference may be made to the descriptions of step 201, step 202, and step 203 in the embodiment corresponding to FIG. 2, which are not repeated here.
In some optional implementations of this embodiment, the determining unit is further configured to: import the acquired to-be-recognized environment image into a pre-trained first feature extraction model to obtain a to-be-recognized feature vector corresponding to the to-be-recognized environment image, wherein the first feature extraction model is used to characterize the correspondence between to-be-recognized environment images and feature vectors; acquire at least two reference feature vectors, wherein the reference feature vectors are set in association with scene types; determine the similarity between the feature vector and each reference feature vector; and determine, according to the determined similarities, the scene type corresponding to the to-be-recognized environment image.
In some optional implementations of this embodiment, the reference feature vectors among the at least two reference feature vectors are obtained through the following steps: at least two reference environment images are acquired, the reference environment images being set in association with scene types; for each of the at least two reference environment images, the reference environment image is imported into a pre-trained second feature extraction model to obtain a reference feature vector corresponding to that reference environment image, wherein the second feature extraction model is used to characterize the correspondence between reference environment images and reference feature vectors.
In some optional implementations of this embodiment, the second feature extraction model is trained through the following steps: an initial long short-term memory network model and a training set are acquired, the training set including training environment images set in association with scene types; using the training set, the initial long short-term memory network model is trained to obtain the second feature extraction model.
In some optional implementations of this embodiment, the first feature extraction model is trained through the following steps: a test set is acquired, the test set including test environment images set in association with scene types; each test environment image is imported into the second feature extraction model to obtain a second image feature vector of the test environment image; and, using the test set and the obtained second image feature vectors, an initial long short-term memory network model is trained to obtain the first feature extraction model.
In some optional implementations of this embodiment, training the initial long short-term memory network model using the test set and the second image feature vectors to obtain the first feature extraction model includes: importing a test environment image into the first feature extraction model to obtain a first image feature vector of the test environment image; determining a model error of the first feature extraction model according to the first image feature vector and the scene type associated with the test environment image; and updating the second feature extraction model and the first feature extraction model according to the model error.
It should be noted that, for the implementation details and technical effects of the units in the device for controlling an unmanned vehicle provided in this embodiment, reference may be made to the descriptions of other embodiments in the present application, which are not repeated here.
Referring now to FIG. 6, a schematic structural diagram of a computer system 600 suitable for implementing an electronic device of the embodiments of the present application is shown. The electronic device shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 606 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: a storage portion 606 including a hard disk and the like; and a communication portion 607 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 607 performs communication processing via a network such as the Internet. A drive 608 is also connected to the I/O interface 605 as needed. A removable medium 609, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 608 as needed, so that a computer program read from it can be installed into the storage portion 606 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 607, and/or installed from the removable medium 609. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the methods of the present application are performed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in connection with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave and carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be arranged in a processor; for example, a processor may be described as comprising an acquiring unit, a determining unit, and an executing unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring a to-be-recognized environment image of an unmanned vehicle".
As another aspect, the present application further provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: acquire a to-be-recognized environment image of an unmanned vehicle; import the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types; and select and execute a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (10)

  1. A method for controlling an unmanned vehicle, characterized in that the method comprises:
    acquiring a to-be-recognized environment image of an unmanned vehicle;
    importing the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types;
    selecting and executing a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
  2. The method according to claim 1, characterized in that importing the to-be-recognized environment image into the scene recognition model to obtain the scene type corresponding to the to-be-recognized environment image comprises:
    importing the acquired to-be-recognized environment image into a pre-trained first feature extraction model to obtain a to-be-recognized feature vector corresponding to the to-be-recognized environment image, wherein the first feature extraction model is used to characterize the correspondence between to-be-recognized environment images and feature vectors;
    acquiring at least two reference feature vectors, wherein the reference feature vectors are set in association with scene types;
    determining the similarity between the feature vector and each reference feature vector;
    determining, according to the determined similarities, the scene type corresponding to the to-be-recognized environment image.
  3. The method according to claim 2, characterized in that the reference feature vectors among the at least two reference feature vectors are obtained through the following steps:
    acquiring at least two reference environment images, wherein the reference environment images are set in association with scene types;
    for each of the at least two reference environment images, importing the reference environment image into a pre-trained second feature extraction model to obtain a reference feature vector corresponding to the reference environment image, wherein the second feature extraction model is used to characterize the correspondence between reference environment images and reference feature vectors.
  4. The method according to any one of claims 1-3, characterized in that the second feature extraction model is trained through the following steps:
    acquiring an initial long short-term memory network model and a training set, wherein the training set includes training environment images set in association with scene types;
    training the initial long short-term memory network model using the training set to obtain the second feature extraction model.
  5. The method according to claim 4, characterized in that the first feature extraction model is trained through the following steps:
    acquiring a test set, wherein the test set includes test environment images set in association with scene types;
    importing each test environment image into the second feature extraction model to obtain a second image feature vector of the test environment image;
    training an initial long short-term memory network model using the test set and the obtained second image feature vectors to obtain the first feature extraction model.
  6. The method according to claim 5, characterized in that training the initial long short-term memory network model using the test set and the obtained second image feature vectors to obtain the first feature extraction model comprises:
    importing a test environment image into the first feature extraction model to obtain a first image feature vector of the test environment image;
    determining a model error of the first feature extraction model according to the obtained first image feature vector and the scene type associated with the test environment image;
    updating the second feature extraction model and the first feature extraction model according to the model error.
  7. A device for controlling an unmanned vehicle, characterized in that the device comprises:
    an acquiring unit configured to acquire a to-be-recognized environment image of an unmanned vehicle;
    a determining unit configured to import the to-be-recognized environment image into a scene recognition model to obtain a scene type corresponding to the to-be-recognized environment image, wherein the scene recognition model is used to characterize the correspondence between to-be-recognized environment images and scene types;
    an executing unit configured to select and execute a control instruction according to a preset association between scene types and control instructions, so as to control the unmanned vehicle.
  8. The device according to claim 7, characterized in that the determining unit is further configured to:
    import the acquired to-be-recognized environment image into a pre-trained first feature extraction model to obtain a to-be-recognized feature vector corresponding to the to-be-recognized environment image, wherein the first feature extraction model is used to characterize the correspondence between to-be-recognized environment images and feature vectors;
    acquire at least two reference feature vectors, wherein the reference feature vectors are set in association with scene types;
    determine the similarity between the feature vector and each reference feature vector;
    determine, according to the determined similarities, the scene type corresponding to the to-be-recognized environment image.
  9. An unmanned vehicle, characterized in that the device comprises:
    one or more processors;
    an image capture device configured to capture a to-be-recognized image;
    a storage device configured to store one or more programs,
    which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-6.
PCT/CN2018/099170 2017-09-05 2018-08-07 Method and device for controlling an unmanned vehicle WO2019047656A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710792595.2A CN107609502A (zh) 2017-09-05 2017-09-05 Method and device for controlling an unmanned vehicle
CN201710792595.2 2017-09-05

Publications (1)

Publication Number Publication Date
WO2019047656A1 (zh) 2019-03-14

Family

ID=61055783

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099170 WO2019047656A1 (zh) 2017-09-05 2018-08-07 Method and device for controlling an unmanned vehicle

Country Status (2)

Country Link
CN (1) CN107609502A (zh)
WO (1) WO2019047656A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339834A (zh) Vehicle driving direction recognition method, computer device, and storage medium
CN111612820A (zh) Multi-target tracking method, and training method and device for feature extraction models
CN112115285A (zh) Picture cleaning method and device
CN112634343A (zh) Training method for an image depth estimation model and processing method for image depth information
CN112926512A (zh) Environment type recognition method and device, and computer equipment
CN113642644A (zh) Method and device for determining vehicle environment level, electronic device, and storage medium
US20210397198A1 (en) * 2020-06-18 2021-12-23 Ford Global Technologies, Llc Enhanced vehicle operation

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609502A (zh) Method and device for controlling an unmanned vehicle
CN110096051B (zh) Method and device for generating vehicle control instructions
CN108388886A (zh) Image scene recognition method, device, terminal, and computer-readable storage medium
CN110738221B (zh) Computing system and method
WO2020052344A1 (zh) Intelligent driving method and intelligent driving system
CN110893858B (zh) Intelligent driving method and intelligent driving system
CN109693672B (zh) Method and device for controlling a driverless car
CN109858369A (zh) Automatic driving method and device
CN109726804B (zh) Human-like decision-making method for intelligent vehicle driving behavior based on a driving prediction field and a BP neural network
CN109976153B (zh) Method, device, and electronic device for controlling unmanned equipment and model training
CN111738037B (zh) Automatic driving method and system, and vehicle
CN110126846B (zh) Driving scene representation method, device, system, and storage medium
CN110244728A (zh) Method, device, equipment, and storage medium for determining an unmanned driving control strategy
CN112204566A (zh) Machine-vision-based image processing method and device
CN110579216B (zh) Test scenario library construction method and device, electronic device, and medium
CN112948956A (zh) Vehicle parameter generation method, device, and equipment
CN111666307A (zh) Unmanned driving safety judgment system performing intuitive inference from scene observation
CN112466158B (zh) Vehicle collision risk assessment and prediction method for at-grade intersections
CN113673344B (zh) Method and device for recognizing material mounting positions for an intelligent tower crane
CN113673403B (zh) Driving environment detection method, system, device, computer equipment, computer-readable storage medium, and automobile
CN114550143A (zh) Scene recognition method and device for an unmanned vehicle in motion
CN115203457B (zh) Image retrieval method, device, vehicle, storage medium, and chip
CN117245643A (zh) Control method and device for terminal equipment, terminal equipment, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203346A (zh) Road environment image classification method oriented to intelligent vehicle driving mode switching
CN106845491A (zh) UAV-based automatic deviation correction method in parking lot scenarios
US20170221241A1 (en) * 2016-01-28 2017-08-03 8681384 Canada Inc. System, method and apparatus for generating building maps
CN107609502A (zh) Method and device for controlling an unmanned vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102390370B (zh) Stereo-vision-based emergency handling device and method for vehicle driving
WO2016156236A1 (en) * 2015-03-31 2016-10-06 Sony Corporation Method and electronic device
CN105575119B (zh) Deep learning and recognition method and device for road conditions and weather
CN106022317A (zh) Face recognition method and device
CN106289797B (zh) Method and device for testing an unmanned vehicle
CN106154834B (zh) Method and device for controlling an unmanned vehicle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170221241A1 (en) * 2016-01-28 2017-08-03 8681384 Canada Inc. System, method and apparatus for generating building maps
CN106203346A (zh) Road environment image classification method oriented to intelligent vehicle driving mode switching
CN106845491A (zh) UAV-based automatic deviation correction method in parking lot scenarios
CN107609502A (zh) Method and device for controlling an unmanned vehicle

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115285A (zh) Picture cleaning method and device
CN111339834A (zh) Vehicle driving direction recognition method, computer device, and storage medium
CN111339834B (zh) Vehicle driving direction recognition method, computer device, and storage medium
CN111612820A (zh) Multi-target tracking method, and training method and device for feature extraction models
CN111612820B (zh) Multi-target tracking method, and training method and device for feature extraction models
US20210397198A1 (en) * 2020-06-18 2021-12-23 Ford Global Technologies, Llc Enhanced vehicle operation
CN112634343A (zh) Training method for an image depth estimation model and processing method for image depth information
CN112926512A (zh) Environment type recognition method and device, and computer equipment
CN112926512B (zh) Environment type recognition method and device, and computer equipment
CN113642644A (zh) Method and device for determining vehicle environment level, electronic device, and storage medium
CN113642644B (zh) Method and device for determining vehicle environment level, electronic device, and storage medium

Also Published As

Publication number Publication date
CN107609502A (zh) 2018-01-19

Similar Documents

Publication Publication Date Title
WO2019047656A1 (zh) Method and device for controlling an unmanned vehicle
CN111626208B (zh) Method and device for detecting small targets
WO2021085848A1 (ko) Reinforcement-learning-based signal control device and signal control method
CN109508580B (zh) Traffic signal light recognition method and device
CN106154834B (zh) Method and device for controlling an unmanned vehicle
WO2019047644A1 (zh) Method and device for controlling an unmanned vehicle
US20180157934A1 (en) Inspection neural network for assessing neural network reliability
WO2019047650A1 (zh) Data collection method and device for an unmanned vehicle
JP6817384B2 (ja) Visual perception method and apparatus for an autonomous vehicle, control device, and computer-readable storage medium
KR102015947B1 (ko) Apparatus and method for extracting learning-target images for autonomous driving
CN110119725B (zh) Method and device for detecting signal lights
CN110135302B (zh) Method, device, equipment, and storage medium for training a lane line recognition model
CN111967368B (zh) Traffic light recognition method and device
CN110348463B (zh) Method and device for recognizing a vehicle
US11017270B2 (en) Method and apparatus for image processing for vehicle
JP6700373B2 (ja) Apparatus and method for packaging learning-target images for video artificial intelligence
CN109407679B (zh) Method and device for controlling a driverless car
CN109740590A (zh) Method and system for accurate ROI extraction assisted by target tracking
CN114926766A (zh) Recognition method and device, equipment, and computer-readable storage medium
CN113392793A (zh) Method, device, equipment, storage medium, and unmanned vehicle for recognizing lane lines
CN115830399A (zh) Classification model training method, device, equipment, storage medium, and program product
CN110175519B (zh) Method, device, and storage medium for recognizing open/close indicator instruments of a substation
CN109747655B (zh) Driving instruction generation method and device for an autonomous vehicle
Nagaraj et al. Edge-based street object detection
CN112699754B (zh) Signal light recognition method, device, equipment, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18853176

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/08/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18853176

Country of ref document: EP

Kind code of ref document: A1