CN107766872B - Method and device for identifying an illumination driving scene


Info

Publication number
CN107766872B
Authority
CN
China
Prior art keywords
driving
vehicle
illumination
scene
analysis model
Prior art date
Legal status
Active
Application number
CN201710792216.XA
Other languages
Chinese (zh)
Other versions
CN107766872A
Inventor
姜雨
郁浩
闫泳杉
郑超
唐坤
张云飞
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710792216.XA
Publication of CN107766872A
Priority to PCT/CN2018/093358 (WO2019047597A1)
Application granted
Publication of CN107766872B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle


Abstract

The invention aims to provide a method and a device for identifying an illumination driving scene. Compared with the prior art, data are collected by the on-board sensors of an autonomous vehicle under different illumination conditions, and a scene analysis model is trained with the collected data as input and the corresponding illumination driving scene as output. Then, during autonomous driving, real-time data collected by the on-board sensors are acquired, the scene analysis model is used to judge whether the illumination condition has changed, and if it has, the vehicle switches to the illumination driving scene corresponding to the changed illumination condition. Classifying driving scenes and training a separate model for each, so that the vehicle can switch between them in real time, is a brand-new attempt, and applying scene analysis technology to autonomous driving is a considerable conceptual leap.

Description

Method and device for identifying illumination driving scene
Technical Field
The invention relates to the technical field of autonomous vehicle driving, and in particular to techniques for identifying an illumination driving scene.
Background
With the development of autonomous driving technology, the requirements placed on autonomous vehicles keep rising. However, existing autonomous driving schemes cannot adapt to complex illumination environments, and autonomous vehicles cannot work normally under poor illumination conditions. For example, in illumination environments such as backlight or night scenes, normal operation of an autonomous vehicle is difficult to guarantee, which creates a safety hazard.
Therefore, how to enable an autonomous vehicle to recognize an illuminated driving scene becomes one of the problems that those skilled in the art need to solve.
Disclosure of Invention
The invention aims to provide a method and a device for identifying an illumination driving scene.
According to an aspect of the present invention, a method of identifying an illuminated driving scene is provided, wherein the method comprises:
a, acquiring data through a vehicle-mounted sensor of an automatic driving vehicle under different illumination conditions;
b, using the collected data as input and using an illumination driving scene corresponding to the automatic driving vehicle as output, and training a scene analysis model;
wherein, the method also comprises:
x, in the automatic driving process of the vehicle, acquiring real-time data acquired by a vehicle-mounted sensor, and judging whether the illumination condition changes or not by using the scene analysis model;
and y, if the illumination condition changes, switching to an illumination driving scene corresponding to the changed illumination condition.
Preferably, the step y comprises:
if the scene analysis model judges that the illumination condition has changed, further judging, in combination with a sequential probability ratio test, whether the illumination condition has actually changed;
and if the illumination condition has actually changed, switching to the illumination driving scene corresponding to the changed illumination condition.
Preferably, the step y comprises:
if the scene analysis model judges that the illumination condition has changed, further judging, in combination with a rationality determination, whether the illumination condition has actually changed;
if the illumination condition has actually changed, switching to the illumination driving scene corresponding to the changed illumination condition;
wherein the rationality determination is made based on the time and/or the geographic location of the autonomous vehicle.
Preferably, the step b further comprises:
acquiring positive and negative samples of the data marked by the automatic driving vehicle in each corresponding illumination driving scene;
and training the scene analysis model according to the positive and negative samples.
Preferably, the method further comprises:
training and obtaining, from data collected by the on-board sensors of the autonomous vehicle under different illumination conditions and the driving operations of a driver, the different illumination driving scenes corresponding to the autonomous vehicle under those illumination conditions.
Preferably, the illuminated driving scene comprises any one of:
driving in natural front light;
driving in natural backlight;
driving in the early morning;
driving at dusk;
driving in darkness with lighting;
driving in darkness without lighting.
According to another aspect of the present invention, there is also provided an apparatus for identifying an illuminated driving scene, wherein the apparatus comprises:
the acquisition device is used for acquiring data through a vehicle-mounted sensor of the automatic driving vehicle under different illumination conditions;
the training device is used for training a scene analysis model by taking the acquired data as input and taking an illumination driving scene corresponding to the automatic driving vehicle as output;
wherein the apparatus further comprises:
the judging device is used for acquiring real-time data acquired by the vehicle-mounted sensor in the automatic driving process of the vehicle and judging whether the illumination condition changes or not by using the scene analysis model;
and the switching device is used for switching to the illumination driving scene corresponding to the changed illumination condition if the illumination condition changes.
Preferably, the switching device is configured to:
if the scene analysis model judges that the illumination condition has changed, to further judge, in combination with a sequential probability ratio test, whether the illumination condition has actually changed;
and if the illumination condition has actually changed, to switch to the illumination driving scene corresponding to the changed illumination condition.
Preferably, the switching device is configured to:
if the scene analysis model judges that the illumination condition has changed, to further judge, in combination with a rationality determination, whether the illumination condition has actually changed;
if the illumination condition has actually changed, to switch to the illumination driving scene corresponding to the changed illumination condition;
wherein the rationality determination is made based on the time and/or the geographic location of the autonomous vehicle.
Preferably, the training device is configured to:
acquiring positive and negative samples of the data marked by the automatic driving vehicle in each corresponding illumination driving scene;
and training the scene analysis model according to the positive and negative samples.
Preferably, the apparatus further comprises:
the obtaining device is used for training and obtaining, from data collected by the on-board sensors of the autonomous vehicle under different illumination conditions and the driving operations of a driver, the different illumination driving scenes corresponding to the autonomous vehicle under those illumination conditions.
Preferably, the illuminated driving scene comprises any one of:
driving in natural front light;
driving in natural backlight;
driving in the early morning;
driving at dusk;
driving in darkness with lighting;
driving in darkness without lighting.
According to yet another aspect of the invention, there is also provided a computer readable storage medium storing computer code which, when executed, performs the method of any one of the preceding aspects.
According to yet another aspect of the invention, there is also provided a computer program product which, when executed by a computer device, performs the method of any one of the preceding aspects.
According to still another aspect of the present invention, there is also provided a computer apparatus including:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
Compared with the prior art, the invention collects data through the on-board sensors of an autonomous vehicle under different illumination conditions and trains a scene analysis model with the collected data as input and the corresponding illumination driving scene as output. Then, during autonomous driving, real-time data collected by the on-board sensors are acquired, the scene analysis model is used to judge whether the illumination condition has changed, and if it has, the vehicle switches to the illumination driving scene corresponding to the changed illumination condition. Based on deep learning, the invention decomposes the illumination problem and trains illumination driving scenes for the different illumination conditions in advance; at the same time it trains a scene analysis model that analyzes and judges the vehicle's illumination scene in real time while the vehicle is running and switches the illumination driving scene accordingly. Training several models by scene classification and switching between them in real time is a brand-new attempt, and applying scene analysis technology to autonomous driving is a considerable conceptual leap. Furthermore, when judging whether the illumination condition has really changed and whether the illumination driving scene of the autonomous vehicle really needs to be switched, the sequential probability ratio test from statistics is additionally applied, improving the accuracy of identifying the illumination driving scene. The invention greatly improves the feasibility and robustness of autonomous driving, in particular end-to-end autonomous driving; compared with training one unified model, training the illumination driving scene for each scene separately reduces complexity, improves computational efficiency, saves computing resources, and thus lowers the cost of autonomous driving.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention;
FIG. 2 illustrates a flow diagram of a method for identifying an illuminated driving scene in accordance with an aspect of the present invention;
fig. 3 shows a schematic structural diagram of an apparatus for recognizing an illuminated driving scene according to another aspect of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The term "computer device" or "computer" in this context refers to an intelligent electronic device that can execute predetermined processes such as numerical calculation and/or logic calculation by running predetermined programs or instructions, and may include a processor and a memory, wherein the processor executes a pre-stored instruction stored in the memory to execute the predetermined processes, or the predetermined processes are executed by hardware such as ASIC, FPGA, DSP, or a combination thereof. Computer devices include, but are not limited to, servers, personal computers, laptops, tablets, smart phones, and the like.
The computer equipment comprises user equipment and network equipment. Wherein the user equipment includes but is not limited to computers, smart phones, PDAs, etc.; the network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of computers or network servers, wherein Cloud Computing is one of distributed Computing, a super virtual computer consisting of a collection of loosely coupled computers. Wherein the computer device can be operated alone to implement the invention, or can be accessed to a network and implement the invention through interoperation with other computer devices in the network. The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
It should be noted that the user equipment, the network device, the network, etc. are only examples, and other existing or future computer devices or networks may also be included in the scope of the present invention, and are included by reference.
The methods discussed below, some of which are illustrated by flow diagrams, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present invention. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent to", etc.) should be interpreted in a similar manner.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present invention is described in further detail below with reference to the attached drawing figures.
FIG. 1 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present invention. The computer system/server 12 shown in FIG. 1 is only one example and should not be taken to limit the scope of use or the functionality of embodiments of the present invention.
As shown in FIG. 1, computer system/server 12 is in the form of a general purpose computing device. The components of computer system/server 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, micro-channel architecture (MAC) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 1, and commonly referred to as a "hard drive"). Although not shown in FIG. 1, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The computer system/server 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any devices (e.g., a network card, a modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via an input/output (I/O) interface 22. Moreover, the computer system/server 12 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN) and/or a public network, such as the Internet) via a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer system/server 12 via the bus 18. It should be appreciated that, although not shown in FIG. 1, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data storage systems, etc.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the memory 28.
For example, the memory 28 stores a computer program for executing the functions and processes of the present invention, and by executing this computer program the processing unit 16 implements the identification of illumination driving scenes of the present invention.
The specific functions/steps of the present invention for identifying illuminated driving scenes will be described in detail below.
FIG. 2 illustrates a flow diagram of a method for identifying illuminated driving scenes in accordance with an aspect of the present invention.
In step S201, the apparatus 1 collects data through the in-vehicle sensors of the autonomous vehicle under different lighting conditions.
Specifically, under different lighting conditions, the autonomous vehicle may have different lighting driving scenarios, and the onboard sensor of the autonomous vehicle may acquire corresponding different data, such as video data, image data, radar data, and the like under different lighting conditions. Here, the lighting conditions include, but are not limited to, natural light front light, natural light back light, morning, evening, dark with lighting, dark without lighting, and the like, and the data collected by the vehicle-mounted sensor naturally differs under different lighting conditions, for example, video or image data captured by the vehicle-mounted camera in the case of dark with lighting and in the case of dark without lighting is definitely significantly different. In step S201, the apparatus 1 collects the above-mentioned different data through the in-vehicle sensor of the autonomous vehicle under different lighting conditions. Here, the in-vehicle sensor of the autonomous vehicle includes, but is not limited to, an in-vehicle camera, an in-vehicle radar, and the like.
For example, when the autonomous vehicle is driving under a natural front-light illumination condition, the apparatus 1 collects video or image data for each angle of view through the on-board cameras of the autonomous vehicle. The on-board cameras are used to simulate the driver's line of sight and viewing angle, and may be located, for example, at the cab, the left and right side mirrors and the center mirror of the autonomous vehicle; they may be, for example, binocular cameras. The video or image data captured by the on-board cameras can be regarded as the surrounding scenes the driver would see under natural front-light conditions if the vehicle were driven by a driver. Subsequently, if the vehicle turns around and is therefore driving under a natural backlight illumination condition, the apparatus 1 continues to collect video or image data for each angle of view through the on-board cameras and stores them as data under the natural backlight condition for subsequent model training.
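For concreteness, the collected samples might be organized as records pairing raw sensor data with the illumination condition under which they were gathered. This is only an illustrative sketch; the record layout, field names, and condition labels below are assumptions, not part of the patent:

    from dataclasses import dataclass
    from enum import Enum

    class LightingCondition(Enum):
        # Labels assumed for illustration; they mirror the conditions named in the text.
        FRONT_LIGHT = "natural front light"
        BACKLIGHT = "natural backlight"
        EARLY_MORNING = "early morning"
        DUSK = "dusk"
        DARK_LIT = "dark, with lighting"
        DARK_UNLIT = "dark, without lighting"

    @dataclass
    class SensorSample:
        timestamp: float
        camera_frames: dict           # e.g. {"cab": <image>, "left_mirror": <image>, ...}
        radar_points: list            # raw radar returns, if an on-board radar is present
        condition: LightingCondition  # condition under which the data were collected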
The method is suitable for the end-to-end driving mode of autonomous driving. In the end-to-end driving mode, the autonomous vehicle senses the surrounding scene with on-board sensors such as cameras and radar and decides directly how to drive, for example whether to press the accelerator or the brake and how to turn the steering wheel, which gives the vehicle a high degree of driving freedom. By contrast, in the track-following driving mode the autonomous vehicle obtains its position with a high-precision GPS and drives automatically along a preset track; although comparatively safe, the driving track is fixed and inflexible.
It will be understood by those skilled in the art that the above-described lighting conditions are merely exemplary, and that other lighting conditions, now known or later developed, such as may be suitable for use with the present invention, are also included within the scope of the present invention and are hereby incorporated by reference.
It will also be appreciated by those skilled in the art that the above-described in-vehicle sensors are merely exemplary, and that other in-vehicle sensors, now known or later developed, that may be suitable for use with the present invention are also intended to be included within the scope of the present invention and are hereby incorporated by reference.
It will also be appreciated by those skilled in the art that the above described manner of collecting data is by way of example only, and that other manners of collecting data, now known or later developed, which may be suitable for use with the present invention, are also encompassed within the scope of the present invention and are hereby incorporated by reference.
Preferably, the method further comprises step S205 (not shown). In step S205, the apparatus 1 trains and obtains different illumination driving scenes corresponding to the autonomous vehicle under different illumination conditions according to data collected by a vehicle-mounted sensor of the autonomous vehicle under different illumination conditions and driving operations of the driver.
In particular, the autonomous vehicle may be driven manually by a trained driver under different illumination conditions, and the driver's operations may differ, at the same location and facing the same event, depending on the illumination condition; for example, facing the same event at the same location, the driver's operations may be quite different under natural front light and under natural backlight. Therefore, in step S205, the apparatus 1 may obtain the driver's driving operations and record which driving operation was performed for which event under which illumination condition. In step S205 the apparatus 1 may further obtain the data collected by the on-board sensors of the autonomous vehicle at that moment, such as the video or image data collected by the on-board cameras when the driver performs a given driving operation, and train, from these data and the corresponding driving operations of the driver, the different illumination driving scenes corresponding to the autonomous vehicle under the different illumination conditions.
For example, under a dark, unlit illumination condition, the driver who manually drives the autonomous vehicle turns on the high beam after starting the vehicle, and when meeting another vehicle switches off the high beam and turns on the low beam at a distance of about 150 m; under a dark but lit illumination condition, the driver does not need the high beam and only turns on the low beam after starting the vehicle, and performs no high/low beam switching when meeting other vehicles. In step S205, the apparatus 1 acquires the data collected by the on-board sensors and the corresponding driving operations of the driver and uses them for model training: for example, the video or image data collected by the on-board cameras under the dark, unlit condition, together with the driving situation the autonomous vehicle faces (such as starting the vehicle or approaching another vehicle), serve as input, and the driving response the autonomous vehicle should make under that illumination condition and driving situation (such as turning on the high beam, or switching from high beam to low beam) serves as output, so that the different illumination driving scenes corresponding to the autonomous vehicle under different illumination conditions are obtained by training.
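As a sketch of how such training pairs could be assembled (the situation and action vocabularies here are illustrative assumptions; the patent only specifies sensor data plus driver operations as the training signal):

    from collections import defaultdict

    # One training set per illumination condition: each example pairs what the
    # sensors saw and the driving situation with the action the driver took.
    scene_datasets = defaultdict(list)

    def record_example(condition, camera_frame, situation, driver_action):
        scene_datasets[condition].append(
            {"frame": camera_frame, "situation": situation, "action": driver_action}
        )

    # Hypothetical examples from the dark, unlit condition described above:
    record_example("dark_unlit", None, "vehicle_started", "turn_on_high_beam")
    record_example("dark_unlit", None, "oncoming_vehicle_150m", "switch_high_to_low_beam")
    # Each per-condition dataset is then used to train that condition's driving model.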
Preferably, the illuminated driving scene comprises any one of:
driving in natural front light;
driving in natural backlight;
driving in the early morning;
driving at dusk;
driving in darkness with lighting;
driving in darkness without lighting.
In this case, the autonomous vehicle may have different lighting driving scenarios in different lighting conditions, in which the autonomous vehicle has different driving modes, for example, in the same location in different lighting conditions, it may have different processing modes for the same situation. It will be understood by those skilled in the art that the above-described illuminated driving scenarios are merely exemplary, and that other illuminated driving scenarios, now known or later developed, such as may be suitable for use with the present invention, are also intended to be encompassed within the scope of the present invention and are hereby incorporated by reference.
In step S202, the apparatus 1 trains a scene analysis model using the collected data as input and the illuminated driving scene corresponding to the autonomous vehicle as output.
In particular, for different lighting conditions, the apparatus 1 collects a large amount of data such as video, image or radar data through the vehicle-mounted sensor, while the autonomous vehicle may have different lighting driving scenarios under different lighting conditions, and the autonomous vehicle may have different driving modes under these different lighting driving scenarios, for example, different processing modes may be provided for the same situation under different lighting conditions at the same location; in step S202, the device 1 takes as input the data captured by the onboard sensors of the autonomous vehicles under various lighting conditions, which are collected in the aforementioned step S201, and takes as output the lighting driving scenes corresponding to the autonomous vehicles, and trains a scene analysis model.
For example, in step S202, the apparatus 1 takes a large amount of video data captured by the vehicle-mounted camera under natural light conditions as input, and takes the illumination driving scene of the autonomous vehicle as output at this time to train the scene analysis model, so that the scene analysis model can accurately identify the illumination condition under which the autonomous vehicle belongs to natural light; similarly, in step S202, the apparatus 1 may also use video data captured by the vehicle-mounted camera under different illumination conditions such as a lot of natural light backlight, early morning, evening, dark with illumination, dark without illumination, and the like as input, and use the illumination driving scenes of the autonomous vehicle under the different illumination conditions as output, respectively, to train the scene analysis model, so that the scene analysis model can accurately identify which illumination condition the autonomous vehicle is in.
Here, the scene analysis model may be, for example, a simple classification model, which may be obtained by training through an existing training manner for the classification model, for example, where it is known what lighting conditions the autonomous vehicle is under, and it is known what the corresponding lighting driving scene the autonomous vehicle is under, so as to obtain various classification outputs through various classification inputs, thereby training the scene analysis model; after the training of the scene analysis model is completed, different data can be classified, for example, when the device 1 collects video data through a vehicle-mounted camera, the scene analysis model can determine what the corresponding illumination driving scene is according to the video data.
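As a minimal sketch of such a classification model (an assumption for illustration; the patent does not specify the architecture), a small convolutional classifier over camera frames could look like this in PyTorch:

    import torch
    import torch.nn as nn

    SCENES = ["front_light", "backlight", "early_morning",
              "dusk", "dark_lit", "dark_unlit"]

    class SceneAnalysisModel(nn.Module):
        def __init__(self, num_classes=len(SCENES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = SceneAnalysisModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-in batch: in the patent's setting these would be frames captured by
    # the on-board cameras, labeled with the illumination scene they belong to.
    frames = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, len(SCENES), (8,))

    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()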
It will be understood by those skilled in the art that the above-described method of training a scene analysis model is merely exemplary, and other methods of training a scene analysis model that may be present or later become known, such as may be suitable for use with the present invention, are also included within the scope of the present invention and are hereby incorporated by reference.
Preferably, in step S202, the apparatus 1 obtains positive and negative samples of the data labeled by the autonomous vehicle in each corresponding lighting driving scene; and training the scene analysis model according to the positive and negative samples.
Specifically, the data of the autonomous driving vehicle collected by the device 1 under different illumination conditions may be labeled to indicate positive and negative samples in each corresponding illumination driving scene. For example, for dark unlit driving, the collected video data without illumination from the external light source can be labeled as a positive sample, and the collected video data with illumination from the external light source can be labeled as a negative sample. In step S202, the apparatus 1 obtains positive and negative samples of the data labeled by the autonomous vehicle in each corresponding lighting driving scenario, for example, if the positive and negative samples are manually labeled by a user through interaction with a user device, in step S202, the apparatus 1 may obtain the positive and negative samples of the autonomous vehicle labeled by the user in each corresponding lighting driving scenario by calling an Application Program Interface (API) provided by the user device one or more times or by using other agreed communication manners; then, the device 1 trains the scene analysis model according to the marked positive and negative samples. The scene analysis model can be trained by the marked positive and negative examples, for example, by an existing training method.
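A one-vs-rest arrangement of the labeled samples is one simple possibility (an assumption of this sketch; the patent only states that positive and negative samples are labeled per scene):

    def build_binary_dataset(samples, target_scene):
        """Positives are samples labeled with the target illumination driving
        scene; samples labeled with any other scene serve as negatives."""
        return [(s["frame"], 1 if s["scene"] == target_scene else 0)
                for s in samples]

    # e.g. build_binary_dataset(all_samples, "dark_unlit") for dark, unlit driving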
Wherein, the method further comprises step S203 and step S204.
In step S203, the device 1 acquires real-time data collected by the vehicle-mounted sensor during the automatic driving process of the vehicle, and determines whether there is a change in the illumination condition by using the scene analysis model.
Specifically, the foregoing steps S201 and S202 are training of a scene analysis model, which belongs to a previous work, and after the training of the scene analysis model is completed, the automatic driving vehicle may apply the scene analysis model in an actual automatic driving process, so as to determine whether the lighting condition of the automatic driving vehicle is changed. During the actual automatic driving process of the automatic driving vehicle, the vehicle-mounted sensor on the automatic driving vehicle can acquire data in real time, for example, a vehicle-mounted camera positioned at the cab, the left side rearview mirror, the right side rearview mirror, the central rearview mirror and the like of the automatic driving vehicle continuously shoots, captures and acquires corresponding video or image data during the actual automatic driving process of the automatic driving vehicle. In step S203, in the automatic driving process of the vehicle, the device 1 acquires real-time data collected by a vehicle-mounted sensor of the automatic driving vehicle through interaction with the vehicle-mounted sensor, and inputs the real-time data to the scene analysis model in real time, and determines whether the illumination condition of the automatic driving vehicle changes according to the output of the scene analysis model.
For example, originally, the automatic driving vehicle is always automatically driven under the natural-light taillight illumination condition, the vehicle-mounted camera thereon continuously collects video or image data in real time, in step S203, the device 1 also continuously acquires the video or image data collected in real time from the vehicle-mounted camera and inputs the video or image data to the scene analysis model in real time, and if the output of the scene analysis model is the natural-light taillight illumination driving scene, it indicates that the illumination condition of the automatic driving vehicle is not changed; thereafter, the vehicle-mounted camera on the autonomous driving vehicle is still continuously acquiring video or image data in real time, in step S203, the apparatus 1 also continuously acquires the video or image data acquired in real time from the vehicle-mounted camera and inputs the video or image data to the scene analysis model in real time, and at this time, the output of the scene analysis model is an illuminated driving scene driven by natural light and backlighted, which indicates that the illumination condition of the autonomous driving vehicle is changed.
Here, since the on-board sensors continuously collect real-time data and the device 1 continuously feeds these data to the scene analysis model for judgment, the device 1 need not conclude that the illumination condition has changed the moment the illumination driving scene output by the scene analysis model changes once; for example, for a certain amount of real-time data, the illumination condition is judged to have changed only if the number of changes in the illumination driving scene output by the scene analysis model exceeds a predetermined threshold.
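A minimal sketch of the threshold-based variant (the window size and threshold are arbitrary illustrative values, not values given by the patent):

    from collections import Counter, deque

    def make_change_detector(current_scene, window=30, threshold=20):
        """Feed one scene prediction per frame; a lighting change is reported
        only when more than `threshold` of the last `window` predictions
        disagree with the currently active scene."""
        recent = deque(maxlen=window)

        def update(predicted_scene):
            nonlocal current_scene
            recent.append(predicted_scene)
            disagreements = sum(1 for s in recent if s != current_scene)
            if len(recent) == window and disagreements > threshold:
                current_scene = Counter(recent).most_common(1)[0][0]  # majority vote
                recent.clear()
                return current_scene  # switch to this scene
            return None  # keep the current scene

        return update

    detector = make_change_detector("front_light")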
It should be understood by those skilled in the art that the above-mentioned manners for determining whether the illumination condition is changed are only examples, and other manners for determining whether the illumination condition is changed, which may be present or later come, are also included in the scope of the present invention, and are hereby incorporated by reference.
In step S204, if the lighting condition changes, the device 1 switches to the lighting driving scene corresponding to the changed lighting condition.
Specifically, if the device 1 determines in step S203 that the lighting condition of the autonomous vehicle has changed, in step S204, the device 1 switches the lighting driving scene of the autonomous vehicle to the lighting driving scene corresponding to the changed lighting condition, that is, causes the autonomous vehicle to autonomously drive in the lighting driving scene corresponding to the changed lighting condition, in accordance with the change in the lighting condition.
Continuing the previous example, the autonomous vehicle is originally driving under the natural front-light illumination condition while its on-board cameras continuously collect video or image data in real time. After the vehicle turns around, the on-board cameras still continuously collect video or image data in real time; in step S203 the device 1 keeps acquiring these real-time data from the on-board cameras and feeding them to the scene analysis model, and the output of the scene analysis model is now the illumination driving scene of natural backlight driving, indicating that the illumination condition of the autonomous vehicle has changed. At this point, in step S204, the device 1 switches the illumination driving scene of the autonomous vehicle to the one corresponding to the changed illumination condition, that is, to the illumination driving scene of natural backlight driving.
The device 1 collects data through the on-board sensors of the autonomous vehicle under different illumination conditions, and trains a scene analysis model with the collected data as input and the corresponding illumination driving scene as output; then, during autonomous driving, it acquires the real-time data collected by the on-board sensors, uses the scene analysis model to judge whether the illumination condition has changed, and, if it has, switches to the illumination driving scene corresponding to the changed condition. Based on deep learning, the device 1 decomposes the illumination problem and trains illumination driving scenes for the different illumination conditions in advance, while also training a scene analysis model that analyzes and judges the vehicle's illumination scene in real time during driving and switches the illumination driving scene accordingly. Training several models by scene classification and switching between them in real time is a brand-new attempt, and applying scene analysis technology to autonomous driving is a considerable conceptual leap. The device 1 greatly improves the feasibility and robustness of autonomous driving, in particular end-to-end autonomous driving; compared with training one unified model, training the illumination driving scene for each scene separately reduces complexity, improves computational efficiency, saves computing resources, and thus lowers the cost of autonomous driving.
Preferably, in step S204, if the scene analysis model determines that the illumination condition changes, the apparatus 1 further combines with sequential probability ratio test to determine whether the illumination condition actually changes; if the illumination condition actually changes, the device 1 switches to the illumination driving scene corresponding to the changed illumination condition.
Specifically, even if the device 1 determines in step S203 that the illumination condition of the autonomous vehicle has changed, it does not immediately switch the illumination driving scene. Instead, the device 1 additionally applies a sequential probability ratio test, for example by continuously and sequentially feeding the scene analysis model's judgments of the current illumination condition to a sequential probability ratio test module until it is determined that the illumination condition has really changed; only then, in step S204, does the device 1 switch the autonomous vehicle to the illumination driving scene corresponding to the changed illumination condition.
The sequential probability ratio test is a branch of mathematical statistics that studies so-called sequential sampling schemes and how the samples obtained under such schemes are used for statistical inference. In a sequential sampling scheme, the total number of samples (observations or experiments) is not fixed in advance: a small number of samples is drawn first, and based on the result one decides either to stop sampling or to continue and how many more samples to draw, until sampling stops. By contrast, a scheme in which the number of samples is determined in advance is called a fixed sampling scheme.
The probability ratio test is as follows. Let x be a random variable with probability density function f, and consider the null hypothesis H_0: f = f_0 against the alternative hypothesis H_1: f = f_1. Define the likelihood ratio l(x) = f_1(x) / f_0(x) and select a suitable constant r (r > 0): the null hypothesis is rejected when l(x) ≥ r and accepted when l(x) < r. Generalizing this test yields the sequential probability ratio test: a sequence of random variables x_1, x_2, … is observed, and the decision is based on their joint density functions P{x_1 ∈ dξ_1, …, x_n ∈ dξ_n} = f_n(ξ_1, …, ξ_n) dξ_1 … dξ_n (n = 1, 2, …). For example, suppose H_0: f_n = f_{0n} and H_1: f_n = f_{1n}, and let l_n = l_n(x_1, …, x_n) = f_{1n}(x_1, …, x_n) / f_{0n}(x_1, …, x_n). Constants 0 < A < B < ∞ are chosen (generally A < 1 < B), and samples x_1, x_2, … are drawn one by one, the number of samples drawn (i.e., the stopping time) being the random number

N = min{ n : l_n ≥ B or l_n ≤ A }.

If N < ∞, the null hypothesis is rejected when l_N ≥ B and accepted when l_N ≤ A.
Here, the change in the illumination condition of the autonomous vehicle determined by the scene analysis model may serve as the null hypothesis. For example, if the autonomous vehicle is originally driving under the natural front-light condition and, after the vehicle turns around, the scene analysis model first judges that the illumination condition has changed to natural backlight, the device 1 may take "the illumination condition has changed to natural backlight" as the null hypothesis. Thereafter, the on-board cameras continue to collect video or image data in real time; in step S203 the device 1 keeps acquiring these data and feeding them to the scene analysis model, which keeps judging whether the illumination condition has changed, and the judgment results are continuously and sequentially fed to the sequential probability ratio test module. These judgment results can be understood as the samples drawn in the sequential probability ratio test; as samples keep arriving, at some sample the module is able to decide that the illumination condition has really changed. For example, the scene analysis model repeatedly judges that the illumination condition of the autonomous vehicle has changed to natural backlight and sequentially feeds these results to the sequential probability ratio test module, and at some input the module determines that the illumination condition has really changed to natural backlight. Thereafter, in step S204, the device 1 switches the autonomous vehicle to the illumination driving scene of natural backlight driving.
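A minimal sketch of how such a sequential probability ratio test module could be driven by the scene analysis model's per-frame outputs. The error rates alpha and beta, Wald's classic thresholds A = beta/(1-alpha) and B = (1-beta)/alpha, the use of the model's class probabilities as the densities under the two hypotheses, and the framing of "changed" as H1 are all assumptions of this sketch, not the patent's specification:

    import math

    class SPRT:
        """Sequential probability ratio test of H1 'the scene has changed to
        the candidate scene' against H0 'the scene is unchanged'."""
        def __init__(self, alpha=0.01, beta=0.01):
            self.log_A = math.log(beta / (1 - alpha))   # at or below: accept H0
            self.log_B = math.log((1 - beta) / alpha)   # at or above: accept H1
            self.llr = 0.0                              # cumulative log-likelihood ratio

        def update(self, p_new, p_old):
            # p_new / p_old: probabilities the scene model assigns on the
            # latest frame to the candidate (changed) scene and to the
            # currently active scene.
            eps = 1e-9
            self.llr += math.log((p_new + eps) / (p_old + eps))
            if self.llr >= self.log_B:
                return "changed"     # the illumination condition really changed
            if self.llr <= self.log_A:
                return "unchanged"   # stay in the current scene
            return "undecided"       # keep sampling

Each frame, the probabilities the scene analysis model assigns to the candidate scene and to the current scene are fed to update(), and the illumination driving scene is switched only once the test returns "changed".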
Here, when the device 1 judges whether the illumination condition has really changed and whether the illumination driving scene of the autonomous vehicle really needs to be switched, it additionally applies the sequential probability ratio test from statistics, which improves the accuracy of identifying the illumination driving scene.
Preferably, in step S204, if the scene analysis model determines that the lighting condition changes, the device 1 determines whether the lighting condition actually changes by combining with the rationality determination; if the illumination condition actually changes, the device 1 is switched to an illumination driving scene corresponding to the changed illumination condition; wherein the rationality determination is made based on time and/or geographic location of the autonomous vehicle.
Specifically, if the scene analysis model determines that the lighting condition has changed in step S203, then in step S204, the apparatus 1 does not immediately switch the lighting driving scene of the autonomous vehicle, but determines whether the lighting condition has actually changed in combination with a rationality determination, where the rationality determination is based on time and/or a geographic location of the autonomous vehicle.
For example, if the illumination driving scene of the autonomous vehicle one second earlier was early-morning driving, the scene one second later cannot be dusk driving; even if the scene analysis model judges that the illumination condition has changed to dusk, that judgment should not be trusted. Conversely, if the illumination driving scene one second earlier was natural-light driving and, according to the data provided by the GPS device of the autonomous vehicle, the vehicle is driving from the open road into a tunnel, then a judgment by the scene analysis model that the illumination condition has changed to darkness is plausible and can be trusted.
Here, when the device 1 judges whether the illumination condition has really changed and whether the illumination driving scene of the autonomous vehicle really needs to be switched, it further applies the rationality determination, which improves the accuracy of identifying the illumination driving scene.
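A sketch of such a rationality determination, built from the two examples just given (the rules, scene names, daytime hours, and the entering_tunnel signal derived from GPS data are illustrative assumptions):

    import datetime

    def change_is_plausible(prev_scene, new_scene, now, entering_tunnel):
        """Reject scene transitions that make no sense given time and location."""
        # Early morning one second ago cannot become dusk one second later.
        if prev_scene == "early_morning" and new_scene == "dusk":
            return False
        # Sudden darkness in daytime is plausible only when entering a tunnel.
        if new_scene in ("dark_lit", "dark_unlit") and 8 <= now.hour <= 17:
            return entering_tunnel
        return True

    # e.g. change_is_plausible("front_light", "dark_lit",
    #                          datetime.datetime.now(), entering_tunnel=True)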
Preferably, if the scene analysis model judges that the illumination condition has changed, the device 1 may first apply the sequential probability ratio test and then the rationality determination to decide whether the illumination condition has really changed; if it has, the device 1 switches to the illumination driving scene corresponding to the changed illumination condition.
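Chaining the two safeguards in that order might look like the following sketch, reusing the hypothetical SPRT and change_is_plausible helpers from above:

    def confirm_switch(sprt, p_new, p_old, prev_scene, new_scene, now, entering_tunnel):
        # First accumulate evidence with the sequential probability ratio test...
        if sprt.update(p_new, p_old) != "changed":
            return False
        # ...then sanity-check the accepted change against time and location.
        return change_is_plausible(prev_scene, new_scene, now, entering_tunnel)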
Fig. 3 shows a schematic structural diagram of an apparatus for recognizing an illuminated driving scene according to another aspect of the present invention.
The device 1 comprises an acquisition device 301, a training device 302, a judging device 303 and a switching device 304. The device 1 may be located in a computer device, for example in the autonomous vehicle, or may be a network device connected to the autonomous vehicle through a network; the device 1 may also be located partially in the network device and partially in the autonomous vehicle, for example with the acquisition device 301 and the training device 302 in the network device and the judging device 303 and the switching device 304 in the autonomous vehicle. It will be understood by those skilled in the art that the above-described arrangements are merely exemplary, and that other arrangements, now known or later developed, which may be suitable for use with the present invention, are also included within the scope of the present invention and are hereby incorporated by reference.
The acquisition device 301 acquires data through on-board sensors of the autonomous vehicle under different lighting conditions.
Specifically, under different lighting conditions, the autonomous vehicle may have different lighting driving scenarios, and the onboard sensor of the autonomous vehicle may acquire corresponding different data, such as video data, image data, radar data, and the like under different lighting conditions. Here, the lighting conditions include, but are not limited to, natural light front light, natural light back light, morning, evening, dark with lighting, dark without lighting, and the like, and the data collected by the vehicle-mounted sensor naturally differs under different lighting conditions, for example, video or image data captured by the vehicle-mounted camera in the case of dark with lighting and in the case of dark without lighting is definitely significantly different. The acquisition device 301 acquires the different data through the onboard sensors of the autonomous vehicle under different lighting conditions. Here, the in-vehicle sensor of the autonomous vehicle includes, but is not limited to, an in-vehicle camera, an in-vehicle radar, and the like.
For example, when the autonomous vehicle is driving under a natural front-light illumination condition, the acquisition device 301 collects video or image data for each angle of view through the on-board cameras of the autonomous vehicle. The on-board cameras are used to simulate the driver's line of sight and viewing angle, and may be located, for example, at the cab, the left and right side mirrors and the center mirror of the autonomous vehicle; they may be, for example, binocular cameras. The video or image data captured by the on-board cameras can be regarded as the surrounding scenes the driver would see under natural front-light conditions if the vehicle were driven by a driver. Subsequently, if the vehicle turns around and is therefore driving under a natural backlight illumination condition, the acquisition device 301 continues to collect video or image data for each angle of view through the on-board cameras and stores them as data under the natural backlight condition for subsequent model training.
The method is suitable for the end-to-end driving mode of autonomous driving. In the end-to-end driving mode, the autonomous vehicle senses the surrounding scene with on-board sensors such as cameras and radar and decides directly how to drive, for example whether to press the accelerator or the brake and how to turn the steering wheel, which gives the vehicle a high degree of driving freedom. By contrast, in the track-following driving mode the autonomous vehicle obtains its position with a high-precision GPS and drives automatically along a preset track; although comparatively safe, the driving track is fixed and inflexible.
It will be understood by those skilled in the art that the above lighting conditions are merely exemplary, and that other lighting conditions, now known or later developed, that may be suitable for use with the present invention are also included within the scope of the present invention and are hereby incorporated by reference.

It will also be understood by those skilled in the art that the above on-board sensors are merely exemplary, and that other on-board sensors, now known or later developed, that may be suitable for use with the present invention are also included within the scope of the present invention and are hereby incorporated by reference.

It will also be understood by those skilled in the art that the above manner of collecting data is merely exemplary, and that other manners of collecting data, now known or later developed, that may be suitable for use with the present invention are also included within the scope of the present invention and are hereby incorporated by reference.
Preferably, the device 1 further comprises an obtaining device (not shown). The obtaining device trains and obtains the different illuminated driving scenes corresponding to the autonomous vehicle under different lighting conditions, according to the data collected by the on-board sensors of the autonomous vehicle under those lighting conditions and the driving operations of a driver.
Specifically, the autonomous vehicle may first be driven manually by a trained driver under different lighting conditions. At the same location, facing the same event, the driver's operations may differ with the lighting condition; for example, they may be clearly different under natural front light and under natural backlight. The obtaining device therefore acquires the driver's driving operations and records which operation was performed for which event under which lighting condition. In addition, the obtaining device acquires the data collected by the on-board sensors at that moment, for example the video or image data captured by the on-board camera while the driver performs a given operation, and uses these data together with the corresponding driving operations to train and obtain the different illuminated driving scenes corresponding to the autonomous vehicle under the different lighting conditions.
For example, under the lighting condition of darkness without illumination, the driver manually driving the autonomous vehicle turns on the high beams after starting the vehicle, and when encountering another vehicle switches the high beams off and the low beams on at a distance of about 150 m. Under the lighting condition of darkness with illumination, the driver only turns on the low beams after starting the vehicle and performs no high/low-beam switching when encountering other vehicles. Throughout, the on-board sensors of the autonomous vehicle keep collecting the corresponding sensor data. The obtaining device acquires these sensor data together with the corresponding driving operations and performs model training: for example, with the video or image data captured by the on-board camera in darkness without illumination and the driving situation faced by the vehicle (starting the vehicle, approaching another vehicle, and so on) as input, and the driving response the vehicle should make under that lighting condition and situation (turning on the high beams, switching from high beams to low beams, and so on) as output. In this way the different illuminated driving scenes corresponding to the autonomous vehicle under different lighting conditions are obtained through training.
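The per-scene training pairs described here might be organized roughly as in the sketch below; the situation and action vocabularies are illustrative assumptions rather than terms fixed by the description.

```python
# Per-scene training pairs: (driving situation observed in the sensor
# data, recorded driver response). All names are illustrative.
SCENE_TRAINING_PAIRS = {
    "dark_without_illumination": [
        ("vehicle_started", "turn_on_high_beam"),
        ("oncoming_vehicle_~150m", "switch_high_beam_to_low_beam"),
    ],
    "dark_with_illumination": [
        ("vehicle_started", "turn_on_low_beam"),
        ("oncoming_vehicle_~150m", "no_beam_change"),
    ],
}
```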
Preferably, the illuminated driving scene comprises any one of:
driving in natural front light;

driving in natural backlight;

driving in the early morning;

driving at dusk;

driving in darkness with illumination;

driving in darkness without illumination.
Under different lighting conditions the autonomous vehicle is in different illuminated driving scenes, and in each scene it has a different driving mode; for example, at the same location it may handle the same situation differently under different lighting conditions. It will be understood by those skilled in the art that the above illuminated driving scenes are merely exemplary, and that other illuminated driving scenes, now known or later developed, that may be suitable for use with the present invention are also included within the scope of the present invention and are hereby incorporated by reference.
The training device 302 trains a scene analysis model with the collected data as input and the illuminated driving scene corresponding to the autonomous vehicle as output.
Specifically, for the different lighting conditions, the acquisition device 301 collects a large amount of data, such as video, image or radar data, through the on-board sensors. The autonomous vehicle is in different illuminated driving scenes under different lighting conditions and has a different driving mode in each scene; for example, at the same location it may handle the same situation differently under different lighting conditions. The training device 302 takes the data captured by the on-board sensors under the various lighting conditions, as acquired by the acquisition device 301, as input, and the illuminated driving scene corresponding to the autonomous vehicle as output, to train a scene analysis model.
For example, the training device 302 takes a large amount of video data captured by the on-board camera under the natural front-light condition as input and the corresponding illuminated driving scene as output to train the scene analysis model, so that the model can accurately recognize that the autonomous vehicle is under the natural front-light condition. Similarly, the training device 302 takes large amounts of video data captured under the other lighting conditions, such as natural backlight, early morning, dusk, darkness with illumination and darkness without illumination, as input, and the illuminated driving scenes under those conditions as output, so that the scene analysis model can accurately identify the lighting condition the autonomous vehicle is under.
Here, the scene analysis model may be, for example, a simple classification model, and it may be trained with an existing training method for classification models: since it is known which lighting condition the autonomous vehicle is under and which illuminated driving scene corresponds to it, the various classification inputs and their classification outputs are available for training the scene analysis model. Once training is complete, the model can classify new data; for example, when the device 1 collects video data through the on-board camera, the scene analysis model can determine from those data which illuminated driving scene applies.
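As a concrete illustration, such a classification model could be a standard image classifier. The sketch below, assuming PyTorch and torchvision are available, puts a six-way head over the lighting scenes listed earlier; the class names, the ResNet-18 backbone and the training step are illustrative assumptions, not details fixed by the description.

```python
import torch.nn as nn
from torchvision import models

LIGHTING_SCENES = [
    "natural_front_light", "natural_backlight", "early_morning",
    "dusk", "dark_with_illumination", "dark_without_illumination",
]

def build_scene_model() -> nn.Module:
    # Small off-the-shelf backbone with its head replaced by a
    # six-way lighting-scene classifier.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, len(LIGHTING_SCENES))
    return model

def train_step(model, images, labels, optimizer, criterion):
    # images: (B, 3, H, W) frames from the on-board camera;
    # labels: (B,) indices into LIGHTING_SCENES.
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```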
It will be understood by those skilled in the art that the above method of training a scene analysis model is merely exemplary, and that other methods of training a scene analysis model, now known or later developed, that may be suitable for use with the present invention are also included within the scope of the present invention and are hereby incorporated by reference.
Preferably, the training device 302 obtains positive and negative samples of the autonomous vehicle's data labeled for each corresponding illuminated driving scene, and trains the scene analysis model according to these positive and negative samples.
Specifically, the data collected by the acquisition device 301 under different lighting conditions may be labeled to indicate positive and negative samples for each corresponding illuminated driving scene. For example, for driving in darkness without illumination, collected video data without illumination from an external light source can be labeled as positive samples, and collected video data with illumination from an external light source can be labeled as negative samples. The training device 302 obtains these labeled positive and negative samples; for example, if the samples are labeled manually by a user on a user device, the training device 302 may obtain them by calling an Application Program Interface (API) provided by the user device one or more times, or through another agreed communication manner. The training device 302 then trains the scene analysis model according to the labeled positive and negative samples, for example with an existing training method.
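As an illustration of training from labeled positive and negative samples, the following sketch fits a per-scene binary detector with scikit-learn, assuming feature vectors have already been extracted from the frames; the sample format and the model choice are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_scene_detector(samples):
    # samples: iterable of (feature_vector, label) pairs, where label
    # is 1 for frames annotated as the target scene (positive samples)
    # and 0 for frames annotated otherwise (negative samples).
    X = np.array([features for features, _ in samples])
    y = np.array([label for _, label in samples])
    return LogisticRegression(max_iter=1000).fit(X, y)
```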
The device 1 further comprises a determination device 303 and a switching device 304.
The determination device 303 obtains, during autonomous driving of the vehicle, the real-time data collected by the on-board sensors, and uses the scene analysis model to determine whether the lighting condition has changed.
Specifically, training the scene analysis model with the acquisition device 301 and the training device 302 is preparatory work; once training is complete, the model can be applied to the autonomous vehicle during actual autonomous driving to determine whether its lighting condition has changed. During actual autonomous driving, the on-board sensors collect data in real time; for example, the on-board cameras at the cab, the left and right side mirrors and the central rear-view mirror of the autonomous vehicle continuously capture the corresponding video or image data. The determination device 303 obtains these real-time data by interacting with the on-board sensors, feeds them to the scene analysis model in real time, and determines from the model's output whether the lighting condition of the autonomous vehicle has changed.
For example, suppose the autonomous vehicle has been driving under the natural front-light condition. Its on-board camera continuously collects video or image data in real time; the determination device 303 continuously obtains these data from the camera and feeds them to the scene analysis model, whose output is the illuminated driving scene of driving in natural front light, indicating that the lighting condition has not changed. The vehicle then makes a U-turn; the on-board camera still collects video or image data in real time, the determination device 303 still feeds them to the scene analysis model, and now the model outputs the illuminated driving scene of driving in natural backlight, indicating that the lighting condition of the vehicle has changed.
Here, since the on-board sensors continuously collect real-time data and the determination device 303 continuously feeds them to the scene analysis model, the determination device 303 may conclude that the lighting condition has changed as soon as the illuminated driving scene output by the model changes; alternatively, over a certain amount of real-time data, it may conclude that the lighting condition has changed only when the number of outputs indicating a changed illuminated driving scene exceeds a predetermined threshold.
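As one possible realization of the threshold-based variant, the sketch below keeps a sliding window of recent model outputs and declares a change only when most of the window disagrees with the currently active scene; the window size, the threshold and the model/frame interfaces are assumptions for illustration.

```python
from collections import deque

def detect_scene_change(model, camera_frames, current_scene,
                        window=30, threshold=0.8):
    # model.predict(frame) is assumed to return a scene name;
    # camera_frames is assumed to yield frames in real time.
    recent = deque(maxlen=window)
    for frame in camera_frames:
        recent.append(model.predict(frame))
        if len(recent) == window:
            changed = [s for s in recent if s != current_scene]
            if len(changed) / window >= threshold:
                # Most recent frames indicate a new scene: report the
                # most frequent differing scene as the new one.
                return max(set(changed), key=changed.count)
    return None  # no change detected before the stream ended
```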
It will be understood by those skilled in the art that the above manners of determining whether the lighting condition has changed are merely exemplary, and that other such manners, now known or later developed, that may be suitable for use with the present invention are also included within the scope of the present invention and are hereby incorporated by reference.
If the lighting condition changes, the switching device 304 switches to the illuminated driving scene corresponding to the changed lighting condition.
Specifically, if the determination device 303 determines that the lighting condition of the autonomous vehicle has changed, the switching device 304 switches the vehicle's illuminated driving scene to the one corresponding to the changed lighting condition; that is, it causes the autonomous vehicle to continue autonomous driving in that illuminated driving scene.
Continuing the earlier example: the autonomous vehicle has been driving under the natural front-light condition; after it makes a U-turn, the on-board camera keeps collecting video or image data in real time, the determination device 303 keeps feeding them to the scene analysis model, and the model now outputs the illuminated driving scene of driving in natural backlight, indicating that the lighting condition has changed. The switching device 304 then switches the vehicle's illuminated driving scene to the one corresponding to the changed lighting condition, that is, to the illuminated driving scene of driving in natural backlight.
The device 1 acquires data through the on-board sensors of the autonomous vehicle under different lighting conditions, takes the collected data as input and the corresponding illuminated driving scene as output, and trains a scene analysis model; then, during autonomous driving, it obtains the real-time data collected by the on-board sensors, uses the scene analysis model to determine whether the lighting condition has changed and, if so, switches to the illuminated driving scene corresponding to the changed lighting condition. The device 1 decomposes the illumination problem on the basis of deep learning, training the illuminated driving scenes for the different lighting conditions in advance, while also training a scene analysis model that analyzes and judges the vehicle's lighting scene in real time while it drives and switches the illuminated driving scene accordingly. This approach of training several models by scene category and switching between them in real time is a brand-new attempt, and applying scene analysis to autonomous driving represents a considerable conceptual leap. The device 1 greatly improves the feasibility and robustness of autonomous driving, especially end-to-end autonomous driving; compared with training one unified model covering every illuminated driving scene, it reduces complexity, improves computational efficiency, saves computing resources and thereby lowers the cost of autonomous driving.
Preferably, if the scene analysis model determines that the lighting condition has changed, the switching device 304 further applies a sequential probability ratio test to determine whether the lighting condition has actually changed; if it has, the switching device 304 switches to the illuminated driving scene corresponding to the changed lighting condition.
Specifically, even if the determination device 303 determines that the lighting condition of the autonomous vehicle has changed, the illuminated driving scene is not switched immediately; instead, the judgment is combined with a sequential probability ratio test. For example, the scene analysis model continuously and sequentially feeds its judgments of the current lighting condition to a sequential probability ratio test module, and only once that module concludes that the lighting condition has actually changed does the switching device 304 switch the autonomous vehicle to the illuminated driving scene corresponding to the changed lighting condition.
The sequential probability ratio test belongs to a branch of mathematical statistics that studies so-called sequential sampling schemes and how to use the samples obtained under such schemes for statistical inference. In a sequential sampling scheme the total number of samples (observations or trials) is not fixed in advance: a small number of samples is drawn first, and depending on the result one either stops sampling or continues and draws further samples, until sampling stops. By contrast, a scheme in which the number of samples is fixed in advance is called a fixed sampling scheme.
The probability ratio test is as follows. Let x be a random variable with probability density function f, and test the null hypothesis H0: f = f0 against the alternative hypothesis H1: f = f1. Define the likelihood ratio l(x) = f1(x)/f0(x) and choose a suitable constant r > 0; the null hypothesis is rejected when l(x) >= r and accepted when l(x) < r. Generalizing the probability ratio test yields the sequential probability ratio test: take a sequence of random variables x1, x2, ..., and judge using their joint distributions P{x1 in dξ1, ..., xn in dξn} = fn(ξ1, ..., ξn) dξ1 ... dξn (n = 1, 2, ...). Suppose H0: fn = f0n and H1: fn = f1n, and let ln = ln(x1, ..., xn) = f1n(x1, ..., xn)/f0n(x1, ..., xn). Choose constants 0 < A < B < infinity (generally A < 1 < B). The number of samples drawn from the sequence x1, x2, ... (that is, the stop signal) is the random number N, defined as the first n for which ln leaves the interval (A, B):

N = min{ n : ln >= B or ln <= A }.

If N < infinity, the null hypothesis is rejected when lN >= B and accepted when lN <= A.
Here, the change of lighting condition determined by the scene analysis model may serve as the hypothesis to be tested. For example, suppose the autonomous vehicle has been driving under the natural front-light condition and, after the vehicle makes a U-turn, the scene analysis model first determines that the lighting condition has changed to natural backlight; the determination device 303 may then take "the lighting condition has changed to natural backlight" as the hypothesis. Thereafter, the on-board camera keeps collecting video or image data in real time, the determination device 303 keeps feeding them to the scene analysis model, and the model keeps judging whether the lighting condition has changed, continuously and sequentially feeding its judgments to the sequential probability ratio test module; each judgment can be understood as one sample in the sequential test. As these samples keep arriving, the test module will, at some sample, be able to conclude that the lighting condition has really changed. For example, the scene analysis model keeps judging that the lighting condition has changed to natural backlight and keeps feeding this judgment to the test module; at some input the module concludes that the lighting condition has actually changed to natural backlight, and the switching device 304 then switches the autonomous vehicle to the illuminated driving scene of driving in natural backlight.
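To make the test concrete, here is a minimal sketch of a sequential probability ratio test over the scene analysis model's per-frame verdicts, assuming each verdict is a Bernoulli observation (1 = "frame indicates the new scene"). The probabilities p0 and p1 and the error rates alpha and beta are illustrative assumptions, and Wald's approximate thresholds stand in for the constants A and B above.

```python
import math

def sprt_scene_change(verdicts, p0=0.1, p1=0.9, alpha=0.01, beta=0.01):
    # H0: the scene has not really changed (P(verdict = 1) = p0);
    # H1: the scene has really changed     (P(verdict = 1) = p1).
    # Wald's thresholds: A ~ beta/(1 - alpha), B ~ (1 - beta)/alpha.
    log_a = math.log(beta / (1 - alpha))
    log_b = math.log((1 - beta) / alpha)
    llr = 0.0  # cumulative log-likelihood ratio log(l_n)
    for v in verdicts:
        if v:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= log_b:
            return True   # accept H1: the lighting condition changed
        if llr <= log_a:
            return False  # accept H0: no real change
    return None  # undecided: keep sampling
```

With these illustrative parameters, a few consecutive "changed" verdicts are enough to cross the upper threshold, matching the behavior described above: the module concludes the change is real only after the model's judgment has been sustained over several samples.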
Here, by additionally applying the sequential probability ratio test from statistics when deciding whether the lighting condition has actually changed and whether the illuminated driving scene of the autonomous vehicle really needs to be switched, the device 1 improves the accuracy of identifying the illuminated driving scene.
Preferably, if the scene analysis model determines that the lighting condition has changed, the switching device 304 further applies a rationality determination to decide whether the lighting condition has actually changed; if it has, the switching device 304 switches to the illuminated driving scene corresponding to the changed lighting condition. The rationality determination is made based on the time and/or the geographic location of the autonomous vehicle.
Specifically, if the scene analysis model determines that the lighting condition has changed, the switching device 304 does not switch the illuminated driving scene immediately but first determines, through a rationality determination based on the time and/or the geographic location of the autonomous vehicle, whether the lighting condition has actually changed.
For example, if the illuminated driving scene of the autonomous vehicle one second earlier was driving in the early morning, the scene one second later cannot be driving at dusk; even if the scene analysis model determines that the lighting condition has changed to dusk, that determination should not be trusted. Conversely, if the scene one second earlier was driving in natural light and the data provided by the vehicle's GPS device show that it is driving from the open road into a tunnel, then a determination by the scene analysis model that the lighting condition has changed to darkness is plausible and can be trusted.
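One simple way such a rationality determination might be realized is a lookup of which scenes are plausible at the current local time, with geographic cues (such as a GPS-reported tunnel entry) handled analogously; the hour table below is an illustrative assumption, not a rule fixed by the description.

```python
from datetime import datetime

PLAUSIBLE_BY_HOUR = {
    # local hour range -> scenes that may plausibly occur then
    range(5, 8): {"early_morning", "dark_with_illumination",
                  "dark_without_illumination"},
    range(8, 17): {"natural_front_light", "natural_backlight",
                   "dark_with_illumination",   # e.g. inside a tunnel
                   "dark_without_illumination"},
    range(17, 20): {"dusk", "dark_with_illumination",
                    "dark_without_illumination"},
}

def is_plausible(candidate_scene: str, now: datetime) -> bool:
    # Hours not listed above (night) only allow the two dark scenes.
    for hours, scenes in PLAUSIBLE_BY_HOUR.items():
        if now.hour in hours:
            return candidate_scene in scenes
    return candidate_scene in {"dark_with_illumination",
                               "dark_without_illumination"}
```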
Here, by further combining a rationality determination when deciding whether the lighting condition has actually changed and whether the illuminated driving scene of the autonomous vehicle really needs to be switched, the device 1 improves the accuracy of identifying the illuminated driving scene.
Preferably, if the scene analysis model determines that the lighting condition has changed, the switching device 304 may first apply the sequential probability ratio test and then the rationality determination to decide whether the lighting condition has actually changed; if it has, the switching device 304 switches to the illuminated driving scene corresponding to the changed lighting condition.
The invention also provides a computer readable storage medium having stored thereon computer code which, when executed, performs the method described above.
The invention also provides a computer program product which, when executed by a computer device, performs the method described above.
The present invention also provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, causing the one or more processors to implement the method described above.
It is noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, the various means of the invention may be implemented using Application Specific Integrated Circuits (ASICs) or any other similar hardware devices. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A method of identifying an illuminated driving scene, wherein the method comprises:

a, acquiring data through on-board sensors of an autonomous vehicle under different lighting conditions;

b, training a scene analysis model with the collected data as input and the illuminated driving scene corresponding to the autonomous vehicle as output; wherein the scene analysis model is trained on labeled samples collected under the different illuminated driving scenes;

wherein the method further comprises:

x, during autonomous driving of the vehicle, obtaining real-time data collected by the on-board sensors and using the scene analysis model to determine whether the lighting condition has changed;

y, if the lighting condition has changed, switching to the illuminated driving scene corresponding to the changed lighting condition;

wherein step y comprises:

if the scene analysis model determines that the lighting condition has changed, determining whether the lighting condition has actually changed in combination with a sequential probability ratio test and/or a rationality determination;

if the lighting condition has actually changed, switching to the illuminated driving scene corresponding to the changed lighting condition;

wherein the rationality determination is made based on the time and/or geographic location of the autonomous vehicle.
2. The method of claim 1, wherein step b further comprises:

acquiring positive and negative samples of the autonomous vehicle's data labeled for each corresponding illuminated driving scene;

and training the scene analysis model according to the positive and negative samples.
3. The method of claim 1 or 2, wherein the method further comprises:

training and obtaining, according to data collected by the on-board sensors of the autonomous vehicle under different lighting conditions and the driving operations of a driver, the different illuminated driving scenes corresponding to the autonomous vehicle under the different lighting conditions.
4. The method of claim 1 or 2, wherein the illuminated driving scene comprises any one of:

driving in natural front light;

driving in natural backlight;

driving in the early morning;

driving at dusk;

driving in darkness with illumination;

driving in darkness without illumination.
5. An apparatus for identifying an illuminated driving scene, wherein the apparatus comprises:

an acquisition device for acquiring data through on-board sensors of an autonomous vehicle under different lighting conditions;

a training device for training a scene analysis model with the collected data as input and the illuminated driving scene corresponding to the autonomous vehicle as output; wherein the scene analysis model is trained on labeled samples collected under the different illuminated driving scenes;

wherein the apparatus further comprises:

a determination device for obtaining, during autonomous driving of the vehicle, real-time data collected by the on-board sensors and using the scene analysis model to determine whether the lighting condition has changed;

a switching device for switching, if the lighting condition has changed, to the illuminated driving scene corresponding to the changed lighting condition;

wherein the switching device is configured to:

if the scene analysis model determines that the lighting condition has changed, determine whether the lighting condition has actually changed in combination with a sequential probability ratio test and/or a rationality determination;

if the lighting condition has actually changed, switch to the illuminated driving scene corresponding to the changed lighting condition;

wherein the rationality determination is made based on the time and/or geographic location of the autonomous vehicle.
6. The apparatus of claim 5, wherein the training device is configured to:

acquire positive and negative samples of the autonomous vehicle's data labeled for each corresponding illuminated driving scene;

and train the scene analysis model according to the positive and negative samples.
7. The apparatus of claim 5 or 6, wherein the apparatus further comprises:

an obtaining device for training and obtaining, according to data collected by the on-board sensors of the autonomous vehicle under different lighting conditions and the driving operations of a driver, the different illuminated driving scenes corresponding to the autonomous vehicle under the different lighting conditions.
8. The apparatus of claim 5 or 6, wherein the illuminated driving scene comprises any one of:

driving in natural front light;

driving in natural backlight;

driving in the early morning;

driving at dusk;

driving in darkness with illumination;

driving in darkness without illumination.
9. A computer readable storage medium storing computer code which, when executed, performs the method of any of claims 1 to 4.
10. A computer device, the computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.