CN115471495A - Model robustness detection method, related device and storage medium

Model robustness detection method, related device and storage medium

Info

Publication number
CN115471495A
CN115471495A
Authority
CN
China
Prior art keywords
target
vehicle
target object
model
pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211231908.4A
Other languages
Chinese (zh)
Other versions
CN115471495B (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Real AI Technology Co Ltd
Original Assignee
Beijing Real AI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Real AI Technology Co Ltd filed Critical Beijing Real AI Technology Co Ltd
Priority to CN202211231908.4A
Publication of CN115471495A
Application granted
Publication of CN115471495B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30261 - Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the present application discloses a model robustness detection method, a related device, and a storage medium. The method comprises: determining at least one target object currently to be attacked in a driving scenario simulation platform; importing, within a target time period indicated by a preset script, at least one preset adversarial pattern into the effective range of the at least one target object to obtain at least one adversarial sample; and inputting the at least one adversarial sample into a vehicle perception model to obtain a recognition result, the recognition result being used to control generation of a driving instruction for the target vehicle. The recognition result indicates that a first confidence is higher than a second confidence, where the first confidence is the confidence with which the vehicle perception model recognizes the adversarial sample as not being the target object, and the second confidence is the confidence with which it recognizes the adversarial sample as being the target object. This embodiment can improve the attack effect on the model, shorten the model iteration cycle, and so on.

Description

Model robustness detection method, related device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a model robustness detection method, a related apparatus, and a storage medium.
Background
With improvements in computer hardware and the construction of large-scale autonomous driving datasets, perception technology based on deep learning is increasingly applied in fields such as autonomous robots and autonomous driving. Because of the complex internal design of deep learning models and their data-driven nature, such models have low interpretability and behave as black boxes; attacks on them are therefore hard to detect and can produce results inconsistent with human judgment, which may cause incalculable losses in driving. To verify the robustness of a perception model, various test scenarios must be designed to simulate adversarial attack and defense in different settings.
At present, two methods are mainly used to apply interference to a model in order to test its robustness: the first adds perturbations directly to the original image and point cloud data and then feeds the perturbed data into the model; the second prints an attack image, crafted on a digital image, into a physical object and then, in a real scene, feeds the adversarial image information to the model through a camera.
However, the first prior-art method adds the perturbed adversarial sample directly to the captured image and cannot simulate continuous attacks on a vehicle while it is driving, so the attack effect is poor. The second prior-art method requires printing the adversarial pattern as a physical object and photographing it in a real scene, which increases the time cost of model robustness detection and lengthens the model iteration cycle.
Disclosure of Invention
Embodiments of the present application provide a model robustness detection method, a related device, and a storage medium, which can guarantee the safety of model detection, improve the attack effect on the model, and shorten the model iteration cycle.
In a first aspect, an embodiment of the present application provides a model robustness detection method applied to a model robustness detection system. The system includes a driving scenario simulation platform, a vehicle perception model, and a virtual controller; the driving scenario simulation platform contains a target vehicle and at least one target object; the target vehicle runs according to a preset script, and the image and point cloud data of the target vehicle on the driving scenario simulation platform are transmitted to the vehicle perception model through a port. The method includes:
determining at least one target object currently to be attacked in the driving scenario simulation platform, the target object being an object perceivable by the vehicle perception model;
importing, within a target time period indicated by the preset script, at least one preset adversarial pattern into the effective range of the at least one target object to obtain at least one adversarial sample;
inputting the at least one adversarial sample into the vehicle perception model to obtain a recognition result, the recognition result being used to control the virtual controller to generate a driving instruction for the target vehicle. The recognition result indicates that a first confidence is higher than a second confidence, where the first confidence is the confidence with which the vehicle perception model recognizes the adversarial sample as not being the target object, and the second confidence is the confidence with which the vehicle perception model recognizes the adversarial sample as being the target object.
In some embodiments, the manner of importing the adversarial pattern into the driving scenario simulation platform includes at least one of: import through a program interface, projection by a virtual projection device, and automated icon dragging.
In a second aspect, an embodiment of the present application further provides a model robustness detection apparatus configured in a model robustness detection system. The system includes a driving scenario simulation platform, a vehicle perception model, and a virtual controller; the driving scenario simulation platform contains a target vehicle and at least one target object; the target vehicle runs according to a preset script, and the image and point cloud data of the target vehicle on the platform are transmitted to the vehicle perception model through a port. The apparatus includes:
a transceiver module, configured to acquire the preset script;
a processing module, configured to: determine at least one target object currently to be attacked in the driving scenario simulation platform, the target object being an object perceivable by the vehicle perception model; import, within a target time period indicated by the preset script, at least one preset adversarial pattern into the effective range of the at least one target object to obtain at least one adversarial sample; and input the at least one adversarial sample into the vehicle perception model to obtain a recognition result, the recognition result being used to control the virtual controller to generate a driving instruction for the target vehicle, and indicating that a first confidence is higher than a second confidence, where the first confidence is the confidence with which the vehicle perception model recognizes the adversarial sample as not being the target object, and the second confidence is the confidence with which it recognizes the adversarial sample as being the target object.
In some embodiments, the driving scenario simulation platform presets a plurality of weather materials, the target time period includes a first time period, and the adversarial pattern includes a first adversarial pattern; when performing the step of importing at least one preset adversarial pattern into the effective range of the at least one target object within the target time period indicated by the preset script to obtain at least one adversarial sample, the processing module is specifically configured to:
determine a target weather material corresponding to the first time period from the plurality of weather materials;
switch the current scene interface of the target vehicle, within the first time period, to a target weather scene interface corresponding to the target weather material, the target weather scene interface including the at least one target object;
select at least one first adversarial pattern matching the target weather material according to the characteristics of the target weather material;
add the at least one first adversarial pattern to the effective range of the at least one target object to obtain at least one first adversarial sample;
and the step of inputting the at least one adversarial sample into the vehicle perception model to obtain a recognition result includes:
inputting the first adversarial sample into the vehicle perception model to obtain a recognition result of not being the target object.
In some embodiments, a plurality of run cycles are preset in the preset script, together with a correspondence between run-cycle number and weather material; the first time period is the running duration of a run cycle, that is, the running time period corresponding to that run cycle. When performing the step of determining the target weather material corresponding to the first time period from the plurality of weather materials, the processing module is specifically configured to:
determine the cycle number of the current run cycle before the current cycle starts to run;
and determine, from the plurality of weather materials, the target weather material corresponding to the current cycle number according to the correspondence between run-cycle number and weather material.
In some embodiments, the target object is a first vehicle, the target time period includes a second time period, and the adversarial pattern includes a second adversarial pattern; when performing the step of importing at least one preset adversarial pattern into the effective range of the at least one target object within the target time period indicated by the preset script to obtain at least one adversarial sample, the processing module is specifically configured to:
acquire the second adversarial pattern, preset for the second time period, corresponding to the first vehicle;
add the second adversarial pattern to the effective range of the first vehicle to obtain a second adversarial sample;
and the step of inputting the at least one adversarial sample into the vehicle perception model to obtain a recognition result includes:
inputting the second adversarial sample into the vehicle perception model to obtain a recognition result of a second vehicle, where the vehicle type of the second vehicle differs from that of the first vehicle.
In some embodiments, the target object is a pedestrian, the target time period includes a third time period, and the adversarial pattern includes a third adversarial pattern; when performing the step of importing at least one preset adversarial pattern into the effective range of the at least one target object within the target time period indicated by the preset script to obtain at least one adversarial sample, the processing module is specifically configured to:
acquire, when a specific road section or traffic sign in the driving scenario simulation platform is in a preset state, the third adversarial pattern, preset for the third time period, corresponding to the pedestrian;
add the third adversarial pattern to the effective range of the pedestrian to obtain a third adversarial sample;
and the step of inputting the at least one adversarial sample into the vehicle perception model to obtain a recognition result includes:
inputting the third adversarial sample into the vehicle perception model to obtain a recognition result of no pedestrian.
In some embodiments, the driving scenario simulation platform includes a first scene interface in which the target vehicle travels within the target time period; when performing the step of importing at least one preset adversarial pattern into the effective range of the at least one target object within the target time period indicated by the preset script to obtain at least one adversarial sample, the processing module is specifically configured to:
display, in the first scene interface, a virtual projection device preset for the target time period;
and project, through the virtual projection device, the at least one adversarial pattern into the effective range of the at least one target object to obtain the at least one adversarial sample.
In some embodiments, the transceiver module is further configured to receive an instruction to add a target object;
and the processing module is further configured to add the target object in the driving scenario simulation platform according to the addition instruction.
In a third aspect, an embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the above method when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium, in which a computer program is stored, the computer program including program instructions, which when executed by a processor, implement the above method.
Compared with the prior art, in the solution provided by the embodiments of the present application, a target vehicle runs in a driving scenario simulation platform according to a preset script, the platform contains at least one target object, and within the target time period indicated by the preset script an adversarial pattern is imported into the effective range corresponding to the target object to obtain an adversarial sample, which is input into the vehicle perception model to attack it. Because the adversarial sample is generated directly in the simulation platform while the vehicle runs, continuous attacks during driving can be simulated without printing physical objects, which improves the attack effect and shortens the model iteration cycle.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1a is a schematic view of an application scenario of a model robustness detection method provided in an embodiment of the present application;
FIG. 1b is a schematic diagram of the main perception tasks of the vehicle perception model provided in the embodiment of the present application;
fig. 2 is a schematic flowchart of a model robustness detection method provided in an embodiment of the present application;
fig. 3a is a schematic view of another application scenario of the model robustness detection method provided in the embodiment of the present application;
fig. 3b is a schematic view of another application scenario of the model robustness detection method according to the embodiment of the present application;
fig. 4a is a schematic view of another application scenario of the model robustness detection method provided in the embodiment of the present application;
fig. 4b is a schematic view of another application scenario of the model robustness detection method provided in the embodiment of the present application;
fig. 4c is a schematic view of another application scenario of the model robustness detection method provided in the embodiment of the present application;
fig. 5 is a schematic view of another application scenario of the model robustness detection method provided in the embodiment of the present application;
fig. 6a is a schematic view of another application scenario of the model robustness detection method provided in the embodiment of the present application;
fig. 6b is a schematic view of another application scenario of the model robustness detection method according to the embodiment of the present application;
fig. 7 is a schematic view of another application scenario of the model robustness detection method according to the embodiment of the present application;
FIG. 8 is a schematic block diagram of a model robustness detection apparatus provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a server according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and in the claims of the embodiments of the application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus, such that a division of modules presented in an embodiment of the present application is merely a logical division and may be implemented in a practical application in a different manner, such that multiple modules may be combined or integrated into another system or some features may be omitted or not implemented, such that a shown or discussed coupling or direct coupling or communication between modules may be through some interfaces and an indirect coupling or communication between modules may be electrical or other similar, and such that embodiments are not limited in this application. Moreover, the modules or sub-modules described as separate components may or may not be physically separated, may or may not be physical modules, or may be distributed in a plurality of circuit modules, and some or all of the modules may be selected according to actual needs to implement the purpose of the embodiments of the present application.
The model robustness detection method may be executed by the model robustness detection apparatus provided in the embodiments of the present application, or by a computer device integrating that apparatus. The apparatus may be implemented in hardware or software, and the computer device may be a terminal or a server.
When the computer device is a server, it may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
When the computer device is a terminal, it may include, but is not limited to, smart terminals with multimedia data processing functions (for example, video playback and music playback), such as smartphones, tablet computers, notebook computers, desktop computers, smart televisions, smart speakers, personal digital assistants (PDA), and smart watches.
The scheme of the embodiment of the application can be realized based on an artificial intelligence technology, and particularly relates to the technical field of computer vision in the artificial intelligence technology and the fields of cloud computing, cloud storage, databases and the like in the cloud technology, which are respectively introduced below.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (CV) is the science of studying how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and to further process the resulting images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of obtaining information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition technologies such as face recognition and fingerprint recognition.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme of the embodiment of the application can be realized based on a cloud technology, particularly relates to the technical fields of cloud computing, cloud storage, databases and the like in the cloud technology, and is respectively introduced below.
Cloud technology refers to hosting technology that unifies hardware, software, network, and other resources within a wide-area or local-area network to realize computation, storage, processing, and sharing of data. It is a general term for the network, information, integration, management-platform, and application technologies applied in the cloud computing business model; these technologies can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support: background services of technical network systems, such as video websites, image websites, and portal websites, require large amounts of computing and storage resources. As the internet industry develops, each item may come to have its own identification mark that must be transmitted to a background system for logical processing; data at different levels are processed separately, and all kinds of industry data need strong backend system support, which can only be realized through cloud computing. In the embodiments of this application, the recognition result may be stored using cloud technology.
A distributed cloud storage system (hereinafter, the storage system) is one that, through application software or application interfaces, aggregates a large number of storage devices of different types in a network (storage devices are also called storage nodes) to work cooperatively, using functions such as cluster applications, grid technology, and distributed storage file systems, and that externally provides data storage and service access functions. In the embodiments of this application, information such as network configuration can be stored in the storage system so that the server can conveniently retrieve it.
At present, the storage method of such a storage system is as follows: logical volumes are created, and when a logical volume is created it is allocated physical storage space, which may consist of the disks of one or several storage devices. A client stores data on a logical volume, that is, on a file system; the file system divides the data into many parts, each part being an object that contains not only the data but also additional information such as a data identifier (ID). The file system writes each object into the physical storage space of the logical volume and records the storage location of each object, so that when the client requests access to the data, the file system can let the client access it according to each object's storage location.
The process by which the storage system allocates physical storage space to a logical volume is specifically: physical storage space is divided in advance into stripes according to an estimate of the capacity of the objects to be stored in the logical volume (the estimate often leaves a large margin relative to the actual object capacity) and the Redundant Array of Independent Disks (RAID) configuration, and one logical volume can be understood as one stripe; physical storage space is thereby allocated to the logical volume.
A database, in short, can be regarded as an electronic filing cabinet: a place for storing electronic files, in which a user can add, query, update, and delete data. A "database" is a collection of data that is stored together in a way that can be shared by multiple users, has as little redundancy as possible, and is independent of applications.
A database management system (DBMS) is computer software designed to manage databases, and generally provides basic functions such as storage, retrieval, security assurance, and backup. Database management systems can be classified by the database model they support, such as relational or XML (Extensible Markup Language); by the type of computer they support, such as server clusters or mobile phones; by the query language they use, such as SQL (Structured Query Language) or XQuery; by their performance focus, such as maximum size or maximum operating speed; or by other criteria. Regardless of the classification used, some DBMSs span categories, for example by supporting multiple query languages simultaneously. In the embodiments of this application, the recognition result can be stored in a database management system so that the server can conveniently retrieve it.
It should be noted that the terminal involved in the embodiments of the present application may be a device providing voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a mobile telephone (or "cellular" phone) or a computer with a mobile terminal, for example a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device that exchanges voice and/or data with a radio access network. Examples include Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, and Personal Digital Assistants (PDA).
In some embodiments, the present application may be applied to a model robustness detection system as shown in FIG. 1a. The model robustness detection system includes a driving scenario simulation platform, a vehicle perception model, and a virtual controller; the driving scenario simulation platform contains a target vehicle and at least one target object (in FIG. 1a the target object is a truck, as an example); the target vehicle runs according to a preset script, and the image and point cloud data of the target vehicle on the platform are transmitted to the vehicle perception model through a port.
Specifically, in some embodiments, the model robustness detection system receives a preset script input by a user, so that the target vehicle in the driving scenario simulation platform runs according to that script. When executing the model robustness detection method provided by this application, the system determines at least one target object currently to be attacked in the driving scenario simulation platform, the target object being an object perceivable by the vehicle perception model; then, within the target time period indicated by the preset script, it imports at least one preset adversarial pattern into the effective range of the at least one target object to obtain at least one adversarial sample; finally, it inputs the at least one adversarial sample into the vehicle perception model to obtain a recognition result. At this point, because the adversarial pattern is attached to the target object, the target object is invisible to the vehicle perception model, and the recognition result is that there is no vehicle ahead. The recognition result is used to control the virtual controller to generate a driving instruction for the target vehicle, and it indicates that a first confidence is higher than a second confidence, where the first confidence is the confidence with which the vehicle perception model recognizes the adversarial sample as not being the target object, and the second confidence is the confidence with which it recognizes the adversarial sample as being the target object.
In this embodiment, the target vehicle drives autonomously in the driving scenario simulation platform under the vehicle perception model, which generates driving instructions for the target vehicle according to the road scene it perceives. The main perception tasks of the vehicle perception model in this embodiment are shown in FIG. 1b: the model fuses a visual model and a point cloud model. The visual model identifies 2D information of the driving scene, such as the color and type of a target object, while the point cloud model mainly identifies 3D position information, such as the distance to the target object. Objects recognizable by the visual model include traffic lights, lane lines, traffic signs, falling rocks, pedestrians, vehicles, and the like; objects recognizable by the point cloud model include falling rocks, pedestrians, vehicles, and the like, for which it provides more accurate spatial information. Adversarial methods are designed against these perceived objects: for example, various 2D and 3D adversarial-sample attack methods are provided for different roadside and road scenes, and adversarial samples are generated dynamically in the driving scenario simulation platform to deceive the vehicle perception model, so that it mistakes the adversarial samples for real results, causing driving decisions to be wrong or to fail.
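As an illustration of the fused output described above, the following Python sketch shows one possible way a recognition result combining 2D labels from the visual model and 3D positions from the point cloud model could be represented; the data layout and the naive one-to-one association are assumptions, not details specified by this application:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Detection:
        label: str         # 2D class from the visual model, e.g. "vehicle" or "pedestrian"
        confidence: float  # classification confidence in [0, 1]
        position: Optional[Tuple[float, float, float]] = None  # 3D position from the point cloud model

    def fuse(visual: List[Detection],
             cloud_positions: List[Tuple[float, float, float]]) -> List[Detection]:
        # Naive fusion: attach the i-th point cloud position to the i-th visual detection.
        # A real perception stack would associate detections by projection or IoU matching.
        return [Detection(d.label, d.confidence, p) for d, p in zip(visual, cloud_positions)]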
The technical solution of the present application will be described in detail with reference to several embodiments.
Referring to FIG. 2, a model robustness detection method provided in an embodiment of the present application is described below. The embodiment includes:
201. The driving scenario simulation platform determines at least one target object currently to be attacked in the driving scenario simulation platform.
In some embodiments, before the driving scenario simulation platform determines the target object currently to be attacked, it first receives a preset script sent by a user. The preset script specifies the route the target vehicle travels, the target objects to be attacked, the time (target time period) at which the corresponding adversarial pattern is imported onto each target object, and the storage path of the adversarial pattern to be imported. The target object in this embodiment is an object that the vehicle perception model can perceive while the target vehicle is running. A minimal sketch of what such a preset script might contain is shown below.
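For illustration only, the preset script might be organized as follows; all field names and values here are hypothetical, since this application does not prescribe a concrete script format:

    # Hypothetical preset-script structure; field names are illustrative only.
    preset_script = {
        "route": ["waypoint_A", "waypoint_B", "waypoint_C"],  # route the target vehicle travels
        "attacks": [
            {
                "target_object": "truck_01",                   # target object to be attacked
                "target_period": (12.0, 30.0),                 # seconds into the run when the pattern is imported
                "pattern_path": "/patterns/truck_adv_01.png",  # storage path of the adversarial pattern
            },
        ],
    }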
In some embodiments, according to the instruction of the preset script, the at least one target object currently to be attacked comprises all target objects used to attack the target vehicle over its entire run in the driving scenario simulation platform. In this case, the adversarial patterns of the target objects along the target vehicle's route can be set in advance when the simulation scene of the driving scenario simulation platform is constructed.
In other embodiments, the driving scenario simulation platform includes a plurality of scene interfaces, and according to the instruction of the preset script, the at least one target object currently to be attacked is a target object that is used to attack the target vehicle and appears in the scene interface currently displayed by the platform. In this case, the adversarial pattern of the target object can be imported dynamically, in real time, while the target vehicle is traveling.
In this embodiment, a plurality of playback time periods are set in the preset script, and different scene interfaces are played in the driving scenario simulation platform during different playback time periods. The playback time periods include the target time periods, which are the time periods in which adversarial patterns need to be imported.
The target object in this embodiment may be a traffic light, a lane line, a traffic sign, a falling rock, a pedestrian, another vehicle, a traffic cone, or the like in the driving scenario simulation platform.
202. Within the target time period indicated by the preset script, the driving scenario simulation platform imports at least one preset adversarial pattern into the effective range of the at least one target object to obtain at least one adversarial sample.
In some embodiments, at least one target time period is set in the preset script, and the target vehicle drives in different scenes during different target time periods; that is, different target time periods correspond to different driving scenes.
In some embodiments, the manner of importing the adversarial pattern into the driving scenario simulation platform includes at least one of: import through a program interface, projection by a virtual projection device, and automated icon dragging.
When the adversarial pattern is imported through the program interface, the preset script holds, for the target object, the storage path of the corresponding adversarial pattern and the effective range of the target object. Specifically, when the current playback time period is a target time period, an adversarial pattern needs to be added to the target object in the current scene interface: the storage path of the designated pattern within the target object's effective range in the preset script is replaced with the storage path of the corresponding adversarial pattern (the designated pattern being the pattern to be replaced), and the current scene interface is then reloaded and rendered, so that the adversarial pattern is added and an adversarial sample is obtained. Importing the adversarial pattern through the program interface is fast and therefore speeds up the generation of adversarial samples.
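A minimal sketch of this path-replacement step, assuming the scene description is a JSON file with "objects" and "texture_url" fields (both assumed names) and that the platform exposes some reload call:

    import json

    def import_pattern_via_api(scene_file: str, target_id: str, adv_pattern_path: str) -> None:
        # Swap the designated pattern path of the target object for the adversarial one.
        with open(scene_file, "r", encoding="utf-8") as f:
            scene = json.load(f)
        for obj in scene["objects"]:
            if obj["id"] == target_id:
                obj["texture_url"] = adv_pattern_path
        with open(scene_file, "w", encoding="utf-8") as f:
            json.dump(scene, f)
        # reload_and_render(scene_file)  # hypothetical platform call that re-renders the scene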
When the adversarial pattern is imported through projection by a virtual projection device, the driving scenario simulation platform includes a first scene interface in which the target vehicle travels within the target time period, and the adversarial sample is obtained as follows: a virtual projection device preset for the target time period is displayed in the first scene interface, and the at least one adversarial pattern is projected through the virtual projection device into the effective range of the at least one target object, yielding at least one adversarial sample. Specifically, a virtual projection device is preset in the driving scenario simulation platform and displayed in the first scene interface during the target time period; the adversarial pattern is then projected into the effective range of the target object in the first scene interface. The virtual projection device may be a virtual unmanned aerial vehicle (drone) projection device; an interface diagram of importing an adversarial pattern by virtual drone projection is shown in FIG. 3a. Projection by a virtual projection device increases the diversity of ways to import adversarial patterns.
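The following sketch illustrates, under assumed APIs, how such a projection step might look; the VirtualProjector class and its methods are stand-ins invented for illustration:

    class VirtualProjector:
        # Hypothetical stand-in for the platform's virtual (drone) projection device.
        def move_to(self, x: float, y: float, z: float) -> None:
            print(f"projector hovering at ({x}, {y}, {z})")

        def project(self, pattern_path: str, region: tuple) -> None:
            print(f"projecting {pattern_path} onto region {region}")

    def project_pattern(projector: VirtualProjector, pattern_path: str,
                        region: tuple, hover_height: float = 5.0) -> None:
        x, y, z, width, depth = region             # effective range of the target object (assumed layout)
        projector.move_to(x, y, z + hover_height)  # hover above the target, as in FIG. 3a
        projector.project(pattern_path, region)    # render the pattern within the effective range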
When the adversarial pattern is imported through automated icon dragging, the display interface of the driving scenario simulation platform comprises a first area and a second area: the first area displays a plurality of candidate adversarial patterns, and the second area displays the picture of the current target driving scene. An automated drag instruction is set in the preset script. When the driving scenario simulation platform plays the picture of the target driving scene corresponding to the target time period, the automated drag instruction is triggered, the adversarial pattern indicated by the instruction is selected from the candidate patterns in the first area and automatically dragged into the effective range of the target object, and the adversarial sample is obtained. An interface diagram of importing an adversarial pattern by automated icon dragging is shown in FIG. 3b. Displaying and importing the adversarial pattern by automated icon dragging presents the import process to the user visually and makes model robustness detection more engaging.
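A minimal sketch of the trigger logic, assuming the drag itself is performed by a UI-automation callback supplied by the platform (the callback and data layout are assumptions):

    def maybe_trigger_drag(now: float, target_period: tuple, pattern_id: str,
                           candidates: dict, drag) -> None:
        # When playback enters the target time period, pick the indicated pattern
        # from the candidate area (first area) and drag it onto the target object's
        # effective range in the scene picture (second area).
        start, end = target_period
        if start <= now <= end:
            pattern = candidates[pattern_id]
            drag(pattern, destination="effective_range")  # hypothetical UI-automation call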
It should be noted that the adversarial sample in this embodiment includes an image adversarial sample and a point cloud adversarial sample.
203. The driving scenario simulation platform inputs the at least one adversarial sample into the vehicle perception model.
In this embodiment, after the driving scenario simulation platform generates the adversarial sample, it sends the generated adversarial sample to the vehicle perception model.
Specifically, the adversarial sample is displayed in the current scene interface. While the target vehicle drives in the current scene interface, the image and point cloud data of the current scene interface of the driving scenario simulation platform are transmitted to the vehicle perception model through a port, so that the vehicle perception model acquires the adversarial sample; in this way, the at least one adversarial sample in the current scene interface is input into the vehicle perception model.
204. The vehicle perception model recognizes the at least one adversarial sample to obtain a recognition result and sends the recognition result to the virtual controller.
After acquiring an adversarial sample, the vehicle perception model recognizes it in real time and outputs a recognition result. The recognition result is used to control the virtual controller to generate a driving instruction for the target vehicle, and it indicates that a first confidence is higher than a second confidence, where the first confidence is the confidence with which the vehicle perception model recognizes the adversarial sample as not being the target object, and the second confidence is the confidence with which it recognizes the adversarial sample as being the target object.
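The confidence comparison above can be checked mechanically; the sketch below assumes the recognition result is a mapping from class labels to confidences, which is an illustrative structure rather than the one used by this application:

    def attack_succeeded(result: dict, target_label: str) -> bool:
        # The attack counts as effective when the confidence of some label other than
        # the target object (first confidence) exceeds the confidence of the target
        # object itself (second confidence).
        second_conf = result.get(target_label, 0.0)
        first_conf = max((c for label, c in result.items() if label != target_label),
                         default=0.0)
        return first_conf > second_conf

    # Example: attack_succeeded({"truck": 0.12, "background": 0.83}, "truck") -> True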
205. The virtual controller generates a driving instruction for the target vehicle according to the recognition result and controls the target vehicle in the driving scenario simulation platform according to the driving instruction.
Specifically, in this embodiment, after the recognition result is obtained by the vehicle perception model, it is sent to the virtual controller, which generates a driving instruction from the recognition result and uses it to control the target vehicle driving in the scene provided by the driving scenario simulation platform.
To provide an environmental basis for testing the natural robustness of the vehicle perception model and achieve more comprehensive robustness detection, the driving scenario simulation platform in this embodiment can simulate a variety of environments. A plurality of weather materials are preset in the platform, the target time period includes a first time period, and the adversarial pattern includes a first adversarial pattern; the adversarial sample is then obtained through the following steps:
a. and determining a target weather material corresponding to the first time period from the various weather materials.
The various weather materials comprise sunny weather materials (including sunny, cloudy and cloudy weather materials), cloudy weather materials, foggy weather materials (including heavy fog and light fog weather materials), rainy weather materials (including heavy rain, light rain and thunderstorm weather materials), snowy weather materials (including heavy snow, light snow and snowy weather materials) and the like.
In some embodiments, a plurality of run cycles are preset in the preset script, together with a correspondence between run-cycle number and weather material; the first time period is the running duration of a run cycle, that is, the running time period corresponding to that cycle. In this case, determining the target weather material corresponding to the first time period specifically involves: determining the cycle number of the current run cycle before the cycle starts to run, and then determining, from the plurality of weather materials, the target weather material corresponding to the current cycle number according to the correspondence between run-cycle number and weather material.
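A minimal sketch of this lookup, using the illustrative cycle-to-weather mapping described below (the wrap-around for looped playback is an assumption):

    # Example correspondence between run-cycle number and weather material.
    CYCLE_TO_WEATHER = {1: "sunny", 2: "overcast", 3: "foggy", 4: "rainy"}

    def weather_for_cycle(cycle_number: int) -> str:
        # Wrap around so the scenario can loop indefinitely over the preset cycles.
        keys = sorted(CYCLE_TO_WEATHER)
        return CYCLE_TO_WEATHER[keys[(cycle_number - 1) % len(keys)]]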
In this embodiment, the driving scenario simulation platform can play the driving scene in a loop, and different weather materials are set for different playback cycles.
The correspondence between run cycles and weather materials may be preset in the driving scenario simulation platform or set by the user in the preset script. For example, the weather material for the first run cycle may be a sunny material, that for the second cycle an overcast material, that for the third cycle a foggy material, that for the fourth cycle a rainy material, and so on.
In this embodiment, when the current run cycle is the fourth cycle, the corresponding target weather material is a rainy material.
b. Switch the current scene interface of the target vehicle, within the first time period, to the target weather scene interface corresponding to the target weather material.
Specifically, if the target weather material is a rainy material, then within the first time period of the current run cycle the current scene interface is switched to the target weather scene interface corresponding to the rainy material, and that interface includes the at least one target object. The target weather scene interface corresponding to the rainy material is shown in FIG. 4a.
c. Select at least one first adversarial pattern matching the target weather material according to the characteristics of the target weather material.
In some embodiments, an adversarial pattern set is preset in the model robustness detection system, containing adversarial patterns respectively matched to the various weather materials. The characteristics of a target weather material may be its brightness and texture features: for example, overcast and rainy scenes are darker while sunny scenes are brighter, rain contributes rain-streak textures, and snow contributes snowflake textures.
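A sketch of one way such matching might be implemented, ranking candidate patterns by closeness of a stored brightness feature to the weather material's brightness (the single-feature matching and the data layout are simplifying assumptions):

    def select_patterns(pattern_set: list, weather: dict, k: int = 1) -> list:
        # Rank candidate adversarial patterns by how closely their brightness matches
        # the target weather material; texture features could be compared analogously.
        ranked = sorted(pattern_set,
                        key=lambda p: abs(p["brightness"] - weather["brightness"]))
        return ranked[:k]

    # Example:
    # select_patterns([{"id": "p1", "brightness": 0.3}, {"id": "p2", "brightness": 0.8}],
    #                 {"brightness": 0.35, "texture": "rain"})  -> [{"id": "p1", ...}]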
d. Add the at least one first adversarial pattern to the effective range of the at least one target object to obtain at least one first adversarial sample.
In this embodiment, after the first adversarial pattern is determined, it is added to the effective range of the corresponding target object to obtain a first adversarial sample; the scene interface with the first adversarial pattern added is shown in FIG. 4b.
The first adversarial sample is then input into the vehicle perception model, yielding a recognition result of not being the target object. Suppose the target object in the current scene interface is a vehicle driving at constant speed ahead of the target vehicle (hereinafter, the front vehicle). The driving scenario simulation platform transmits the image and point cloud data of the current display interface corresponding to the rainy weather material (including the first adversarial sample) to the vehicle perception model through a port, so that the model acquires the first adversarial sample and produces a recognition result of not being the target object. Because the adversarial pattern has been added to the target object, the front vehicle is hidden from the target vehicle, and the target vehicle's perception model does not detect it. As shown in FIG. 4c, the front vehicle drives at constant speed ahead of the target vehicle, and the target vehicle has a preset acceleration instruction in the target time period; the target vehicle checks through its perception model whether the current driving environment is suitable for acceleration, fails to recognize the front vehicle, decides that the environment is suitable, accelerates, and rear-ends the front vehicle.
In some embodiments, after the recognition result of not being the target object is obtained, the method further includes: taking the first adversarial sample corresponding to that result as an effective adversarial sample.
In some embodiments, the target object is a first vehicle and the target time period includes a second time period; the step of obtaining the adversarial sample includes: acquiring the second adversarial pattern, preset for the second time period, corresponding to the first vehicle; and adding the second adversarial pattern to the effective range of the first vehicle to obtain a second adversarial sample.
Specifically, when the driving scenario simulation platform plays the scene interface corresponding to the second time period, the second adversarial pattern corresponding to the first vehicle in that interface is acquired and added to the effective range of the first vehicle, yielding the second adversarial sample.
In this case, inputting the at least one adversarial sample into the vehicle perception model to obtain a recognition result includes: inputting the second adversarial sample into the vehicle perception model to obtain a recognition result of a second vehicle, where the vehicle type of the second vehicle differs from that of the first vehicle. For example, the first vehicle is a large truck and the second vehicle is a car; because the second adversarial pattern is added to the first vehicle, the vehicle perception model recognizes the first vehicle as the second vehicle. When the target vehicle then passes the first vehicle, having recognized the large truck as a car, a sideswipe occurs, as shown in FIG. 5.
When the second adversarial sample is input into the vehicle perception model and yields the recognition result of the second vehicle, with the second vehicle's type differing from the first vehicle's, the second adversarial sample corresponding to that recognition result is taken as an effective adversarial sample.
In some embodiments, the target object is a pedestrian, the target time period includes a third time period, and the adversarial pattern includes a third adversarial pattern. Obtaining the adversarial sample then includes: when a specific road section (such as one with a zebra crossing) or a traffic sign in the driving scenario simulation platform is in a preset state (such as a sign warning of pedestrians crossing), acquiring the third adversarial pattern, preset for the third time period, corresponding to the pedestrian; and adding the third adversarial pattern to the effective range of the pedestrian to obtain a third adversarial sample.
Specifically, in some embodiments, the third adversarial pattern may be the pedestrian's clothing, in which case adding the third adversarial pattern to the pedestrian's effective range means replacing the pedestrian's clothing with the third adversarial pattern. The replacement works as follows: the code queries the pedestrian class objects to obtain a set A of all pedestrian objects, where each pedestrian object carries attributes such as the pedestrian's position coordinates in the virtual scene of the driving scenario simulation platform, an activity route, a clothing Uniform Resource Locator (URL) address, and a model URL address; the clothing URL path of the pedestrian object corresponding to the target object is changed, the original clothing address is replaced with the adversarial clothing address (the third adversarial pattern), and the scene is reloaded and rendered, thereby replacing the clothing of the target pedestrian. When the target vehicle is about to cross the zebra crossing, the third adversarial sample is input into the vehicle perception model and yields a recognition result of no pedestrian, whereupon the target vehicle encounters the following situations:
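A minimal sketch of this clothing replacement over set A; the attribute names follow the description above but are otherwise assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class Pedestrian:
        pid: str                                    # identifier of the pedestrian object
        position: tuple                             # position coordinates in the virtual scene
        route: list = field(default_factory=list)   # activity route
        clothing_url: str = ""                      # clothing URL address
        model_url: str = ""                         # model URL address

    def replace_clothing(set_a: list, target_pid: str, adv_clothing_url: str) -> None:
        # Replace the clothing URL of the targeted pedestrian object in set A with
        # the adversarial clothing address (the third adversarial pattern).
        for p in set_a:
            if p.pid == target_pid:
                p.clothing_url = adv_clothing_url
        # reload_and_render_scene()  # hypothetical platform call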
First, as shown in fig. 6a, the target vehicle, detecting no pedestrian, passes the zebra crossing at its normal speed without decelerating, knocks the pedestrian over, and an accident occurs; second, as shown in fig. 6b, the target vehicle, detecting no pedestrian, passes the zebra crossing at its normal speed without decelerating and narrowly misses the pedestrian, failing to yield to the pedestrian and violating the crossing rules.
When the third confrontation sample is input into the vehicle perception model and a no-pedestrian recognition result is obtained, the third confrontation sample corresponding to that recognition result is taken as an effective confrontation sample.
In this embodiment, the pedestrian objects in the driving scene simulation platform are all instances of a pedestrian class; each pedestrian object includes attributes such as the pedestrian's position coordinates in the virtual scene, activity route, clothing pattern URL address, and model URL address, and a plurality of pedestrian objects form one pedestrian set. The code can change information such as the position and clothing of the corresponding pedestrian in the virtual scene simply by changing the attributes of the pedestrian object. Likewise, to add a pedestrian class object, only the attributes of the pedestrian to be added need to be appended to set A, where the attributes include the added pedestrian's position coordinates in the virtual scene, activity route, clothing pattern URL address, model URL address, and the like, as sketched below.
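For concreteness, the following Python sketch mirrors the pedestrian-object operations described above; the attribute names, the representation of set A as a list, and the reload_scene hook are assumptions of this illustration rather than the platform's documented interface.

from dataclasses import dataclass

@dataclass
class Pedestrian:
    position: tuple    # position coordinates in the virtual scene
    route: list        # activity (walking) route waypoints
    clothing_url: str  # clothing pattern URL address
    model_url: str     # pedestrian model URL address

def replace_clothing(set_a, target, adversarial_url, reload_scene):
    """Change the clothing URL path of the pedestrian object corresponding
    to the target object, covering the original clothing address with the
    countermeasure clothing address, then reload the rendered scene."""
    for pedestrian in set_a:
        if pedestrian is target:
            pedestrian.clothing_url = adversarial_url
    reload_scene()

def add_pedestrian(set_a, position, route, clothing_url, model_url):
    """Add a pedestrian class object by appending its attributes to set A."""
    set_a.append(Pedestrian(position, route, clothing_url, model_url))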
In some embodiments, the target object is a traffic light, the target time period comprises a fourth time period, and the target countermeasure pattern comprises a fourth countermeasure pattern; the scene interface corresponding to the fourth time period contains a traffic light, and obtaining the target countermeasure sample comprises the following steps: acquiring the fourth countermeasure pattern corresponding to the traffic light and preset for the fourth time period; and adding the fourth countermeasure pattern into the effective range of the traffic light to obtain a fourth countermeasure sample.
Specifically, in some embodiments, the fourth countermeasure pattern may be a map applied to the surface of a traffic light that is currently showing red. Adding the fourth countermeasure pattern into the effective range of the traffic light, that is, applying the map to the traffic light, proceeds as follows: the object set B of all traffic lights is queried, where each traffic light object in set B includes attributes such as the traffic light's position coordinates in the scene, traffic light model URL address, and material map URL; the traffic light's material map URL path is replaced, so that the URL path of the fourth countermeasure pattern covers the URL path of the normal-sample material map; and the rendered scene is reloaded, after which the red light contains the countermeasure pattern. When the target vehicle is about to drive through an intersection whose light is red, the fourth countermeasure sample is input into the vehicle perception model and a recognition result of a green traffic light is obtained, as shown in fig. 7; because the target vehicle identifies the red light as a green light, it passes through the intersection without decelerating, running the red light.
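The map replacement can be pictured with the following hedged Python sketch; the object set B, the attribute names, and the reload_scene hook are likewise assumptions of the illustration.

def apply_fourth_pattern(set_b, target_light, pattern_url, reload_scene):
    """Replace the material map URL path of the target traffic light so
    that the URL path of the fourth countermeasure pattern covers the
    normal-sample material map URL path, then re-render the scene."""
    for light in set_b:
        if light is target_light:
            light.material_map_url = pattern_url  # countermeasure map URL
    reload_scene()  # after reloading, the red light contains the pattern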
In some embodiments, the target object may also be a lane line, a traffic sign, a falling rock, a traffic cone, or the like; the steps for obtaining countermeasure samples for these objects are similar to the steps for obtaining the countermeasure sample for the traffic light object and are not repeated here. To detect the robustness of the vehicle perception model more comprehensively, the driving scenario simulation platform at least further includes the following scenarios. For example, if the target object is a lane line, a countermeasure pattern is introduced onto a straight lane line, and the vehicle perception model in the target vehicle recognizes the straight lane line as a left-turn lane line, causing the target vehicle to take a wrong route. For example, if the target object is a 'slow down' traffic sign and the target vehicle is currently (in the target time period) traveling on a mountain-road curve, a countermeasure pattern is introduced onto the sign, the vehicle perception model in the target vehicle recognizes it as an 'end of 40 km/h speed limit' sign, and the target vehicle accelerates, fails to steer in time on the curve, falls off the cliff, and a traffic accident occurs. If the target object is a falling rock located ahead on the target vehicle's road, a countermeasure pattern is introduced onto the rock, the target vehicle recognizes the rock as a flat pattern on the ground, and it drives straight into the rock, causing an accident.
The driving scene simulation platform in this embodiment can simulate a variety of scenes, including expressways, rural roads, urban roads, commercial streets, and mountain roads, together with the flows of people and vehicles modeled for each scene; the people include pedestrians crossing a zebra crossing, people walking on the roadside, and the like, and the vehicles include vehicles meeting the target vehicle, vehicles traveling ahead of the target vehicle, broken-down vehicles ahead of the target vehicle, vehicles traveling alongside the target vehicle, and the like.
In addition, at least one target object is preset in the driving scene simulation platform; in use, one or more target objects can be added or deleted according to actual test requirements. When a target object is to be added, an add instruction for the target object input by a user is received, and a target object is added to the driving scene simulation platform according to the add instruction. Specifically, the add instruction carries the attributes of the target object, including the target object's position coordinates in the virtual scene, the target object model URL address, and the like; if the target object is a pedestrian or a vehicle, the attributes further include the target object's walking/driving path. A minimal sketch of handling such an instruction follows.
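The following minimal sketch assumes the add instruction arrives as a Python dict; all field names are illustrative only.

def handle_add_instruction(platform_objects, instruction):
    """Add a target object to the driving scene simulation platform
    according to the attributes carried by the user's add instruction."""
    target_object = {
        "position": instruction["position"],    # coordinates in the virtual scene
        "model_url": instruction["model_url"],  # target object model URL address
    }
    # Pedestrians and vehicles additionally carry a walking/driving path.
    if instruction.get("type") in ("pedestrian", "vehicle"):
        target_object["path"] = instruction["path"]
    platform_objects.append(target_object)
    return target_object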
In summary, in the present disclosure, the target vehicle travels in the driving scene simulation platform according to the preset script, at least one target object is arranged in the driving scene simulation platform, and within the target time period indicated by the preset script, a target countermeasure pattern is guided into the effective range of the corresponding target object to obtain a target countermeasure sample; the target countermeasure sample is then input into the vehicle perception model, thereby implementing an attack on the model.
Fig. 8 is a schematic block diagram of a model robustness detection apparatus according to an embodiment of the present application. As shown in fig. 8, the present application further provides a model robustness detection apparatus 800 corresponding to the above model robustness detection method. The model robustness detection apparatus 800 includes units for performing the above model robustness detection method and may be configured in a computer device. Specifically, the apparatus is configured in a model robustness detection system in the computer device; the model robustness detection system includes a driving scenario simulation platform, a vehicle perception model, and a virtual controller, the driving scenario simulation platform includes a target vehicle and at least one target object, the target vehicle runs according to a preset script, and images and point cloud data of the target vehicle on the driving scenario simulation platform are transmitted to the vehicle perception model through a port. The model robustness detection apparatus 800 includes a transceiver module 801 and a processing module 802, wherein:
a transceiver module 801, configured to acquire the preset script;
a processing module 802, configured to: determine at least one target object to be attacked currently in the driving scene simulation platform, where the target object is an object perceived by the vehicle perception model; guide at least one preset target confrontation pattern into an effective range of the at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample; and input the at least one target confrontation sample into the vehicle perception model to obtain a recognition result, where the recognition result is used for controlling the virtual controller to generate a driving instruction of the target vehicle, the recognition result indicates that a first confidence degree is higher than a second confidence degree, the first confidence degree is the confidence degree when the vehicle perception model recognizes the target confrontation sample as a non-target object, and the second confidence degree is the confidence degree when the vehicle perception model recognizes the target confrontation sample as the target object.
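The success criterion applied by the processing module 802 can be pictured with the following hedged sketch; the per-class confidence dictionary is an assumed shape for the recognition result, not the model's actual output format.

def attack_succeeded(confidences: dict, target_label: str) -> bool:
    """Check whether the recognition result indicates that the first
    confidence (best non-target-object class) is higher than the second
    confidence (the target-object class)."""
    second_confidence = confidences.get(target_label, 0.0)
    first_confidence = max(
        (conf for label, conf in confidences.items() if label != target_label),
        default=0.0,
    )
    return first_confidence > second_confidence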
In some embodiments, the driving scenario simulation platform presets a plurality of weather materials, the target time period includes a first time period, and the target countermeasure pattern includes a first countermeasure pattern; when executing the step of guiding at least one preset target confrontation pattern into the effective range of at least one target object within the target time period indicated by the preset script to obtain at least one target confrontation sample, the processing module 802 is specifically configured to:
determining a target weather material corresponding to the first time period from the multiple weather materials;
switching a current scene interface of the target vehicle into a target weather scene interface corresponding to the target weather material in the first time period, wherein the target weather scene interface comprises at least one target object;
selecting at least one first countermeasure pattern matched with the target weather material according to the characteristics of the target weather material;
adding the at least one first countermeasure pattern into the effective range of the at least one target object to obtain at least one first countermeasure sample;
the inputting of the at least one target confrontation sample into the vehicle perception model to obtain a recognition result then comprises:
inputting the first countermeasure sample into the vehicle perception model to obtain a recognition result of a non-target object.
In some embodiments, the preset script is preset with a plurality of running cycles, and a correspondence between the running cycle number and the weather material is set in the preset script; the duration of the first time period is the running duration of one running cycle, and the first time period is the running time period corresponding to the current running cycle. When executing the step of determining the target weather material corresponding to the first time period from the multiple weather materials, the processing module 802 is specifically configured to:
determining the current cycle number of the current operation cycle before the current cycle starts to operate;
and determining a target weather material corresponding to the current cycle number from the plurality of weather materials according to the corresponding relation between the running cycle number and the weather materials.
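For illustration, the cycle-to-weather lookup might take the following shape; the correspondence table values below are assumed examples, not values taken from this application.

CYCLE_TO_WEATHER = {1: "sunny", 2: "rain", 3: "fog", 4: "snow"}  # assumed mapping

def target_weather_material(current_cycle_number: int) -> str:
    """Determine the target weather material for the current running cycle
    from the preset correspondence between cycle number and weather
    material, wrapping around when the cycle count exceeds the table."""
    keys = sorted(CYCLE_TO_WEATHER)
    return CYCLE_TO_WEATHER[keys[(current_cycle_number - 1) % len(keys)]]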
In some embodiments, the target object is a first vehicle, the target time period comprises a second time period, the target countermeasure pattern comprises a second countermeasure pattern; when the step of introducing at least one preset target confrontation pattern into the effective range of at least one target object within the target period indicated by the preset script to obtain at least one target confrontation sample is executed by the processing module 802, the processing module is specifically configured to:
acquiring the second countermeasure pattern corresponding to the first vehicle and preset for the second time period;
adding the second countermeasure pattern into the effective range of the first vehicle to obtain a second countermeasure sample;
the inputting of the at least one target confrontation sample into the vehicle perception model to obtain a recognition result then comprises:
inputting the second countermeasure sample into the vehicle perception model to obtain a recognition result of a second vehicle, where the vehicle type of the second vehicle is different from that of the first vehicle.
In some embodiments, the target object is a pedestrian, the target time period comprises a third time period, and the target confrontation pattern comprises a third confrontation pattern; when executing the step of guiding at least one preset target confrontation pattern into the effective range of at least one target object within the target time period indicated by the preset script to obtain at least one target confrontation sample, the processing module 802 is specifically configured to:
when a specific road section or a traffic sign in the driving scene simulation platform is in a preset state, acquiring a third confrontation pattern corresponding to the pedestrian, which is preset in the third time period;
adding the third confrontation pattern into the effective range of the pedestrian to obtain a third confrontation sample;
the inputting of the at least one target confrontation sample into the vehicle perception model to obtain a recognition result then comprises:
and inputting the third confrontation sample into the vehicle perception model to obtain a recognition result without pedestrians.
In some embodiments, the driving scenario simulation platform includes a first scenario interface where the target vehicle travels within a target time period; when the step of introducing at least one preset target confrontation pattern into the effective range of at least one target object within the target period indicated by the preset script to obtain at least one target confrontation sample is executed by the processing module 802, the processing module is specifically configured to:
displaying virtual projection equipment preset in the target time period in the first scene interface;
and projecting at least one target confrontation pattern into the effective range of at least one target object through the virtual projection equipment to obtain at least one target confrontation sample.
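One way to picture the virtual projection device is the sketch below, which composites a target countermeasure pattern over the target object's region in a rendered frame; the NumPy image model and the alpha-blending scheme are illustrative choices only, not the platform's actual projection mechanism.

import numpy as np

def project_pattern(frame: np.ndarray, pattern: np.ndarray,
                    top_left: tuple, opacity: float = 0.8) -> np.ndarray:
    """Project the pattern onto the frame at the given (x, y) position,
    blending with the underlying pixels as a light projection would."""
    x, y = top_left
    h, w = pattern.shape[:2]
    out = frame.astype(np.float32)
    out[y:y + h, x:x + w] = (
        (1.0 - opacity) * out[y:y + h, x:x + w] + opacity * pattern
    )
    return out.astype(frame.dtype)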
In some embodiments, the transceiver module 801 is further configured to: receive an add instruction for a target object;
the processing module 802 is further configured to add a target object to the driving scene simulation platform according to the add instruction.
In summary, the model robustness detection apparatus 800 in this embodiment controls a target vehicle to travel in the driving scene simulation platform according to a preset script, at least one target object is arranged in the driving scene simulation platform, and within the target time period indicated by the preset script, a target countermeasure pattern is guided into the effective range of the corresponding target object to obtain a target countermeasure sample; the target countermeasure sample is input into the vehicle perception model, so that an attack on the model can be implemented.
It should be noted that, as can be clearly understood by those skilled in the art, the specific implementation processes of the model robustness detection apparatus and each unit may refer to the corresponding descriptions in the foregoing method embodiments, and for convenience and conciseness of description, details are not repeated herein.
The model robustness detecting device in the embodiment of the present application is described above from the perspective of a modular functional entity, and the model robustness detecting device in the embodiment of the present application is described below from the perspective of hardware processing.
It should be noted that, in the embodiments of the present application (including the embodiment shown in fig. 8), the entity device corresponding to the transceiver module may be a transceiver or an input/output interface, and the entity device corresponding to the processing module may be a processor. When the apparatus shown in fig. 8 has the structure shown in fig. 9, the processor in fig. 9 implements functions the same as or similar to those of the processing module provided by the corresponding apparatus embodiment, the transceiver in fig. 9 implements functions the same as or similar to those of the transceiver module, and the memory in fig. 9 stores the computer program that the processor calls when executing the model robustness detection method.
As shown in fig. 10, for convenience of description, only the portions related to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sales (POS) terminal, a vehicle-mounted computer, and the like. The following description takes a mobile phone as an example:
fig. 10 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 10, the handset includes: radio Frequency (RF) circuit 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuit 1060, wireless fidelity (Wi-Fi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each constituent component of the mobile phone with reference to fig. 10:
RF circuit 1010 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it passes the information to processor 1080 for processing, and it transmits uplink data to the base station. In general, RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1010 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and the like.
The memory 1020 can be used for storing software programs and modules, and the processor 1080 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. Further, the memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input unit 1030 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near the touch panel 1031 using any suitable object or accessory such as a finger, a stylus, etc.) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1031 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1031 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1030 may include other input devices 1032 in addition to the touch panel 1031. In particular, other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a track ball, a mouse, a joystick, and the like.
The display unit 1040 may be used to display information input by a user or information provided to the user and various menus of the cellular phone. The Display unit 1040 may include a Display panel 1041, and optionally, the Display panel 1041 may be configured by using a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), and the like. Further, the touch panel 1031 can cover the display panel 1041, and when the touch panel 1031 detects a touch operation on or near the touch panel 1031, the touch operation is transmitted to the processor 1080 to determine the type of the touch event, and then the processor 1080 provides a corresponding visual output on the display panel 1041 according to the type of the touch event. Although in fig. 10, the touch panel 1031 and the display panel 1041 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1050, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1060, speaker 1061, and microphone 1062 may provide an audio interface between the user and the handset. The audio circuit 1060 can convert received audio data into an electrical signal and transmit it to the speaker 1061, where it is converted into a sound signal and output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data; after the audio data is output to processor 1080 for processing, it is sent via the RF circuit 1010 to, for example, another mobile phone, or output to the memory 1020 for further processing.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 1070, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 10 shows the Wi-Fi module 1070, it is understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the application.
The processor 1080 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1020 and calling data stored in the memory 1020, thereby integrally monitoring the mobile phone. Optionally, processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1080.
The handset also includes a power supply 1090 (e.g., a battery) for powering the various components, which may be logically coupled to processor 1080 via a power management system, thereby providing management of charging, discharging, and power consumption via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present application, the processor 1080 included in the handset also controls execution of the flow of the model robustness detection method described above with reference to fig. 2.
Fig. 11 is a schematic diagram of a server 1120 according to an embodiment of the present disclosure. The server 1120 may vary considerably in configuration or performance and may include one or more Central Processing Units (CPUs) 1122 (e.g., one or more processors), a memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) storing application programs 1142 or data 1144. The memory 1132 and the storage medium 1130 may provide transient storage or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processor 1122 may be configured to communicate with the storage medium 1130 and to execute, on the server 1120, the series of instruction operations stored in the storage medium 1130.
The server 1120 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments, for example the steps of the server shown in fig. 2, may be based on the structure of the server 1120 shown in fig. 11. For example, the processor 1122, by calling instructions in the memory 1132, performs the following operations:
determining at least one target object to be attacked currently in the driving scene simulation platform, wherein the target object is an object sensed by the vehicle sensing model;
guiding at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample;
inputting at least one target confrontation sample into the vehicle perception model to obtain a recognition result, wherein the recognition result is used for controlling the virtual controller to generate a driving instruction of the target vehicle, the recognition result indicates that a first confidence degree is higher than a second confidence degree, the first confidence degree is a confidence degree when the vehicle perception model recognizes the target confrontation sample as a non-target object, and the second confidence degree is a confidence degree when the vehicle perception model recognizes the target confrontation sample as the target object.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer-readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program is loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The technical solutions provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the embodiments of the present application, and the descriptions of the embodiments are only intended to help understand the methods and core ideas of the embodiments of the present application. Meanwhile, a person skilled in the art may, following the ideas of the embodiments of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the embodiments of the present application.

Claims (10)

1. A model robustness detection method is applied to a model robustness detection system, the model robustness detection system comprises a driving scene simulation platform, a vehicle perception model and a virtual controller, the driving scene simulation platform comprises a target vehicle and at least one target object, the target vehicle is in a running state according to a preset script, and images and point cloud data of the target vehicle on the driving scene simulation platform are transmitted to the vehicle perception model through a port, and the method comprises the following steps:
determining at least one target object to be attacked currently in the driving scene simulation platform, wherein the target object is an object sensed by the vehicle sensing model;
guiding at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample;
inputting at least one target confrontation sample into the vehicle perception model to obtain a recognition result, wherein the recognition result is used for controlling the virtual controller to generate a driving instruction of the target vehicle, the recognition result indicates that a first confidence degree is higher than a second confidence degree, the first confidence degree is a confidence degree when the vehicle perception model recognizes the target confrontation sample as a non-target object, and the second confidence degree is a confidence degree when the vehicle perception model recognizes the target confrontation sample as the target object.
2. The method of claim 1, wherein the driving scenario simulation platform is preset with a plurality of weather materials, the target time period comprises a first time period, and the target confrontation pattern comprises a first confrontation pattern; the step of guiding at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample comprises the following steps:
determining a target weather material corresponding to the first time period from the multiple weather materials;
switching a current scene interface of the target vehicle into a target weather scene interface corresponding to the target weather material in the first time period, wherein the target weather scene interface comprises at least one target object;
selecting at least one first confrontation pattern matched with the target weather material according to the characteristics of the target weather material;
adding the at least one first confrontation pattern into the effective range of the at least one target object to obtain at least one first confrontation sample;
the inputting of the at least one target confrontation sample into the vehicle perception model to obtain a recognition result comprises:
inputting the first confrontation sample into the vehicle perception model to obtain a recognition result of a non-target object.
3. The method according to claim 2, wherein the preset script is preset with a plurality of running cycles, and the preset script is provided with a correspondence between the running cycle number and the weather material; the duration of the first time period is the running duration of one running cycle, and the first time period is the running time period corresponding to the current running cycle; the determining of the target weather material corresponding to the first time period from the multiple weather materials comprises:
determining the current cycle number of the current operation cycle before the current cycle starts to operate;
and determining a target weather material corresponding to the current cycle number from the plurality of weather materials according to the corresponding relation between the running cycle number and the weather materials.
4. The method of claim 1, wherein the target object is a first vehicle, the target time period comprises a second time period, and the target confrontation pattern comprises a second confrontation pattern; the step of guiding at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample comprises the following steps:
acquiring the second confrontation pattern corresponding to the first vehicle and preset for the second time period;
adding the second confrontation pattern into the effective range of the first vehicle to obtain a second confrontation sample;
the inputting of the at least one target confrontation sample into the vehicle perception model to obtain a recognition result comprises:
inputting the second confrontation sample into the vehicle perception model to obtain a recognition result of a second vehicle, wherein the vehicle type of the second vehicle is different from that of the first vehicle.
5. The method of claim 1, wherein the target object is a pedestrian, the target time period comprises a third time period, and the target confrontation pattern comprises a third confrontation pattern; the step of guiding at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample comprises the following steps:
when a specific road section or a traffic sign in the driving scene simulation platform is in a preset state, acquiring the third confrontation pattern corresponding to the pedestrian and preset for the third time period;
adding the third confrontation pattern into the effective range of the pedestrian to obtain a third confrontation sample;
the inputting of the at least one target confrontation sample into the vehicle perception model to obtain a recognition result comprises:
inputting the third confrontation sample into the vehicle perception model to obtain a no-pedestrian recognition result.
6. The method of any of claims 1-5, wherein the driving scenario simulation platform comprises a first scene interface in which the target vehicle travels within the target time period; the step of guiding at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample comprises:
displaying virtual projection equipment preset in the target time period in the first scene interface;
and projecting at least one target confrontation pattern into the effective range of at least one target object through the virtual projection equipment to obtain at least one target confrontation sample.
7. The method according to any one of claims 1 to 5, further comprising:
receiving an add instruction for a target object;
and adding a target object to the driving scene simulation platform according to the add instruction.
8. A model robustness detecting apparatus, configured in a model robustness detecting system, the model robustness detecting system including a driving scenario simulation platform, a vehicle perception model and a virtual controller, the driving scenario simulation platform including a target vehicle and at least one target object, the target vehicle being in a running state according to a preset scenario, and image and point cloud data of the target vehicle on the driving scenario simulation platform being transmitted to the vehicle perception model through a port, the apparatus comprising:
a transceiver module, configured to acquire the preset script;
a processing module, configured to: determine at least one target object to be attacked currently in the driving scene simulation platform, wherein the target object is an object perceived by the vehicle perception model; guide at least one preset target confrontation pattern into an effective range of at least one target object within a target time period indicated by the preset script to obtain at least one target confrontation sample; and input the at least one target confrontation sample into the vehicle perception model to obtain a recognition result, wherein the recognition result is used for controlling the virtual controller to generate a driving instruction of the target vehicle, the recognition result indicates that a first confidence degree is higher than a second confidence degree, the first confidence degree is the confidence degree when the vehicle perception model recognizes the target confrontation sample as a non-target object, and the second confidence degree is the confidence degree when the vehicle perception model recognizes the target confrontation sample as the target object.
9. A computer arrangement, characterized in that the computer arrangement comprises a memory, on which a computer program is stored, and a processor, which when executing the computer program, carries out the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions which, when executed by a processor, implement the method according to any one of claims 1-7.
CN202211231908.4A 2022-09-30 2022-09-30 Model robustness detection method, related device and storage medium Active CN115471495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211231908.4A CN115471495B (en) 2022-09-30 2022-09-30 Model robustness detection method, related device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211231908.4A CN115471495B (en) 2022-09-30 2022-09-30 Model robustness detection method, related device and storage medium

Publications (2)

Publication Number Publication Date
CN115471495A true CN115471495A (en) 2022-12-13
CN115471495B CN115471495B (en) 2024-02-13

Family

ID=84337301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211231908.4A Active CN115471495B (en) 2022-09-30 2022-09-30 Model robustness detection method, related device and storage medium

Country Status (1)

Country Link
CN (1) CN115471495B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190258251A1 (en) * 2017-11-10 2019-08-22 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
CN114463790A (en) * 2020-10-22 2022-05-10 上海思立微电子科技有限公司 Optical fingerprint identification and anti-counterfeiting method and system
WO2022141506A1 (en) * 2020-12-31 2022-07-07 华为技术有限公司 Method for constructing simulation scene, simulation method and device
CN114997393A (en) * 2021-03-01 2022-09-02 罗伯特·博世有限公司 Functional testing of movable objects using spatial representation learning and countermeasure generation
CN114368394A (en) * 2021-12-31 2022-04-19 北京瑞莱智慧科技有限公司 Method and device for attacking V2X equipment based on Internet of vehicles and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909020A (en) * 2022-09-30 2023-04-04 北京瑞莱智慧科技有限公司 Model robustness detection method, related device and storage medium
CN115909020B (en) * 2022-09-30 2024-01-09 北京瑞莱智慧科技有限公司 Model robustness detection method, related device and storage medium

Also Published As

Publication number Publication date
CN115471495B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
EP3944147A1 (en) Target detection method, model training method, device, apparatus and storage medium
CN109325967A (en) Method for tracking target, device, medium and equipment
CN110147705A (en) A kind of vehicle positioning method and electronic equipment of view-based access control model perception
CN104112213A (en) Method and apparatus of recommendation information
CN112802111B (en) Object model construction method and device
CN115588131B (en) Model robustness detection method, related device and storage medium
CN112203115B (en) Video identification method and related device
CN110443190A (en) A kind of object identifying method and device
CN112052778B (en) Traffic sign identification method and related device
CN112686197B (en) Data processing method and related device
CN112163280B (en) Method, device and equipment for simulating automatic driving scene and storage medium
CN116310745B (en) Image processing method, data processing method, related device and storage medium
CN115471495B (en) Model robustness detection method, related device and storage medium
CN115022098A (en) Artificial intelligence safety target range content recommendation method, device and storage medium
CN115526055B (en) Model robustness detection method, related device and storage medium
CN112435333B (en) Road scene generation method and related device
CN116071614A (en) Sample data processing method, related device and storage medium
CN115081643A (en) Countermeasure sample generation method, related device and storage medium
CN115623271A (en) Processing method of video to be injected and electronic equipment
CN113535055B (en) Method, equipment and storage medium for playing point-to-read based on virtual reality
CN115239941A (en) Confrontation image generation method, related device and storage medium
CN110795994B (en) Intersection image selection method and device
CN115984792B (en) Countermeasure test method, system and storage medium
CN115909020B (en) Model robustness detection method, related device and storage medium
CN113819913A (en) Path planning method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant