CN115984792B - Countermeasure test method, system and storage medium
- Publication number: CN115984792B
- Application number: CN202211656749.2A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y02T10/40: Engine management systems (under Y02T10/10, internal combustion engine [ICE] based vehicles; Y02T10/00, road transport of goods or passengers; Y02T, climate change mitigation technologies related to transportation)
Abstract
Embodiments of the present application relate to the field of artificial intelligence and provide a countermeasure test method, a countermeasure test system, and a storage medium. The method is applied to a countermeasure test system that comprises a simulation platform and an autonomous driving vehicle perception model, where the display interface of the simulation platform comprises a first display area and a second display area; the second display area displays a target vehicle driving according to a preset script, and the first display area presents a plurality of candidate countermeasure patterns. The method comprises the following steps: obtaining a countermeasure sample; capturing images of a simulation test scene containing the countermeasure sample from the on-board camera viewpoint to obtain a video stream, and transmitting the video stream to the autonomous driving vehicle perception model; and processing the video stream with the autonomous driving vehicle perception model to obtain a test result for the target vehicle. Obtaining the countermeasure sample comprises: acquiring, from a preset target object library, at least one target object required by the current test environment; and applying adversarial perturbation to the at least one target object to obtain a target countermeasure sample.
Description
Technical Field
Embodiments of the present application relate to the technical field of artificial intelligence, and in particular to a countermeasure test method, a countermeasure test system, and a storage medium.
Background
Autonomous driving means that a vehicle drives itself without manual operation, and it is the current trend in intelligent vehicle development: intelligent vehicles obtain autonomous driving functions by carrying advanced sensors and other devices and by applying new technologies such as artificial intelligence. Developing autonomous driving technology requires a wide range of performance tests, covering aspects such as recognizing and responding to speed-limit information and car-following behavior. As the AI algorithms involved in autonomous vehicles become increasingly diverse, safety problems in the environmental perception algorithms of autonomous vehicles keep emerging. For this reason, when testing the adversarial robustness of an autonomous vehicle, it is important to perform the countermeasure evaluation safely and efficiently.
In the related art, an attack image created on a digital image is printed directly as a physical object; the adversarially perturbed image information is then fed into the vehicle perception model through a camera in a real scene, interfering with the test scene in order to verify the adversarial robustness of the vehicle perception model. However, it is difficult to enumerate countermeasure samples exhaustively in a real environment, so the countermeasure test scenes are not sufficiently comprehensive. This increases the time cost of countermeasure testing of the vehicle perception model and thereby lengthens its iteration cycle, so the requirements of countermeasure evaluation cannot be satisfied.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present application provide a countermeasure test method, a countermeasure test system, and a storage medium that can reduce the time cost of countermeasure testing of a vehicle perception model and thereby shorten its iteration cycle.
The countermeasure test method is applied to a countermeasure test system comprising a simulation platform and an autonomous driving vehicle perception model, where the display interface of the simulation platform comprises a first display area and a second display area; the second display area displays a target vehicle driving according to a preset script, and the first display area presents a plurality of candidate countermeasure patterns. The method comprises the following steps:
Obtaining a countermeasure sample;
capturing images of a simulation test scene containing the countermeasure sample from the on-board camera viewpoint to obtain a video stream, and importing the video stream into the autonomous driving vehicle perception model through an automated interface;
processing the video stream with the autonomous driving vehicle perception model to obtain a test result for the target vehicle;
wherein obtaining the countermeasure sample comprises:
acquiring, from a preset target object library, at least one target object required by the current test environment; and
applying adversarial perturbation to the at least one target object to obtain a countermeasure sample.
In the above method, processing the video stream with the vehicle perception model comprises: the vehicle perception model identifies the countermeasure sample in the video stream to obtain a first confidence and a second confidence, and compares the first confidence with the second confidence to obtain a driving instruction for the target vehicle;
the first confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as a non-target object;
the second confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as the target object.
Further, acquiring the at least one target object required by the current test environment from the preset target object library includes: in response to a user instruction selecting a target object, selecting the target object in the second display area;
in response to a first operation instruction in which the user selects a countermeasure pattern, selecting the countermeasure pattern from the preset countermeasure pattern library; and adding an adversarial perturbation to the effective area of the target object using the countermeasure pattern to obtain the countermeasure sample.
In the above method, a correspondence is preset between the countermeasure pattern and the target object, and selecting the countermeasure pattern from the preset countermeasure pattern library in response to the first operation instruction includes selecting, in response to the first operation instruction, the determined countermeasure pattern from a plurality of countermeasure patterns having a correspondence with the target object.
Further, the method further includes: receiving a second operation instruction from the user; and, in response to the second operation instruction, selecting one driving scene from a plurality of candidate driving scenes in a driving scene database preset on the simulation platform, and loading and displaying the driving scene in the second display area.
In the above method, adding an adversarial perturbation to the effective area of the target object using the countermeasure pattern includes: perturbing the shape and/or the color of the target object.
Further, the method further includes: if the first confidence is not higher than the second confidence, adjusting the countermeasure pattern and perturbing the at least one target object in the simulation test scene again with the adjusted countermeasure pattern to obtain a countermeasure sample, until the first confidence is higher than the second confidence.
In the above method, the countermeasure pattern is adjusted according to a preset strategy that includes at least one of the following: adjustment frequency, attack effect of the countermeasure sample, adversarial attack scene, and adversarial attack type.
Further, the method further includes: materializing the target countermeasure sample and placing the materialized target countermeasure sample in a physical environment to obtain a test result of an autonomous vehicle equipped with the vehicle perception model in the physical environment; and comparing the autonomous-vehicle test result with the target-vehicle test result of the countermeasure test system.
In yet another embodiment of the above method, the method further includes: adding a target object to the target object library; and adding, for the added target object, a countermeasure pattern to the countermeasure pattern library.
Embodiments of the present application also provide a countermeasure test system, which includes a simulation platform and a vehicle perception model;
the display interface of the simulation platform includes a first display area and a second display area, where the first display area presents a plurality of candidate countermeasure patterns and the second display area displays a target vehicle driving according to a preset script;
a target object library for storing target objects;
a countermeasure sample generation unit for perturbing target objects in the target object library with a countermeasure pattern to obtain a countermeasure sample;
a first simulation processor for capturing images of a simulation test scene containing the countermeasure sample from the on-board camera viewpoint to obtain a video stream;
and the vehicle perception model for processing the video stream based on an autonomous driving algorithm to obtain a test result.
In the system, the vehicle perception model processes the video stream to obtain a first confidence and a second confidence; if the first confidence is not higher than the second confidence, the countermeasure pattern is adjusted and the at least one target object in the simulation test scene is perturbed again to obtain a countermeasure sample;
the first confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as a non-target object;
the second confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as the target object.
Embodiments of the present application also provide a non-transitory machine-readable storage medium having stored thereon executable code that, when executed by a processor of an electronic device, causes the processor to perform the countermeasure test method described above.
The technical solutions provided by the embodiments of the present application can have the following beneficial effects:
First, the embodiments of the present application provide a countermeasure test system. By interacting with the simulation platform in the system, a user can manually and intuitively add a countermeasure pattern to the effective area of an object to be attacked to obtain a target countermeasure sample; the system then automatically feeds the obtained target countermeasure sample into the autonomous driving vehicle perception model to run the countermeasure test against it. The user can generate a countermeasure sample through a simple interaction with the system (for example, selecting a target countermeasure pattern in the display interface and dragging it onto the object to be attacked), so the system is simple to operate.
Second, the technical solution of the embodiments runs the countermeasure test against the autonomous driving vehicle perception model entirely inside the countermeasure test system, simulating the physical-world test flow through a simulation scene. There is no need to test the perception model in the field, which ensures the safety of the countermeasure test; and because the target countermeasure sample is generated on the simulation platform, without photographing or printing countermeasure patterns, countermeasure samples can be generated continuously through user interaction while the vehicle is driving.
According to the embodiments of the present application, a countermeasure sample is obtained by selecting, on the simulation platform, a target object required by the current test environment from a preset target object library. The countermeasure sample can be adjusted dynamically in real time according to the current test environment, and the countermeasure test of the autonomous driving algorithm can be completed more comprehensively along every dimension. This solves the prior-art problem that autonomous driving countermeasure test scenes are not comprehensive enough and improves the efficiency of countermeasure detection.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the application.
Drawings
The foregoing and other objects, features, and advantages of embodiments of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings, in which like reference numbers generally represent like parts throughout the exemplary embodiments.
Fig. 1 is a schematic diagram of an application scenario of a countermeasure test method according to an embodiment of the present application;
Fig. 2 is a flow chart of a countermeasure test method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a specific application scenario of a countermeasure test method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of another specific application scenario of a countermeasure test method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a further specific application scenario of a countermeasure test method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an autonomous driving algorithm test system according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
Implementations of the embodiments of the present application are described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the embodiments to those skilled in the art.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used in embodiments of the present application to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of embodiments of the present application. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
The solutions of the embodiments of the present application can be implemented based on cloud technology, in particular on technologies such as artificial intelligence (AI), computer vision (CV), and machine learning (ML), as described in the following embodiments:
AI is a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
AI technology is a comprehensive discipline that covers a wide range of fields at both the hardware and software levels. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly covers computer vision, speech processing, natural language processing, and machine learning/deep learning.
CV is the science of how to make machines "see": using cameras and computers instead of human eyes to recognize, track, and measure targets, and further processing the graphics so that the computer produces images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies the related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include adversarial-perturbation generation, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition. The cloud computing, cloud storage, and database aspects of cloud technology are described below.
Cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide-area or local-area network to realize the computation, storage, processing, and sharing of data. It is a general term for the network, information, integration, management-platform, and application technologies applied in the cloud computing business model; these resources can form a resource pool and be used flexibly on demand. Cloud computing technology will become an important support. Background services of technical network systems require large amounts of computing and storage resources, for example video websites, image websites, and portal websites. With the continued development of the internet industry, each item may carry its own identification mark in the future, which will need to be transmitted to a back-end system for logical processing; data of different levels will be processed separately, and all kinds of industry data need strong back-end system support, which can only be realized through cloud computing. In the embodiments of the present application, recognition results can be stored via cloud technology.
Cloud storage is a new concept extended and developed from cloud computing. A distributed cloud storage system (hereinafter the storage system) aggregates a large number of storage devices of various types in a network (storage devices are also called storage nodes) through application software or application interfaces, using functions such as cluster applications, grid technology, and distributed storage file systems, so that they work together to provide external data storage and service access functions. In the embodiments of the present application, information such as the network configuration can be stored in the storage system for convenient retrieval by the server.
At present, the storage system stores data as follows: when logical volumes are created, each logical volume is allocated physical storage space, which may consist of the disks of one or several storage devices. A client stores data on a logical volume, that is, the data is stored on a file system. The file system divides the data into many parts, each part being an object; an object contains not only the data but also additional information such as a data identifier (ID). The file system writes each object into the physical storage space of the logical volume and records the storage location of each object, so that when the client requests access to the data, the file system can let the client access it according to the recorded storage locations.
The process by which the storage system allocates physical storage space for a logical volume is as follows: the physical storage space is divided in advance into stripes according to the estimated capacity of the objects to be stored on the logical volume (an estimate that often leaves a large margin relative to the capacity actually needed) and the redundant array of independent disks (RAID) scheme. A logical volume can be understood as a stripe, and physical storage space is thereby allocated for the logical volume.
A database can be regarded as an electronic filing cabinet, a place for storing electronic files; users can add, query, update, and delete the data in the files. A "database" is a collection of data that is stored together in a way that can be shared by multiple users, has as little redundancy as possible, and is independent of applications.
A database management system (DBMS) is a computer software system designed for managing databases and generally has basic functions such as storage, retrieval, security assurance, and backup. Database management systems can be classified by the database model they support, e.g., relational or XML (Extensible Markup Language); by the type of computer supported, e.g., server cluster or mobile phone; by the query language used, e.g., SQL (Structured Query Language) or XQuery; by performance emphasis, e.g., maximum scale or maximum operating speed; or by other schemes. Regardless of the classification, some DBMSs can span categories, for example by supporting multiple query languages simultaneously. In the embodiments of the present application, recognition results can be stored in the database management system for convenient retrieval by the server.
It should be specifically noted that the service terminal in the embodiments of the present application may be a device that provides voice and/or data connectivity, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as mobile telephones (or "cellular" telephones) and computers with mobile terminals, which may be portable, pocket-sized, hand-held, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with a radio access network; for example, personal communication service (PCS) telephones, cordless telephones, Session Initiation Protocol (SIP) phones, wireless local loop (WLL) stations, and personal digital assistants (PDA).
The applicant found in research that, in countermeasure testing of autonomous driving algorithms, the test result is easily disturbed by a countermeasure sample: a perturbation is added to an original image to generate the countermeasure sample. A classifier may correctly classify the original image as "speed limit 40 km/h", but after the perturbation is added the system classifies the perturbed image as "speed limit 5 km/h"; or the original image is correctly classified as "obstacle ahead" while the perturbed image is classified as "no obstacle ahead", even though human eyes can hardly see any difference between the two images. Once such an attack succeeds, the autonomous driving system is highly likely to be misled, creating serious hazards that threaten the safety of the vehicle and the driver.
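For illustration only, the following is a minimal sketch of one standard way such a perturbation can be generated, the fast gradient sign method (FGSM); the patent does not name a generation algorithm, and the model, label, and step size here are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Generate an adversarial image with one signed-gradient step.

    image:      tensor of shape (1, 3, H, W), values in [0, 1]
    true_label: tensor of shape (1,), e.g. the class index of "speed limit 40"
    epsilon:    perturbation budget; small enough to be near-invisible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation produced this way is what lets the perturbed "speed limit 40" sign be classified as a different class while remaining visually unchanged to a human.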
Improving an autonomous driving algorithm requires a large number of tests. For example, an attack image created on a digital image is printed directly as a physical object, the adversarially perturbed image information is then fed into the vehicle perception model through a camera in a real scene, and the test scene is interfered with to verify the adversarial robustness of the vehicle perception model. However, the current practice is inefficient and does not meet the requirement of rapid iteration of the vehicle perception model.
Therefore, the embodiments of the present application provide a countermeasure test method, system, and storage medium that aim to solve the prior-art problem that countermeasure samples in autonomous driving countermeasure test scenes are not comprehensive enough, and to adjust countermeasure samples dynamically in real time so as to realize efficient, low-cost, and safe countermeasure evaluation, thereby improving the efficiency of countermeasure detection.
As shown in fig. 2, an embodiment of the present application provides a possible countermeasure test method, applied to the countermeasure test system shown in fig. 1. The countermeasure test system includes a simulation platform 10 and an autonomous driving vehicle perception model 20, and the display interface of the simulation platform 10 includes a first display area and a second display area. The first display area includes function icons or menus corresponding to a plurality of candidate countermeasure patterns; the second display area displays a target vehicle driving according to a preset script. The target driving scene includes the driving target vehicle and at least one attack object, and the image of the target vehicle in the second display area is transmitted to the autonomous driving vehicle perception model through a port. Specifically, the countermeasure test method includes the following steps:
S101: obtaining a challenge sample;
The challenge samples are dynamically generated, and the embodiment of the application does not limit the number, the attack type, the attack object and the like of the challenge samples generated in the same time period or different time periods of the same road section.
In some implementations, the embodiment of the application can add the target object from the preset target object library to the simulation test scene according to the current test environment; and performing disturbance processing on the added target object by using the countermeasure pattern to obtain a countermeasure sample. In a specific embodiment, a target object database, an countermeasure pattern database and a driving scene database are preset in the method of the embodiment of the application.
The target objects in the preset target object library are the objects to be attacked. In a specific example, the preset target object library may include traffic signs, traffic facilities, buildings, vehicles, and the like; traffic signs such as traffic lights, sign boards, and lane lines, and traffic facilities such as cones, median strips, and guardrails.
At least one target object is selected from the preset target object library according to the current test environment (that is, the target object library is used for generating the countermeasure sample). For example, a sign board among the traffic signs or a cone among the traffic facilities can be selected alone, or the sign board and the cone can be selected in combination.
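As a minimal sketch of how such a preset target object library could be organized (the class names, fields, and asset paths below are illustrative assumptions, not the patent's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class TargetObject:
    name: str              # e.g. "stop_sign", "traffic_cone"
    category: str          # "traffic_sign", "traffic_facility", ...
    mesh_path: str         # 3D asset used by the simulation platform
    effective_area: tuple  # (x, y, w, h) region where a pattern may be applied

@dataclass
class TargetObjectLibrary:
    objects: dict = field(default_factory=dict)

    def add(self, obj: TargetObject) -> None:
        self.objects[obj.name] = obj

    def select(self, *names: str) -> list:
        """Pick one or several objects for the current test environment."""
        return [self.objects[n] for n in names]

library = TargetObjectLibrary()
library.add(TargetObject("stop_sign", "traffic_sign", "assets/stop.obj", (0, 0, 64, 64)))
library.add(TargetObject("traffic_cone", "traffic_facility", "assets/cone.obj", (0, 0, 32, 48)))
chosen = library.select("stop_sign", "traffic_cone")  # alone or in combination
```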
Fig. 3 is a schematic diagram of a specific application scenario of the countermeasure test method according to an embodiment of the present application. Referring to fig. 3, the second display area displays a driving scene and a target vehicle driving according to the preset script. The target object display area displays the selectable target objects; as shown in fig. 3, the target objects that may be added in this embodiment include a stop sign, a "speed limit 40" sign, a "road ends" sign, a traffic light, and pedestrians. The user may add a corresponding object to the driving scene; for example, in the second display area of fig. 3, the "road ends" sign is added to the driving scene by dragging its icon from the target object area.
Fig. 4 is an interface schematic diagram of another specific application scenario of the countermeasure test method according to an embodiment of the present application. As shown, the interface includes a first display area and a second display area. Icons of a plurality of candidate countermeasure patterns (e.g., countermeasure pattern 1, countermeasure pattern 2, countermeasure pattern 3, countermeasure pattern 4) are displayed in the first display area; the second display area displays a target vehicle driving according to the preset script, and the "road ends" sign has been added to its scene as in the embodiment of fig. 3. After the user selects countermeasure pattern 1 in the first display area, countermeasure pattern 1 is loaded into the effective area of the target object, for example the area of the "road ends" sign. Because countermeasure pattern 1 is added to the effective area of the target object, this is equivalent to perturbing the effective area of the target object with countermeasure pattern 1. Consequently, after video stream A, which contains the target object with countermeasure pattern 1 in its effective area, is captured from the first-person viewpoint of the target vehicle, the video frames in which countermeasure pattern 1 and the target object appear together become a target countermeasure sample capable of attacking the vehicle perception model.
Further, after video stream A is imported into the vehicle perception model through the program interface, the vehicle perception model may misrecognize the target object to which countermeasure pattern 1 has been added in the second display area, which means that a dynamic attack has been achieved by loading countermeasure pattern 1 into the effective area of the target object.
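At the pixel level, "loading a countermeasure pattern into the effective area" could amount to compositing the pattern over the object's region in each rendered frame. A hedged NumPy sketch, assuming the pattern has already been resized to match the effective area:

```python
import numpy as np

def apply_pattern(frame: np.ndarray, pattern: np.ndarray,
                  effective_area: tuple) -> np.ndarray:
    """Overlay a countermeasure pattern on the target object's effective area.

    frame:   H x W x 3 uint8 image rendered by the simulation platform
    pattern: h x w x 3 uint8 countermeasure pattern (assumed pre-resized)
    effective_area: (x, y, w, h) of the target object in frame coordinates
    """
    x, y, w, h = effective_area
    out = frame.copy()
    alpha = 0.9  # how strongly the pattern replaces the original texture (assumed)
    region = out[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = (
        alpha * pattern.astype(np.float32) + (1.0 - alpha) * region
    ).astype(np.uint8)
    return out
```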
In some specific examples, not every countermeasure pattern is applicable to a given target object, and different countermeasure patterns, or the same countermeasure pattern in different scenes, have different attack effects on the same target object. To guarantee the attack effect of the countermeasure pattern invoked in each round of attack testing, and the accuracy of that invocation, the embodiments of the present application also establish a correspondence between countermeasure patterns and target objects in advance. When a countermeasure pattern for generating the countermeasure sample is selected based on this preset correspondence, a suitable countermeasure pattern can be selected quickly and specifically, ensuring the attack effect of each round of testing and avoiding the low test efficiency caused by randomly selecting countermeasure samples. In these embodiments, the corresponding countermeasure pattern can be obtained accurately and rapidly according to the preset correspondence between countermeasure patterns and target objects. For some target objects, the countermeasure patterns that can be employed are not particularly limited.
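Such a preset correspondence could be as simple as a mapping from target objects to the countermeasure patterns known to work on them; the names below are illustrative assumptions:

```python
# Preset correspondence between target objects and applicable patterns.
PATTERN_CORRESPONDENCE = {
    "stop_sign":    ["pattern_1", "pattern_3"],
    "traffic_cone": ["pattern_2"],
    "pedestrian":   ["pattern_2", "pattern_4"],
}

ALL_PATTERNS = ["pattern_1", "pattern_2", "pattern_3", "pattern_4"]

def candidate_patterns(target_name: str) -> list:
    """Return adapted patterns for a target; unrestricted if none are preset."""
    return PATTERN_CORRESPONDENCE.get(target_name, ALL_PATTERNS)
```

Selecting only from `candidate_patterns(...)` is one way to avoid the inefficiency of testing randomly chosen patterns described above.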
Perturbing the target object in the simulation test scene with the countermeasure pattern may include: loading at least one countermeasure pattern into the effective area of a target object in the simulation test scene, thereby changing the shape and/or the color of the target object as seen by machine vision. The embodiments of the present application do not limit the shape or the color, as long as the result is distinguishable from the initial shape or color and can attack the vehicle perception model; nor do they limit the number of countermeasure patterns that may be used.
In a specific example, the embodiments of the present application enable flexible replacement of the target object. One purpose is to judge whether changing a traffic sign affects the perception model of the target vehicle. On the other hand, flexibly selecting target objects from the target object database and loading them into the simulation scene makes it possible to test richer countermeasure scenes.
To highlight the flexibility of the target countermeasure sample, in a specific example a new target object can be added to the preset target object library according to the requirements of the driving scene, and/or a new countermeasure pattern can be added to the countermeasure pattern library.
Because a driving scene database is preset in the embodiments of the present application, a driving scene can be selected from the database according to business requirements and loaded onto the simulation platform; one or more target objects are then added to the driving scene and a countermeasure sample is generated. It should be noted that, whether or not a driving scene database is preset, the embodiments of the present application can still build a driving scene before starting the countermeasure test instead of selecting an existing one from the database.
The driving scene may be: an urban traffic intersection with traffic lights, such as a T-junction, a crossroads, or a multi-way intersection; a roundabout where several roads converge; a campus road (for example, for supply vehicles or intelligent emergency vehicles that drive autonomously to an incident location in response to a campus event), such as a scene with speed-limit requirements or with designated driving routes or parking areas for campus autonomous vehicles; a closed driving environment such as a highway; or an open driving environment such as a national or provincial road. This list of test driving scenes is not exhaustive, and the embodiments of the present application are not limited thereto.
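A minimal sketch of selecting a candidate driving scene from a preset database and loading it for display; the platform API calls are assumptions, since the patent does not specify an implementation:

```python
DRIVING_SCENE_DB = {
    "urban_crossroads": "scenes/urban_crossroads.json",
    "roundabout":       "scenes/roundabout.json",
    "campus_road":      "scenes/campus_road.json",
    "highway":          "scenes/highway.json",
}

def load_driving_scene(platform, scene_name: str):
    """Load a candidate driving scene and show it in the second display area."""
    scene_file = DRIVING_SCENE_DB[scene_name]
    scene = platform.load_scene(scene_file)    # assumed simulation-platform API
    platform.second_display_area.show(scene)   # assumed simulation-platform API
    return scene
```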
Optionally, in some embodiments of the present application, real-time, dynamic generation of countermeasure samples for dynamic attack testing on a live road section can further be realized through user interaction, which mainly provides a customizable, targeted, and dynamically adjustable countermeasure evaluation method for the countermeasure test. Specifically, this may include S1011-S1012:
S1011: receiving a first operation instruction from the user for the target object.
The first operation instruction instructs the system to add a target countermeasure pattern to the target object.
The target object is the object to be attacked currently displayed in the second display area, and the countermeasure pattern is one selected by the user from the plurality of candidate countermeasure patterns in the preset countermeasure pattern library.
In a specific embodiment, when the user wants to attack the vehicle perception model using a target object currently displayed in the target driving scene, the user first selects a countermeasure pattern for that target object and adds the selected countermeasure pattern to the target object through interface interaction.
In this embodiment, the plurality of countermeasure patterns in the countermeasure pattern database are presented in the first display area. The user may select a target countermeasure pattern from the candidate patterns displayed there by mouse, touch screen, or the like, and then drag it into the effective area of the target object currently displayed in the second display area (that is, the object to be attacked). This yields a countermeasure sample, and the target driving scene picture in the second display area of the simulation platform then contains the countermeasure sample.
The simulation platform is preset with a plurality of candidate driving scenes, and the display interface further includes a third display area containing a scene icon for each candidate driving scene.
Before the step of receiving the first operation instruction from the user for the target object, the driving scene to be played on the display interface of the current simulation platform is determined, specifically by: receiving a second operation instruction from the user for a target scene icon in the third display area, the target scene icon being the scene icon selected by the user from the plurality of scene icons; and setting the driving scene in the second display area to the target driving scene corresponding to the target scene icon according to the second operation instruction.
The driving scenes preset on the simulation platform include scenes such as zebra crossings, traffic lights, lane changes, overtaking, and meeting oncoming vehicles; the background of each driving scene includes highways, rural roads, urban roads, commercial streets, mountain roads, and the like, and the simulation platform can display driving scenes against different backgrounds according to the user's selection.
In the embodiments of the present application, the countermeasure test conditions and process can be written into a script in advance, so that the test is executed according to the conditions and process in the script. In this embodiment, the simulation platform display interface may include a pause button and a play button for interrupting or continuing the test process. Before the target countermeasure pattern is added to the target object, the pause button is clicked to pause the currently displayed picture of the target driving scene and the first operation instruction is triggered; after the required target countermeasure sample is obtained, the play button is clicked to resume playing the target driving scene.
S1012: and the simulation platform responds to the first operation instruction, and adds the target countermeasure pattern into the effective area of the target object to obtain a target countermeasure sample.
When the target countermeasure pattern is within the effective area of the target object, the target object may be made to be a target countermeasure sample, so that the acquired recognition result of the vehicle perception model of the target object may be confused, that is, an area in which the countermeasure pattern may be made to have a countermeasure attack effect on the vehicle perception model when acting on the target object may be given.
It can be seen that, by means of user interaction, for example, the countermeasure pattern can be displayed and imported by means of automatic icon dragging, so that the importing process of the countermeasure pattern can be intuitively presented to the user, and the interestingness and the user experience of the model robustness detection are improved.
Optionally, in other embodiments of the present application, the countermeasure pattern may further be adjusted according to a preset strategy that includes at least one of the following: adjustment frequency, attack effect of the countermeasure sample, adversarial attack scene, and adversarial attack type.
Thus, by setting the preset strategy, the actual countermeasure test can be adjusted dynamically in multiple dimensions based on business requirements or evaluation accuracy, instead of following fixed operation logic (that is, a pre-written automated attack flow); the user can adapt to the needs of the actual countermeasure test at any time. Through continuous testing, better and faster evaluation can be achieved: countermeasure patterns with stronger attack effects are screened out, those with weaker attack effects are discarded, and the whole simulation evaluation platform is continuously optimized.
S102: and acquiring images of the simulation test scene containing the countermeasure sample by using the vehicle-mounted camera view angle of the target vehicle to obtain a video stream.
Wherein the challenge sample is included in the video stream.
In a specific example, a video stream of a virtual simulation test scene obtained from the angle of a vehicle camera is simulated on a simulation platform, namely, a virtual vehicle runs in the scene, and the video stream of the virtual simulation test scene is obtained from the angle of the vehicle camera of a target vehicle, so that a driving scene obtained from a front camera by a vehicle automatic driving system in a real driving environment is simulated, and then the video stream is input into a vehicle perception model for recognition.
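A minimal sketch of this capture-and-feed loop, assuming a generic simulator handle and a perception model callable on individual frames (all names are illustrative):

```python
def run_capture_loop(simulator, perception_model, num_frames: int = 300):
    """Render frames from the target vehicle's on-board camera and feed them
    to the perception model, the way an autonomous vehicle would consume its
    front-camera stream."""
    results = []
    for _ in range(num_frames):
        simulator.step()                                  # advance the scripted scene
        frame = simulator.render(camera="onboard_front")  # assumed simulator API
        results.append(perception_model(frame))           # per-frame recognition
    return results
```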
S103: the video stream containing the target challenge sample is input into the vehicle perception model 20. And processing the video stream based on a vehicle perception model of an automatic driving algorithm, and identifying the target countermeasure sample to obtain an identification result.
And the vehicle perception model identifies the 2D information and the 3D position information of the driving scene carried in the video stream through the visual model. Objects that the visual model can recognize include red and green lights, lane lines, traffic signs, falling rocks, pedestrians, vehicles, and the like. According to the embodiment of the application, various 2D and 3D countermeasure patterns of countermeasure samples are provided for different scenes of road sides and road scenes, and the countermeasure samples are dynamically generated in the simulation platform so as to deceive the vehicle perception model, so that the vehicle perception model can mistakenly identify the countermeasure samples as real results, and driving decision errors or malfunctions are caused.
For example, the simulation platform can simulate various weather environments: the platform is preset with various weather parameters, and different parameters render different weather. The target countermeasure sample acquired by the vehicle perception model then also carries the characteristics of the target weather parameters. For example, in a sunny simulation scene, the reflection intensity of objects is raised or their color and chroma are increased in the video captured by the camera; in rainy or foggy weather, parameters such as object sharpness are adjusted according to the corresponding algorithm. The system can therefore run countermeasure tests against the vehicle perception model under various weather scenes, making the countermeasure test of the model more comprehensive.
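As a sketch of how preset weather parameters might modulate each rendered frame before it reaches the perception model (the parameter names and values are assumptions for illustration):

```python
import numpy as np

# Assumed preset weather parameters: a brightness gain and a sharpness factor.
WEATHER_PARAMS = {
    "sunny": {"gain": 1.2, "sharpness": 1.0},
    "rainy": {"gain": 0.9, "sharpness": 0.6},
    "foggy": {"gain": 0.8, "sharpness": 0.4},
}

def render_weather(frame: np.ndarray, weather: str) -> np.ndarray:
    """Apply simple gain/blur adjustments standing in for weather rendering."""
    p = WEATHER_PARAMS[weather]
    out = frame.astype(np.float32) * p["gain"]
    if p["sharpness"] < 1.0:
        # Crude fain/fog effect: blend each pixel toward a shifted average
        # of its neighbors (a one-step blur).
        blurred = (out + np.roll(out, 1, axis=0) + np.roll(out, 1, axis=1)) / 3.0
        out = p["sharpness"] * out + (1.0 - p["sharpness"]) * blurred
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```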
The vehicle perception model identifies the various objects in the video stream, paying particular attention to the recognition result for the countermeasure sample: a first confidence and a second confidence are obtained for it. The first confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as a non-target object, and the second confidence is the confidence with which it identifies the target countermeasure sample as the target object.
The first confidence is compared with the second confidence. If the first confidence is higher than the second confidence, the vehicle perception model cannot correctly identify the original target object under the effect of the countermeasure pattern, and the countermeasure sample has successfully deceived the vehicle perception model. After the recognition result of a non-target object is obtained, the method further includes: taking the countermeasure sample corresponding to the non-target object as a valid countermeasure sample.
If the first confidence is not higher than the second confidence, the countermeasure sample did not succeed in deceiving the perception model and must be optimized further until the vehicle perception model is successfully deceived. To this end, it is generally necessary to adjust at least one countermeasure pattern used to generate the countermeasure sample that attacks the vehicle perception model and, after each adjustment, to perturb the at least one target object in the simulation test scene again with the adjusted pattern to obtain a countermeasure sample, until the first confidence is higher than the second confidence. After multiple rounds of testing and iteration, the countermeasure test system of the embodiments of the present application thus obtains the most reliable countermeasure sample; the embodiments do not limit the number of iterations.
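The comparison-and-adjustment loop described above can be sketched as follows; `adjust_pattern`, the scene/target objects, and the score-vector layout are assumptions about an implementation the patent leaves abstract:

```python
def countermeasure_round(model, scene, target, pattern, max_iters: int = 50):
    """Iterate until the perturbed target is misrecognized, i.e. until the
    confidence of some non-target class (first confidence) exceeds the
    confidence of the true class (second confidence)."""
    for _ in range(max_iters):
        sample = scene.apply(target, pattern)        # perturb the target object
        scores = model(sample.render())              # class-confidence vector
        second = scores[target.class_id]             # confidence it IS the target
        first = max(s for i, s in enumerate(scores) if i != target.class_id)
        if first > second:
            return sample                            # valid countermeasure sample
        pattern = adjust_pattern(pattern)            # per the preset strategy
    return None                                      # no successful attack found
```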
In some embodiments, the countermeasure pattern may be adjusted according to a strategy, such as adjusting the frequency of the countermeasure pattern, adjusting the adversarial scene, or changing the adversarial attack type.
Taking adjustment of the adversarial scene as an example: after the driving scene is adjusted, the target object in the simulation test scene is perturbed again to obtain a target countermeasure sample. For example, in a specific embodiment a cone is added to the built test scene and perturbed, say by changing its color; the recolored cone is the countermeasure sample. The video stream of the virtual scene, obtained on the simulation platform from the vehicle camera viewpoint and containing the recolored cone, is input into the vehicle perception model for recognition. The vehicle perception model is meant to identify obstacles or traffic signs in the image and then guide the vehicle to avoid them. If the cone cannot be identified, the virtual car on the simulation platform may collide with it, and the countermeasure test succeeds. If the virtual car recognizes the cone well, the attack effect of the target countermeasure sample is poor; the attack effect of the cone as a countermeasure sample can then be improved by adjusting its other attributes, for example changing the color or shape again, and the expected adversarial effect can be obtained by continually trying different characteristics (for example, after a color or shape change, the modified cone is placed in the effective area on the target road section so that the virtual car runs over it). The significance of the test is to optimize the vehicle's autonomous driving algorithm, for example by learning that the algorithm cannot identify cones of a certain color, and finally optimizing the vehicle perception model so that it can identify cones well under all kinds of color interference.
In the embodiment of the application, an automatic driving algorithm refers to an intelligent driving algorithm for supporting automatic driving in an automobile, wherein the intelligent driving algorithm is used for identifying obstacles in an image, guiding the automobile to run according to an image identification result, and then looking at how a detection algorithm of the automobile can react when encountering a target countering sample.
The vehicle perception model sends the recognition result to the virtual controller; the virtual controller generates a driving instruction for the target vehicle according to the recognition result and controls the driving decisions of the target vehicle on the simulation platform according to that instruction. When a countermeasure sample successfully deceives the vehicle perception model, the virtual controller issues an erroneous driving decision to the vehicle, and the resulting driving error or traffic accident is reflected on the simulation platform.
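The recognition-result-to-driving-instruction path through the virtual controller might look like the following toy sketch; the instruction vocabulary and result fields are assumptions for illustration only:

```python
def virtual_controller(recognition_result: dict) -> dict:
    """Map the perception model's recognition result to a driving instruction
    for the target vehicle on the simulation platform (toy logic)."""
    if recognition_result.get("obstacle_ahead"):
        return {"action": "brake"}
    if recognition_result.get("pedestrian_on_crossing"):
        return {"action": "stop_before_crossing"}
    return {"action": "proceed"}  # a spoofed model lands here and causes the error

# Example: a successfully deceived model reports neither obstacle nor pedestrian,
# so the controller issues "proceed" and the accident appears in the simulation.
print(virtual_controller({"obstacle_ahead": False, "pedestrian_on_crossing": False}))
```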
In the embodiment of the application, the countermeasure sample is obtained by selecting the target object required by the current test environment from a preset target object library, so the countermeasure sample can be dynamically adjusted in real time to the current test environment and the countermeasure test can be completed more comprehensively across every dimension. This solves the prior-art problem that automatic driving countermeasure test scenes are not comprehensive enough, and improves countermeasure detection efficiency.
Further, the method also includes materializing the target countermeasure sample so as to reproduce the security risk in a physical environment and verify the gap between the real effect of a sample produced on the simulation platform and its theoretical effect under simulation. For example, the cone pattern serving as the target countermeasure sample in the countermeasure test system is printed and made into a physical cone, which is placed in an automatic-driving road test scene; the driving decisions of an automatic driving vehicle equipped with the vehicle perception model are then tested in that physical environment, and the test result of the automatic driving vehicle is compared with the test result of the target vehicle of the countermeasure test system to verify the difference between the real effect of the sample and its theoretical effect under simulation.
For ease of understanding, the countermeasure test method of the embodiment of the present application is described below using the specific application scenario shown in fig. 5 as an example.
Referring to fig. 5, in this embodiment, the user first selects a pedestrian from the target object library as the target object and adds it to the second area, at the position of at least one pedestrian on the zebra crossing in the first period. A countermeasure pattern is then selected and used to perturb the pedestrian; the pedestrian carrying the countermeasure pattern serves as the target countermeasure sample.
The virtual vehicle captures a video stream of the test scene containing the target countermeasure sample from the angle of the vehicle-mounted camera and inputs it into the vehicle perception model to obtain a recognition result.
The process includes the following steps: receiving a first message indicating that the target vehicle is to pass the zebra crossing in the first period; and inputting the target countermeasure sample into the vehicle perception model to obtain a recognition result that no pedestrian is crossing the zebra crossing. The first message may be input by a user or preset by the system, for example through a pre-written script; this embodiment does not limit it.
When the target vehicle responds to the first message and prepares to drive across the zebra crossing in the first period, it first judges, through the vehicle perception model, whether pedestrians are on the crossing. If so, the vehicle stops in front of the crossing and waits; if not, the vehicle decelerates normally and passes the crossing.
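That zebra-crossing decision can be summarized in a short sketch (all names hypothetical); when the countermeasure sample spoofs the model, the `else` branch is wrongly taken:

```python
def cross_zebra_in_first_period(vehicle, perception_model, video_stream):
    """Decision logic for the first message: pass the zebra crossing in period one."""
    result = perception_model.identify(video_stream)
    if result.pedestrian_on_crossing:
        vehicle.stop_before_crossing()   # wait for pedestrians to pass
    else:
        vehicle.decelerate_and_pass()    # wrongly taken when the model is spoofed
```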
If the recognition result is that no pedestrian is crossing the zebra crossing, the following situations may occur for the target vehicle:
1) A traffic accident occurs: the target vehicle passes the zebra crossing on the assumption that no pedestrian is present, and a pedestrian is knocked down by the vehicle. 2) Traffic rules are violated: the target vehicle decelerates normally and passes the zebra crossing as if no pedestrian were there, narrowly missing the pedestrian. 3) In a complex situation, a traffic accident occurs: several pedestrians on the zebra crossing in the first period are all countermeasure samples carrying the countermeasure pattern; the target vehicle cannot recognize any of them, so some pedestrians are knocked down and a traffic accident results.
In a specific embodiment, if the driving scene displayed in the second display area is a traffic-light-intersection driving scene, the target object is a traffic light device displaying a red-light sign in the second period, and the target countermeasure sample is the traffic light device with a target countermeasure pattern added. Receiving the first operation instruction of the user for the target object then includes: receiving a fifth operation instruction of the user for the traffic light device, the fifth operation instruction indicating that a first countermeasure pattern whose classification label is "green light" is to be added to the traffic light device. In response to the first operation instruction, adding the target countermeasure pattern to the effective area of the target object to obtain the target countermeasure sample includes: in response to the fifth operation instruction, superimposing the first countermeasure pattern on the effective area of the traffic light device to obtain a first countermeasure sample. Inputting the video stream containing the target countermeasure sample into the vehicle perception model to obtain the recognition result then includes: receiving a second message indicating that the target vehicle is to drive through the traffic-light intersection in the second period; and inputting the first countermeasure sample into the vehicle perception model to obtain a recognition result that the traffic light device displays a green-light sign. The second message may be input by the user or preset by the system, which is not limited here. Specifically, when the target vehicle responds to the second message and prepares to drive through the intersection in the second period, it first judges the current light color through the vehicle perception model: if the light is green, the vehicle need not wait and drives through the intersection normally; if the light is red, the vehicle must stop and wait before the light.
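A hedged sketch of this traffic-light variant, with `overlay`, `render_from_vehicle_camera`, and the result field `light_color` as assumed placeholder APIs rather than the embodiment's actual interfaces:

```python
def traffic_light_test(light_device, green_pattern, vehicle, perceive, scene):
    """Overlay a 'green light'-labelled pattern on a device showing red and
    observe the target vehicle's decision in the second period."""
    sample = light_device.overlay(green_pattern)       # first countermeasure sample
    video = scene.render_from_vehicle_camera(sample)
    if perceive(video).light_color == "green":         # spoofed: red read as green
        vehicle.pass_intersection()                    # runs the red light in simulation
    else:
        vehicle.stop_before_light()                    # correct behavior on red
```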
In summary, the embodiments of the application have the following beneficial effects:
First, by interacting with the simulation platform in the countermeasure detection system, a user can intuitively and in a customized way add a countermeasure pattern on the simulation platform to the effective area of the object to be attacked, obtaining a target countermeasure sample; the system then automatically inputs the obtained target countermeasure sample into the vehicle perception model to test the robustness of the vehicle perception model.
Second, the technical scheme of the embodiment of the application performs countermeasure detection of the model entirely within the countermeasure detection system, simulating the physical-world detection flow through a simulation scene, so the model need not be tested in the field, which ensures detection safety. Because the target countermeasure sample in this scheme is generated on the simulation platform, there is no need to photograph and print countermeasure patterns; countermeasure samples can be generated continuously through user interaction while the vehicle is running. The technical scheme can therefore attack the vehicle perception model continuously during vehicle operation and test it more extensively, faster, and more comprehensively in a short time, improving the attack effect, reducing the time cost of countermeasure testing of the vehicle perception model, and shortening the iteration period of the vehicle perception model.
Corresponding to the foregoing method embodiments, an embodiment of the application further provides a countermeasure test system.
An embodiment of the present application provides an automatic driving algorithm test system. Referring to fig. 6, the system includes:
the simulation platform 61, whose display interface includes a first display area and a second display area, the first display area including a plurality of candidate countermeasure patterns and the second display area displaying a target vehicle running according to a preset script;
a target object library 62 for storing target objects;
a countermeasure sample generating unit 63 for performing disturbance processing on target objects in the target object library using a countermeasure pattern to obtain a countermeasure sample;
a first simulation processor 64, configured to capture images of a simulation test scene containing the countermeasure sample from the vehicle-mounted camera perspective to obtain a video stream;
and the vehicle perception model 65 is used for processing the video stream based on an automatic driving algorithm to obtain a test result.
The obtained test result includes a first confidence and a second confidence; if the first confidence is not higher than the second confidence, the countermeasure pattern is adjusted and the at least one target object in the simulation test scene is perturbed again to obtain a new countermeasure sample;
the first confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as not being the target object;
the second confidence is the confidence with which the vehicle perception model identifies the target countermeasure sample as being the target object.
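Purely as an illustration of how the units 61-65 might compose, under assumed class and method names that are not part of the patent:

```python
class CountermeasureTestSystem:
    """Structural sketch of the system of fig. 6 (units 61-65)."""

    def __init__(self, platform, object_library, sample_generator,
                 simulation_processor, perception_model):
        self.platform = platform                # 61: first/second display areas
        self.library = object_library           # 62: stores target objects
        self.generator = sample_generator       # 63: perturbs objects with patterns
        self.processor = simulation_processor   # 64: renders the camera video stream
        self.model = perception_model           # 65: processes the stream

    def run_once(self, target_object, pattern):
        sample = self.generator.perturb(target_object, pattern)
        video = self.processor.capture(self.platform.scene_with(sample))
        return self.model.process(video)        # test result: the two confidences
```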
Furthermore, the method according to the embodiments of the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of embodiments of the present application.
Alternatively, an embodiment of the application may be implemented as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having executable code (or a computer program, or computer instruction code) stored thereon which, when executed by a processor of an electronic device (or a server, etc.), causes the processor to perform some or all of the steps of the above-described method according to an embodiment of the application.
A non-transitory machine-readable storage medium provided by an embodiment of the present application has executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the countermeasure test method described above.
The processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The embodiment of the present application further provides another terminal. As shown in fig. 7, for convenience of explanation only the portion relevant to the embodiment of the present application is shown; for technical details not disclosed here, please refer to the method portion of the embodiments. The terminal can be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a point-of-sale terminal (Point of Sales, POS), a vehicle-mounted computer, and the like. Taking a mobile phone as an example, fig. 7 is a block diagram showing part of the structure of a mobile phone related to a terminal provided by an embodiment of the present application. Referring to fig. 7, the mobile phone includes: radio frequency (Radio Frequency, RF) circuit 710, memory 720, input unit 730, display unit 740, sensor 750, audio circuit 760, wireless fidelity (Wireless Fidelity, Wi-Fi) module 770, processor 780, and power supply 790. Those skilled in the art will appreciate that the handset structure shown in fig. 7 is not limiting; the handset may include more or fewer components than shown, combine certain components, or arrange components differently.
The following describes the components of the mobile phone in detail with reference to fig. 7:
The RF circuit 710 may be configured to receive and transmit signals during messaging or a call; in particular, it receives downlink information from a base station and passes it to the processor 780 for processing, and sends uplink data to the base station. Generally, the RF circuit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
The memory 720 may be used to store software programs and modules, and the processor 780 performs the various functional applications and data processing of the handset by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.); the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the handset. In addition, the memory 720 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 730 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. Specifically, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, also called a touch screen, can collect touch operations by the user on or near it and drive the corresponding connection device according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends them to the processor 780; it can also receive and execute commands sent by the processor 780. The touch panel 731 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 731, the input unit 730 may include other input devices 732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 740 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. Further, the touch panel 731 may cover the display panel 741; when the touch panel 731 detects a touch operation on or near it, it transfers the operation to the processor 780 to determine the type of touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to the type of touch event.
Although in fig. 7, the touch panel 731 and the display panel 741 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 741 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 741 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and the magnitude and direction of gravity when stationary, and can be used in applications that recognize handset gestures (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may also be configured on the handset, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described in detail here.
The audio circuit 760, speaker 761, and microphone 762 may provide an audio interface between the user and the handset. The audio circuit 760 may transmit the electrical signal converted from received audio data to the speaker 761, which converts it into a sound signal for output; conversely, the microphone 762 converts collected sound signals into electrical signals, which the audio circuit 760 receives and converts into audio data; the audio data are processed by the processor 780 and then sent via the RF circuit 710 to, for example, another mobile phone, or output to the memory 720 for further processing.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 770, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 7 shows the Wi-Fi module 770, it is not an essential component of the mobile phone and can be omitted as needed without changing the essence of the application.
The processor 780 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions and processes of the mobile phone by running or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby performing overall monitoring of the mobile phone. Optionally, the processor 780 may include one or more processing units; the processor 780 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications.
It will be appreciated that the modem processor described above may alternatively not be integrated into the processor 780. The handset further includes a power supply 790 (e.g., a battery) for powering the various components; the power supply may be logically connected to the processor 780 through a power management system, which manages charging, discharging, and power consumption.
Fig. 8 is a schematic diagram of a server structure according to an embodiment of the present application. The server 820 may vary considerably in configuration or performance and may include one or more central processing units (Central Processing Units, CPU) 822 (e.g., one or more processors), a memory 832, and one or more storage media 830 (e.g., one or more mass storage devices) storing application programs 842 or data 844. The memory 832 and the storage medium 830 may be transitory or persistent. The program stored in the storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 822 may be configured to communicate with the storage medium 830 and execute on the server 820 the series of instruction operations in the storage medium 830. The server 820 may also include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input/output interfaces 858, and/or one or more operating systems 841, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments may be based on the structure of the server 820 shown in fig. 8; for example, the steps illustrated in fig. 2 may be based on this server structure. The processor 822 may, by invoking instructions in the memory 832, perform the following:
obtaining a target countermeasure sample through the input/output interface 858;
capturing images of the target object in a simulation test scene containing the target countermeasure sample from the vehicle-mounted camera perspective of the target vehicle to obtain a video stream;
inputting the video stream into the vehicle perception model through the input/output interface 858 to obtain a recognition result, where the recognition result is used to control the virtual controller to generate a driving instruction for the target vehicle; the recognition result indicates that a first confidence is higher than a second confidence, the first confidence being the confidence with which the vehicle perception model identifies the target countermeasure sample as not being the target object, and the second confidence being the confidence with which it identifies the target countermeasure sample as being the target object;
where obtaining the target countermeasure sample includes the following steps:
acquiring at least one material required by the current test environment from a preset target object library;
and performing countermeasure disturbance processing on the target object based on the at least one material to obtain the target countermeasure sample.
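A sketch of this server-side round under the same hedging: `io`, `pick_for_current_environment`, and the callables passed in are placeholders, not the actual interfaces of fig. 8:

```python
def server_test_round(io, object_library, perturb, render, perception_model):
    """One server-side test round following the steps listed above."""
    target = object_library.pick_for_current_environment()   # at least one material
    sample = perturb(target)                     # countermeasure disturbance processing
    video = render(sample)                       # vehicle-mounted camera perspective
    result = perception_model(video)             # recognition result
    io.send(result)                              # drives the virtual controller
    return result.conf_not_target > result.conf_target       # spoofed?
```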
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media.
The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state disk (Solid State Disk, SSD)), etc.
The aspects of the embodiments of the present application have been described in detail above with reference to the accompanying drawings. Those skilled in the art will appreciate that the acts and modules referred to in the specification are not necessarily required to implement the embodiments of the application. In addition, the steps of the methods of the embodiments may be reordered, combined, and pruned according to actual needs, and the modules of the devices of the embodiments may be combined, divided, and pruned according to actual needs.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (9)
1. A countermeasure test method, wherein the method is applied to a countermeasure test system, the countermeasure test system comprising a simulation platform and an automatic driving vehicle perception model, and a display interface of the simulation platform comprising a first display area and a second display area, the second display area displaying a target vehicle running according to a preset script and the first display area comprising a plurality of candidate countermeasure patterns; the method comprises the following steps:
obtaining a countermeasure sample;
capturing images of a simulation test scene containing the countermeasure sample from a vehicle-mounted camera perspective to obtain a video stream, which is imported into the automatic driving vehicle perception model through an automatic interface;
Processing the video stream based on the automatic driving vehicle perception model to obtain a test result of the target vehicle;
wherein obtaining the countermeasure sample comprises the following steps:
Acquiring at least one target object required by the current test environment from a preset target object library;
Performing countermeasure disturbance processing on the at least one target object to obtain a countermeasure sample;
wherein processing the video stream based on the automatic driving vehicle perception model comprises:
identifying the countermeasure sample in the video stream based on the automatic driving vehicle perception model to obtain a first confidence and a second confidence, and comparing the first confidence with the second confidence to obtain a driving instruction for the target vehicle;
the first confidence being the confidence with which the automatic driving vehicle perception model identifies a target countermeasure sample as not being the target object;
and the second confidence being the confidence with which the automatic driving vehicle perception model identifies the target countermeasure sample as being the target object;
wherein acquiring at least one target object required by the current test environment from the preset target object library and performing countermeasure disturbance processing on the at least one target object to obtain the countermeasure sample comprises:
Selecting a target object from the second display area in response to a user instruction to select the target object from the second display area;
selecting a countermeasure pattern from a preset algorithm library in response to a first operation instruction of the user for selecting the countermeasure pattern;
adding an countermeasure disturbance to an effective area of the target object by using the countermeasure pattern to obtain the countermeasure sample;
wherein processing the video stream based on the automatic driving vehicle perception model further comprises:
if the first confidence is not higher than the second confidence, adjusting the countermeasure pattern and performing disturbance processing again on the at least one target object in the simulation test scene with the adjusted countermeasure pattern to obtain a countermeasure sample, until the first confidence is higher than the second confidence.
2. The method according to claim 1, wherein a correspondence is preset between countermeasure patterns and the target object;
and selecting the countermeasure pattern from the preset algorithm library in response to the first operation instruction of the user for selecting the countermeasure pattern comprises:
selecting a determined countermeasure pattern from among a plurality of countermeasure patterns having a correspondence with the target object, in response to the first operation instruction.
3. The method according to claim 2, wherein the method further comprises:
receiving a second operation instruction of the user;
and in response to the second operation instruction, selecting one driving scene from a plurality of candidate driving scenes in a driving scene database preset on the simulation platform, and loading and displaying it in the second display area.
4. The method according to claim 3, wherein adding the countermeasure disturbance to the effective area of the target object using the countermeasure pattern comprises:
perturbing the shape and/or the color of the target object.
5. The method of claim 1, wherein the countermeasure pattern is adjusted according to a preset strategy, the preset strategy comprising at least one of:
adjusting the frequency, adjusting the attack effect of the countermeasure sample, adjusting the countermeasure attack scene, and adjusting the countermeasure attack type.
6. The method according to claim 1, wherein the method further comprises:
materializing the target countermeasure sample and placing the materialized target countermeasure sample in a physical environment to obtain a test result of an automatic driving vehicle equipped with the vehicle perception model in the physical environment;
and comparing the test result of the automatic driving vehicle with the test result of the target vehicle of the countermeasure test system.
7. The method according to claim 1, wherein the method further comprises:
adding a new target object to the preset target object library; and
adding, for the added new target object, a corresponding countermeasure pattern to the countermeasure pattern library.
8. A countermeasure test system for performing the method of claim 1, the system comprising a simulation platform, a target object library, a countermeasure sample generation unit, a first simulation processor, and a vehicle perception model;
wherein a display interface of the simulation platform comprises a first display area and a second display area, the first display area comprising a plurality of candidate countermeasure patterns and the second display area displaying a target vehicle running according to a preset script;
the target object library is used for storing target objects;
the countermeasure sample generation unit is used for performing disturbance processing on the target objects in the target object library using a countermeasure pattern to obtain a countermeasure sample;
The first simulation processor is used for acquiring images of a simulation test scene containing the countermeasure sample at the vehicle-mounted camera view angle to obtain a video stream;
and the vehicle perception model is used for processing the video stream based on an automatic driving algorithm to obtain a test result.
9. A non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the countermeasure test method of any of claims 1-7.