CN115223010A - Countermeasure sample generation method and system for intelligent driving target detection scene - Google Patents
- Publication number
- CN115223010A CN115223010A CN202210797167.XA CN202210797167A CN115223010A CN 115223010 A CN115223010 A CN 115223010A CN 202210797167 A CN202210797167 A CN 202210797167A CN 115223010 A CN115223010 A CN 115223010A
- Authority
- CN
- China
- Prior art keywords
- attack
- picture
- target detection
- sample
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention provides an adversarial sample generation method for an intelligent driving target detection scene. The method comprises: receiving an original picture and obtaining a target detection picture from a neural network model for the target detection scene; determining an attack area on the target detection picture and adding perturbation information through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information becomes an attack sample; inputting the attack sample into the neural network model to obtain an attack detection result picture, and, when comparison of the attack detection result picture with a preset attack result does not satisfy a predetermined condition, adding perturbation information to the attack area in the attack detection result picture again to form a new attack sample and feeding it back into the neural network model, iterating until the comparison satisfies the predetermined condition; and outputting the attack detection result picture obtained in the last iteration as the adversarial sample. By adapting a black-box attack algorithm in this way, the generated sample is closer to reality.
Description
Technical Field
The invention relates to the technical field of automobiles, and in particular to a method and a system for generating adversarial samples for an intelligent driving target detection scene.
Background
An adversarial sample is an input to which a perturbation imperceptible to the human eye has been added so that a machine learning model makes a wrong judgment. With the rapid development of modern artificial intelligence technology, machine learning models have become increasingly accurate, freeing large amounts of manpower from heavy tasks, and human reliance on artificial intelligence keeps growing. For intelligent driving, which is highly dependent on artificial intelligence technology, research on adversarial sample generation methods and adversarial attacks therefore has important safety significance.
Currently, adversarial attacks in the image domain have two main directions. One is image classification: the whole picture contains only one object, and only the object's class needs to be distinguished, without localizing it. The other is target detection: all targets in the image must be both localized and classified.
In real intelligent driving applications, scenes requiring target detection are more common than scenes requiring only image classification. Because the neural network model needed for a target detection task is more complex, and the subtle perturbations that work on the simpler neural networks of image classification scenes cannot be applied to such a complex model, attacking a target detection scene is harder than attacking an image classification task. For this reason, current academic work attacks target detection tasks with white-box attacks, which presuppose that enough information can be obtained to customize the perturbation.
However, in an actual attack on an intelligent driving model, it is hard to obtain enough information for a white-box attack. Manufacturers typically package the entire path from acquired data to final output so tightly that the outside essentially cannot obtain the internal information. White-box attacks in a real target detection scene therefore have low feasibility against intelligent driving models, while black-box attacks are closer to reality. At present, however, no black-box attack algorithm directly targets the target detection scene.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is to provide a method and a system for generating adversarial samples for an intelligent driving target detection scene, adapting a black-box attack algorithm so that the generated samples are closer to reality.
To solve the above technical problem, an embodiment of the present invention provides an adversarial sample generation method for an intelligent driving target detection scene, where the method includes:
receiving an original picture, and inputting it into a neural network model for the target detection scene to obtain a target detection picture;
determining an attack area on the target detection picture, and adding perturbation information to the attack area through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information in the attack area becomes an attack sample;
inputting the attack sample into the neural network model to obtain an attack detection result picture; when the result of comparing the attack detection result picture with a preset attack result does not satisfy a predetermined condition, adding perturbation information to the attack area in the attack detection result picture again to form a new attack sample, and inputting that attack sample into the neural network model again for iterative computation, until the result of comparing the attack detection result picture obtained in some iteration with the preset attack result satisfies the predetermined condition;
and after the iterative computation ends, outputting the attack detection result picture obtained in the final iteration as an adversarial sample of the target detection scene.
Wherein the predetermined condition is that the target within the attack area is not successfully identified and the perturbation is not easily noticed by the human eye.
The neural network model is an SSD (Single Shot MultiBox Detector) model for the target detection scene.
Wherein the method further comprises:
training the neural network model using the adversarial samples.
Wherein the original picture is manually uploaded by a user; or, the original picture is read from a specified directory.
An embodiment of the present invention also provides an adversarial sample generation system for an intelligent driving target detection scene, comprising:
an original picture receiving unit, configured to receive an original picture and input it into a neural network model for the target detection scene to obtain a target detection picture;
a picture attack area perturbation unit, configured to determine an attack area on the target detection picture and add perturbation information to the attack area through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information in the attack area becomes an attack sample;
an attack area iterative perturbation unit, configured to input the attack sample into the neural network model to obtain an attack detection result picture, and, when the result of comparing the attack detection result picture with a preset attack result does not satisfy a predetermined condition, add perturbation information to the attack area in the attack detection result picture again to obtain a new attack sample and input it into the neural network model again for iterative computation, until the result of comparing the attack detection result picture obtained in some iteration with the preset attack result satisfies the predetermined condition;
and an adversarial sample generation unit, configured to output the attack detection result picture obtained in the last iteration as an adversarial sample of the target detection scene after the iterative computation ends.
Wherein the predetermined condition is that the target within the attack area is not successfully identified and the perturbation is not easily noticed by the human eye.
The neural network model is an SSD model in a target detection scene.
Wherein the system further comprises:
a model training unit, configured to train the neural network model using the adversarial samples.
Wherein the original picture is manually uploaded by a user; or, the original picture is read from a specified directory.
The embodiment of the invention has the following beneficial effects:
1. adversarial samples are generated in a target detection scene based on the BA (boundary attack) algorithm, so that the samples are closer to reality;
2. the invention improves the BA algorithm: the attack area is selected before the adversarial sample is generated, the information of the attack area is passed to the BA algorithm structure, and the perturbation range is narrowed through convergence within the attack area while the adversarial sample is verified against the neural network model, thereby shortening the attack time and improving efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; for those skilled in the art, other drawings obtained from these drawings without inventive labor also fall within the scope of the present invention.
Fig. 1 is a flowchart of an adversarial sample generation method for an intelligent driving target detection scene according to an embodiment of the present invention;
fig. 2 is a flowchart of step S3 in the adversarial sample generation method for an intelligent driving target detection scene according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an adversarial sample generation system for an intelligent driving target detection scene according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
The inventors conducted intensive research on the target detection scene and identified the connection between target detection and image classification. The BA algorithm was originally a black-box attack algorithm for image classification and is in most cases applied to recognition based on image classification; the inventors therefore made an innovative improvement to it, generating adversarial samples for the target detection scene through the BA algorithm, so that the BA algorithm can be used for adversarial attacks on neural network models in target detection scenes.
To this end, the inventors introduced a hyperparameter to select a designated target. After the designated target is selected, only its classification result is considered, and the classification results of the other targets in the original image are disregarded. The target detection problem is thus approximately converted into an image classification problem, and its input and output become very close to the input and output required by the BA algorithm in an image classification scene, so the BA algorithm can be applied to the target detection scene.
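The conversion described above can be sketched as a small wrapper that exposes only the classification result of the designated target, making a detector look like a classifier to the BA algorithm. All names here (`make_classifier`, `toy_detector`) are hypothetical, and the detector interface, a list of (box, label, score) tuples, is an assumption for illustration rather than the patent's actual data structure.

```python
def make_classifier(detector, target_index):
    """Wrap a detector so it behaves like a classifier for one target.

    `detector(image)` is assumed to return a list of (box, label, score)
    tuples; only the label of the designated target is exposed, which is
    the classifier-style interface the BA algorithm expects.
    """
    def classify(image):
        detections = detector(image)
        if target_index < len(detections):
            _box, label, _score = detections[target_index]
            return label
        return None  # designated target no longer detected
    return classify

# Toy detector standing in for the SSD model: two cars detected.
def toy_detector(image):
    return [((0, 0, 10, 10), "car", 0.9), ((20, 0, 30, 10), "car", 1.0)]

# Designate the second (right-hand) target; all other detections are ignored.
classify_right_car = make_classifier(toy_detector, target_index=1)
```

The BA attack then only needs this single-label output, exactly as in an image classification scene.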
As shown in fig. 1, in an embodiment of the present invention, the inventors provide an adversarial sample generation method for an intelligent driving target detection scene based on the improved BA algorithm, the method comprising the following steps:
step S1, receiving an original picture, and inputting it into a neural network model for the target detection scene to obtain a target detection picture;
step S2, determining an attack area on the target detection picture, and adding perturbation information to the attack area through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information in the attack area becomes an attack sample;
step S3, inputting the attack sample into the neural network model to obtain an attack detection result picture; when the result of comparing the attack detection result picture with a preset attack result does not satisfy a predetermined condition, adding perturbation information to the attack area in the attack detection result picture again to form a new attack sample, and inputting that attack sample into the neural network model again for iterative computation, until the result of comparing the attack detection result picture obtained in some iteration with the preset attack result satisfies the predetermined condition;
and step S4, after the iterative computation ends, outputting the attack detection result picture obtained in the final iteration as an adversarial sample of the target detection scene.
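Steps S1 to S4 can be sketched as a single loop. `perturb`, `meets_condition`, and `model` are hypothetical placeholders for the BA perturbation step, the predetermined-condition check, and the SSD model; the toy instantiation exists only to make the control flow runnable.

```python
def generate_adversarial_sample(original, model, perturb, meets_condition, max_iters=1000):
    """S1-S4: iterate BA perturbations on the attack area until the
    detection result satisfies the predetermined condition."""
    sample = perturb(original)            # S2: initial perturbation of the attack area
    for _ in range(max_iters):
        result = model(sample)            # S3: run the detector on the attack sample
        if meets_condition(result):       # compare with the preset attack result
            return sample                 # S4: output the final result picture
        sample = perturb(sample)          # add perturbation to the attack area again
    raise RuntimeError("no adversarial sample found within the iteration budget")

# Toy instantiation: the "model" sums a pixel list, the attack succeeds
# once the sum exceeds 5, and each perturbation bumps the first pixel.
adv = generate_adversarial_sample(
    original=[1, 1, 1],
    model=sum,
    perturb=lambda s: [s[0] + 1] + s[1:],
    meets_condition=lambda r: r > 5,
)
```

The real loop would carry images instead of lists, but the termination logic is the same.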
Specifically, in step S1, a desired original picture is first selected and uploaded. The original picture can be obtained in multiple ways: for example, manually uploaded by a user, read from a specified directory, or shot and captured by a vehicle-mounted camera during automatic driving.
Next, the original picture is input into the SSD neural network model for the target detection scene to obtain the picture after model detection, i.e., the target detection picture.
In step S2: the core idea of the BA algorithm is to find the boundary between adversarial and non-adversarial samples, so the original attack process can only add perturbation over the whole picture and verify the effectiveness of the adversarial sample by repeatedly shrinking the perturbation range over multiple rounds of attack, which makes its efficiency fairly low. The BA algorithm is improved here by combining it with the operating logic of the target detection model: the target range to be attacked (i.e., the attack area target) is selected before the adversarial sample is generated, and the attack area target's information is passed into the BA algorithm structure when the algorithm is called, so that the initially perturbed region is the set attack point and the attack range is narrowed through convergence within the attack area target.
Therefore, an attack area is determined on the target detection picture at the start, and perturbation information is added to the attack area through the BA algorithm, so that the target detection picture carrying perturbation information in the attack area becomes the attack sample and the attack area becomes the subsequent attack range.
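Limiting the perturbation to the attack area can be sketched with a region check over pixel coordinates. The image-as-nested-lists representation and the uniform-noise proposal are simplifying assumptions for illustration, not the BA algorithm's actual proposal distribution.

```python
import random

def perturb_region(image, region, scale=0.1, seed=None):
    """Add uniform noise only inside the attack area.

    `image` is a list of rows of floats in [0, 1]; `region` is
    (top, left, bottom, right). Pixels outside the region are copied
    unchanged, which is what narrows the BA search space in the
    improved algorithm.
    """
    rng = random.Random(seed)
    top, left, bottom, right = region
    out = []
    for r, row in enumerate(image):
        new_row = []
        for c, px in enumerate(row):
            if top <= r < bottom and left <= c < right:
                px = min(1.0, max(0.0, px + rng.uniform(-scale, scale)))
            new_row.append(px)
        out.append(new_row)
    return out

img = [[0.5] * 8 for _ in range(8)]
adv = perturb_region(img, region=(2, 2, 6, 6), seed=0)
```

Only the 4x4 interior block is perturbed; the border pixels stay exactly at their original values.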
In step S3, as shown in fig. 2: step S31, inputting the attack sample into the neural network model to obtain an attack detection result picture;
step S32, judging whether the result of comparing the attack detection result picture with the preset attack result satisfies the predetermined condition; if not, executing step S33; if yes, jumping to step S34;
step S33, adding perturbation information to the attack area in the attack detection result picture again to form a new attack sample, and returning to step S31;
and step S34, outputting the finally obtained attack detection result picture.
It should be noted that the predetermined condition is that the target within the attack area is no longer successfully identified and the perturbation is not easily noticed by the human eye. That is, if the attack detection result picture is not yet enough to make the SSD model misjudge, or its similarity to the original picture is too low so that the perturbation can be recognized by the human eye, the iterative loop continues and the perturbation information keeps being modified, until the SSD model fails to identify the target within the attack area and the perturbation is not easily noticed by the human eye.
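A minimal sketch of the two-part predetermined condition, assuming the "not easily noticed" half is approximated by a mean absolute pixel difference below a threshold `eps`; the patent does not specify a perceptibility metric, so both the metric and the threshold value are assumptions.

```python
def meets_condition(detected_label, target_label, original, perturbed, eps=0.05):
    """True when the model misclassifies the target AND the perturbation
    is small enough to be hard to see (mean absolute difference < eps)."""
    misclassified = detected_label != target_label
    flat_orig = [px for row in original for px in row]
    flat_pert = [px for row in perturbed for px in row]
    mean_diff = sum(abs(a - b) for a, b in zip(flat_orig, flat_pert)) / len(flat_orig)
    return misclassified and mean_diff < eps

orig = [[0.5, 0.5]]
pert = [[0.52, 0.5]]  # tiny perturbation, well under eps
```

Both clauses must hold: a large perturbation fails the check even if the model is already fooled, so the loop keeps shrinking the perturbation.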
In step S4, the attack detection result picture obtained in the last iteration of step S3 is output as the adversarial sample of the target detection scene.
It should be noted that the BA algorithm probes the decision boundary with binary search; because the binary search is random, its efficiency cannot be guaranteed, and converging to the desired range often takes a long time. The inventors' improved BA algorithm perturbs only a specified area of the picture (i.e., the attack area), which narrows the attack range and can be roughly regarded as shrinking the input picture. The computation needed for the attack is thereby greatly reduced compared with attacking the complete picture, so the attack is faster: the average attack time is only 70% of the original.
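BA's binary search toward the decision boundary can be sketched as interpolating between the original picture and the current adversarial sample, keeping the point closest to the original that is still misclassified. `is_adversarial` is a hypothetical black-box oracle, and the one-dimensional toy boundary below only illustrates the convergence.

```python
def binary_search_to_boundary(original, adversarial, is_adversarial, steps=20):
    """Move the adversarial sample as close to the original as possible
    while the model still misclassifies it (BA's projection step)."""
    lo, hi = 0.0, 1.0  # blend coefficient: 0 = original, 1 = fully adversarial
    for _ in range(steps):
        mid = (lo + hi) / 2
        blend = [o + mid * (a - o) for o, a in zip(original, adversarial)]
        if is_adversarial(blend):
            hi = mid   # still adversarial: move closer to the original
        else:
            lo = mid   # crossed back over the boundary: back off
    return [o + hi * (a - o) for o, a in zip(original, adversarial)]

# Toy one-pixel example: the model is "fooled" whenever the pixel exceeds 0.6.
result = binary_search_to_boundary([0.0], [1.0], lambda x: x[0] > 0.6)
```

Each query costs one model evaluation, which is why restricting the search to the attack area (a smaller effective input) saves so much time.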
In the embodiment of the present invention, the neural network model can further be trained with the verified, effective adversarial samples to improve its safety and robustness. Accordingly, the method further comprises: training the neural network model (i.e., the SSD model) with the adversarial samples to obtain a neural network model with better safety and robustness.
In one embodiment, before the attack starts the SSD model can detect two cars, distributed from left to right in the original picture, and accurately classify both as cars with 100% confidence. The SSD localizes and classifies each detected object, draws detection boxes for the two detected cars according to the localization coordinates, and marks the classification result (car) and the confidence 1.0 above each detection box.
During the attack, the area where the right car is located is selected as the attack area (car 1.0), the BA algorithm is applied to the attack area, and another designated picture replaces the right car as the attack starting point.
Before the attack iteration, the object in the attack area is classified as some other class (called class A for convenience) rather than as a car. During the attack iteration, on the premise that the classification result always remains non-car, the appearance of the designated picture substituted into the attack area is gradually brought closer to the original right-car picture. The final result is that the attacked picture is almost identical to the original picture, but the object in the attack area is classified as class A rather than as a car.
Meanwhile, because the actually correct class is car, the confidence for class A is low (< 0.5) and cannot reach the detection confidence threshold of 0.6, so the detection result is discarded as untrusted; that is, the object is treated as absent. After the attack, the detection box in the original right-car area therefore disappears, and only the left car is still detected normally.
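The confidence-threshold filtering described in this example can be sketched directly, with the 0.6 threshold taken from the text; the detection tuples are illustrative stand-ins for the SSD model's output.

```python
def filter_detections(detections, threshold=0.6):
    """Keep only detections whose confidence reaches the threshold;
    low-confidence results are discarded as if the object were absent."""
    return [d for d in detections if d[2] >= threshold]

# After the attack: the left car is still confident, while the right
# region is now a low-confidence "class A" that falls below 0.6.
after_attack = [
    ((0, 0, 10, 10), "car", 1.0),
    ((20, 0, 30, 10), "class_a", 0.4),
]
kept = filter_detections(after_attack)
```

Only the left car survives the filter, reproducing the disappearing detection box described above.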
As shown in fig. 3, an adversarial sample generation system for an intelligent driving target detection scene provided in an embodiment of the present invention comprises:
an original picture receiving unit 110, configured to receive an original picture and input it into a neural network model for the target detection scene to obtain a target detection picture;
a picture attack area perturbation unit 120, configured to determine an attack area on the target detection picture and add perturbation information to the attack area through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information in the attack area becomes an attack sample;
an attack area iterative perturbation unit 130, configured to input the attack sample into the neural network model to obtain an attack detection result picture, and, when the result of comparing the attack detection result picture with a preset attack result does not satisfy a predetermined condition, add perturbation information to the attack area in the attack detection result picture again to obtain a new attack sample and input it into the neural network model again for iterative computation, until the result of comparing the attack detection result picture obtained in some iteration with the preset attack result satisfies the predetermined condition;
and an adversarial sample generation unit 140, configured to output the attack detection result picture obtained in the last iteration as an adversarial sample of the target detection scene after the iterative computation ends.
Wherein the predetermined condition is that the target within the attack area is not successfully identified and the perturbation is not easily noticed by the human eye.
The neural network model is an SSD model in a target detection scene.
Wherein the system further comprises:
a model training unit, configured to train the neural network model using the adversarial samples.
The embodiment of the invention has the following beneficial effects:
1. adversarial samples are generated in a target detection scene based on the BA algorithm, so that the samples are closer to reality;
2. the method improves the BA algorithm: the attack area is selected before the adversarial sample is generated, the information of the attack area is passed to the BA algorithm structure, and the perturbation range is narrowed through convergence within the attack area while the adversarial sample is verified against the neural network model, thereby shortening the attack time and improving efficiency; in addition, the neural network model is trained with the verified, effective adversarial samples to improve its safety and robustness.
It should be noted that, in the foregoing system embodiment, the included system units are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of mutual distinction and are not used to limit the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above disclosure describes only preferred embodiments of the present invention and certainly cannot be taken to limit the scope of the claims of the present invention.
Claims (10)
1. An adversarial sample generation method for an intelligent driving target detection scene, characterized by comprising the following steps:
receiving an original picture, and inputting it into a neural network model for the target detection scene to obtain a target detection picture;
determining an attack area on the target detection picture, and adding perturbation information to the attack area through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information in the attack area becomes an attack sample;
inputting the attack sample into the neural network model to obtain an attack detection result picture; when the result of comparing the attack detection result picture with a preset attack result does not satisfy a predetermined condition, adding perturbation information to the attack area in the attack detection result picture again to form a new attack sample, and inputting that attack sample into the neural network model again for iterative computation, until the result of comparing the attack detection result picture obtained in some iteration with the preset attack result satisfies the predetermined condition;
and after the iterative computation ends, outputting the attack detection result picture obtained in the final iteration as an adversarial sample of the target detection scene.
2. The adversarial sample generation method for an intelligent driving target detection scene according to claim 1, wherein the predetermined condition is that the target within the attack area is not successfully identified and the perturbation is not easily noticed by the human eye.
3. The adversarial sample generation method for an intelligent driving target detection scene according to claim 1, wherein the neural network model is an SSD model in a target detection scene.
4. The adversarial sample generation method for an intelligent driving target detection scene according to claim 1, further comprising:
training the neural network model using the adversarial samples.
5. The adversarial sample generation method for an intelligent driving target detection scene according to claim 1, wherein the original picture is manually uploaded by a user; or, the original picture is read from a specified directory.
6. An adversarial sample generation system for an intelligent driving target detection scene, characterized by comprising:
an original picture receiving unit, configured to receive an original picture and input it into a neural network model for the target detection scene to obtain a target detection picture;
a picture attack area perturbation unit, configured to determine an attack area on the target detection picture and add perturbation information to the attack area through the boundary attack (BA) algorithm, so that the target detection picture carrying the perturbation information in the attack area becomes an attack sample;
an attack area iterative perturbation unit, configured to input the attack sample into the neural network model to obtain an attack detection result picture, and, when the result of comparing the attack detection result picture with a preset attack result does not satisfy a predetermined condition, add perturbation information to the attack area in the attack detection result picture again to obtain a new attack sample and input it into the neural network model again for iterative computation, until the result of comparing the attack detection result picture obtained in some iteration with the preset attack result satisfies the predetermined condition;
and an adversarial sample generation unit, configured to output the attack detection result picture obtained in the last iteration as an adversarial sample of the target detection scene after the iterative computation ends.
7. The adversarial sample generation system for an intelligent driving target detection scene according to claim 6, wherein the predetermined condition is that the target within the attack area is not successfully identified and the perturbation is not easily noticed by the human eye.
8. The adversarial sample generation system for an intelligent driving target detection scene according to claim 6, wherein the neural network model is an SSD model in a target detection scene.
9. The adversarial sample generation system for an intelligent driving target detection scene according to claim 6, further comprising:
a model training unit, configured to train the neural network model using the adversarial samples.
10. The adversarial sample generation system for an intelligent driving target detection scene according to claim 6, wherein the original picture is manually uploaded by a user; or, the original picture is read from a specified directory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210797167.XA CN115223010A (en) | 2022-07-08 | 2022-07-08 | Countermeasure sample generation method and system for intelligent driving target detection scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115223010A true CN115223010A (en) | 2022-10-21 |
Family
ID=83610515
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210797167.XA Pending CN115223010A (en) | 2022-07-08 | 2022-07-08 | Countermeasure sample generation method and system for intelligent driving target detection scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115223010A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902018A (en) * | 2019-03-08 | 2019-06-18 | 同济大学 | A kind of acquisition methods of intelligent driving system test cases |
CN111177757A (en) * | 2019-12-27 | 2020-05-19 | 支付宝(杭州)信息技术有限公司 | Processing method and device for protecting privacy information in picture |
CN113515774A (en) * | 2021-04-23 | 2021-10-19 | 北京航空航天大学 | Privacy protection method for generating countermeasure sample based on projection gradient descent method |
CN114169409A (en) * | 2021-11-18 | 2022-03-11 | 浪潮(北京)电子信息产业有限公司 | Countermeasure sample generation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10733477B2 (en) | Image recognition apparatus, image recognition method, and program | |
CN108805016B (en) | Head and shoulder area detection method and device | |
CN102110228B (en) | Method of determining reference features for use in an optical object initialization tracking process and object initialization tracking method | |
CN109614907B (en) | Pedestrian re-identification method and device based on feature-enhanced guided convolutional neural network | |
CN110378837B (en) | Target detection method and device based on fish-eye camera and storage medium | |
CN112016402B (en) | Self-adaptive method and device for pedestrian re-recognition field based on unsupervised learning | |
CN111914665B (en) | Face shielding detection method, device, equipment and storage medium | |
CN112016531A (en) | Model training method, object recognition method, device, equipment and storage medium | |
CN111091739B (en) | Automatic driving scene generation method and device and storage medium | |
CN112686835B (en) | Road obstacle detection device, method and computer readable storage medium | |
US8244649B2 (en) | Structured differential learning | |
KR20210151773A (en) | Target re-recognition method and apparatus, terminal and storage medium | |
CN114359669A (en) | Picture analysis model adjusting method and device and computer readable storage medium | |
CN114220097A (en) | Anti-attack-based image semantic information sensitive pixel domain screening method and application method and system | |
JP7438365B2 (en) | Learning utilization system, utilization device, learning device, program and learning utilization method | |
CN115223010A (en) | Countermeasure sample generation method and system for intelligent driving target detection scene | |
KR20190056873A (en) | Apparatus for detecting object using multi neural network and method thereof, and method for learning detection of object | |
CN108764110B (en) | Recursive false detection verification method, system and equipment based on HOG characteristic pedestrian detector | |
CN112926515B (en) | Living body model training method and device | |
CN111860093B (en) | Image processing method, device, equipment and computer readable storage medium | |
CN115723777A (en) | Automatic driving control method and system and storage medium | |
KR100621883B1 (en) | An adaptive realtime face detecting method based on training | |
KR102334388B1 (en) | Method and Apparatus for Action Recognition Using Sequential Feature Data | |
CN113076840A (en) | Vehicle post-shot image brand training method | |
CN110569865B (en) | Method and device for recognizing vehicle body direction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||