CN111079851B - Vehicle type identification method based on reinforcement learning and bilinear convolution network - Google Patents
- Publication number: CN111079851B (application CN201911371980.5A)
- Authority
- CN
- China
- Prior art keywords
- network
- state
- vehicle type
- fine
- reinforcement learning
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06F18/253 — Pattern recognition; analysing; fusion techniques of extracted features
- G06F18/214 — Pattern recognition; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06V2201/08 — Image or video recognition; detecting or categorising vehicles
- Y02T10/40 — Climate change mitigation in transportation; engine management systems
Abstract
The invention discloses a vehicle type identification method based on reinforcement learning and a bilinear convolutional network. The method comprises: constructing a deep network model, setting the hyper-parameters of a fine-grained classification network and initializing the network; establishing a Markov decision model for optimizing the saliency features; performing scale transformation on the data set; optimizing the attention area, i.e. with the parameters of the fine-grained classification network fixed, inputting the data set into the network, optimizing the saliency area with a reinforcement learning algorithm and selecting the optimal attention area; establishing a loss function for updating the network parameters; repeatedly training the network after feature fusion until the attention area no longer changes; and inputting the vehicle type image to be tested into the trained model to obtain the corresponding detection result. The method uses the reinforcement learning network to extract salient bottom-layer features and fuses the high-level semantic features with the low-level salient features through bilinear interpolation, improving identification accuracy.
Description
Technical Field
The invention relates to a vehicle type identification method, in particular to a vehicle type identification method based on reinforcement learning and a bilinear convolution network.
Background
The vehicle type identification problem can be regarded as an application branch of fine-grained classification, i.e. distinguishing different subclasses of the same class that are very similar in appearance. Because vehicle images collected in daily scenes are easily affected by factors such as pose, viewing angle and occlusion, vehicle models of different brands may differ only slightly, while models of the same brand may differ considerably. How to identify the vehicle type effectively is therefore an urgent application problem in fine-grained classification.
The bilinear convolutional network is a model that has achieved relatively high-precision fine-grained classification in recent years. It has a simple structure and trains efficiently, but it takes only the features of the last layer as the input features for classification; these features retain mostly high-level information and lose much of the detail. Because the objects of fine-grained classification are similar in appearance and differ only in their details, how well the detail features are characterized strongly influences the recognition rate. If the bottom-layer and high-level features of the bilinear network are fused directly, the large scale of the bottom-layer features requires some form of dimensionality reduction during fusion. When the information lost in dimensionality reduction is mainly detail information, classification accuracy does not improve, while the training time of the network and the final classification cost increase.
Reinforcement learning is a method for solving sequential decision problems: the problem to be solved is modeled as a Markov decision process (MDP) model, and a classical reinforcement learning method, such as a temporal-difference algorithm, a least-squares temporal-difference algorithm or an actor-critic algorithm, is then used to solve for the optimal policy. Reinforcement learning is therefore a well-suited method for extracting the salient parts of the bottom-layer features.
Disclosure of Invention
The invention aims to provide a vehicle type identification method based on reinforcement learning and a bilinear convolutional network, which improves the vehicle type identification accuracy rate under the condition of less vehicle type pictures.
The technical scheme of the invention is as follows: a vehicle type identification method based on reinforcement learning and bilinear convolutional network comprises the following steps:
(1) constructing a depth network model: constructing a fine-grained classification network for vehicle identification based on reinforcement learning and a bilinear convolutional network;
(2) setting the hyper-parameters of the fine-grained classification network: the hyper-parameters comprise the learning rate, the iteration times and the batch size of the network;
(3) initializing the network: initializing a weight value and a threshold value of the fine-grained classification network;
(4) establishing a Markov decision model for optimizing the significance characteristics;
(5) preprocessing a data set: carrying out scale transformation on the data set;
(6) optimizing the attention area: under the condition that the parameters of the fine-grained classification network are fixed, inputting the data set into the fine-grained classification network, optimizing the saliency area by adopting a reinforcement learning algorithm, and selecting the optimal attention area;
(7) constructing a loss function: establishing a loss function for updating the fine-grained classification network parameters, wherein the loss function is defined as the sum of squares of errors of a real label of the data and a predicted label of the data;
(8) fusing features: for each sample in the data set, the attention area optimized in the step (6) and the features of the fifth convolutional layer are used to obtain the final fusion result, which is used for classification;
(9) training a network: under the condition of fixing the optimal attention area, the data set is utilized and a gradient descent method is adopted to train the fine-grained classification network again until the training error is smaller than a preset threshold value;
(10) alternate training: repeating the steps (6) to (9) until the attention area is not changed any more;
(11) and inputting the vehicle type image to be tested into the trained deep network model to obtain a corresponding detection result.
Further, the parallel feature extraction layers of the bilinear convolutional network in the step (1) adopt the first to fifth convolutional layers of VGG16. The features output by these layers transition from detail features to high-level semantic features. After the fifth convolutional layer, a bilinear vector is obtained through an outer product operation; finally a fully connected layer is attached and a softmax operation is performed on the output, realizing the identification and classification of the vehicle type.
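The outer-product (bilinear pooling) step after the fifth convolutional layer can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the signed-square-root and L2 normalization steps are common practice for bilinear CNNs rather than something the patent specifies.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling: outer product of two (C, H, W) feature maps,
    summed over spatial locations, flattened to a (C_a * C_b,) vector."""
    c_a, h, w = feat_a.shape
    c_b = feat_b.shape[0]
    a = feat_a.reshape(c_a, h * w)
    b = feat_b.reshape(c_b, h * w)
    bilinear = a @ b.T / (h * w)                 # outer product averaged over locations
    vec = bilinear.flatten()
    vec = np.sign(vec) * np.sqrt(np.abs(vec))    # signed square-root (common practice)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec       # L2 normalization
```

In the patent's setting both inputs would be the two parallel VGG16 conv5 outputs; the resulting vector feeds the fully connected layer and softmax.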
Further, the step (4) of establishing a Markov decision model for optimizing the saliency features comprises:
401) the state space X is the set of all sub-features, of the spatial scale of the fifth convolutional layer, taken within the feature map generated by the third convolutional layer, X = {x1, x2, …, xn};
402) The motion space U is a set of movements of the state in the state space up, down, left, and right;
403) the state transition function is f: X × U → X; for any state x ∈ X and any action u ∈ U, the next state is the state reached after action u occurs, and that state is again a sub-feature of fifth-convolutional-layer scale within the feature map generated by the third convolutional layer;
404) the reward function is r: X × U → R; for any state x ∈ X, executing any action u ∈ U yields an immediate reward.
Preferably, the motion space U = {0, 1, 2, 3}, where 0 denotes moving the state up, 1 moving it left, 2 moving it down, and 3 moving it right.
Further, the step (6) of optimizing the attention area comprises the steps of:
601) setting the values of the parameters: discount rate γ, attenuation factor λ, maximum number of episodes E, maximum time step T per episode, learning rate α and exploration rate ε;
602) initializing Q1(x, u) = 0 and Q2(x, u) = 0 for all state-action pairs;
603) judging whether the number of episodes has reached the maximum value E: if so, go to step 612); otherwise, go to step 604);
604) judging whether the maximum time step is reached: if so, go to step 603); otherwise, go to step 605);
605) initializing the current state x = x0;
606) randomly generating a probability p in (0, 1) and judging whether p < ε holds: if so, the action selected in the current state is u = argmax_u (Q1(x, u) + Q2(x, u)); otherwise, randomly selecting any action in the action set;
607) executing the currently selected action u to obtain the next state x' corresponding to u;
608) judging whether the classification result obtained by the output layer is the same as the real label: if so, the immediate reward r = 1; otherwise, r = 0;
609) randomly generating a probability p in (0, 1) and judging whether p < 0.5 holds: if so, updating Q1(x, u) = r + γ max_u' Q1(x', u'); otherwise, updating Q2(x, u) = r + γ max_u' Q2(x', u');
610) updating the current time step t = t + 1, and going to step 604) to make the judgment;
611) updating the current episode e = e + 1;
612) outputting the current optimal policy and value functions Q1(x, u), Q2(x, u).
Further, the loss function in the step (7) is L = Σ (y − y′)², wherein y represents the vehicle type classification result obtained by the network and y′ represents the real label of the vehicle type picture.
The technical scheme provided by the invention has the following beneficial effects: the bilinear convolutional network is used as the basic deep network framework, the reinforcement learning network extracts the salient bottom-layer features, bilinear interpolation fuses the high-level semantic features with the low-level salient features, and finally the fully connected layer and the softmax operation of the bilinear convolutional network perform the specific vehicle type identification, which improves the vehicle type identification accuracy. Combined with the reinforcement learning network, the method can extract the salient characteristics of vehicle images well even when few vehicle images are available, is suitable for online vehicle type recognition, and can be applied to online real-time recognition in the field of video monitoring.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a network model diagram of the method of the present invention;
FIG. 3 is a single network model refinement diagram of the bilinear model in the method of the present invention.
Detailed Description
Referring to fig. 1, the method for recognizing a vehicle type based on reinforcement learning and bilinear convolutional network according to the present embodiment includes the following steps:
(1) constructing a depth network model: a fine-grained classification network for vehicle identification is constructed based on reinforcement learning and a bilinear convolutional network; model diagrams are shown in fig. 2 and fig. 3. The parallel feature extraction layers of the bilinear convolutional network adopt the first to fifth convolutional layers of VGG16, whose outputs transition from detail features to high-level semantic features. After the fifth convolutional layer, a bilinear vector is obtained through an outer product operation; finally a fully connected layer is attached and a softmax operation is performed on the output, realizing vehicle type identification and classification.
(2) setting the hyper-parameters of the network: the learning rate of the network is 0.02, the number of iterations is 10000, the batch size is 10 pictures, and the training threshold is 0.01;
(3) initializing the network: setting all weights and thresholds of the network to be 0.00001;
(4) constructing an MDP model: constructing a Markov decision model for optimizing the significance characteristics, wherein the MDP model is established as follows:
401) modeling the state space: the state space consists of all the features that can be obtained on the output feature map of the third convolutional layer (Conv3) by taking windows of the fifth convolutional layer's spatial size; the state space includes the 4 feature maps containing the four corners of the edges;
402) modeling the action space: the actions move the state up, left, down and right, depicted by the numbers 0, 1, 2 and 3 respectively;
403) modeling the transition function: assuming the position of the feature corresponding to the current state is (x, y), then:
if an upward action is taken, the position of the next state is (x, y − 1);
if a leftward action is taken, the position of the next state is (x − 1, y);
if a downward action is taken, the position of the next state is (x, y + 1);
if a rightward action is taken, the position of the next state is (x + 1, y).
404) modeling the reward function: the reward depends on the current output of the deep network, i.e. the vehicle type category obtained with the current optimal attention area when a vehicle image is input into the deep network. When the predicted category is the same as the real category, the immediate reward is 1; otherwise, the reward is 0.
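The MDP modeled in 401)–404) can be sketched in Python, with the state represented as the top-left position of a Conv5-sized window on the Conv3 feature map. The grid size (28) and window size (7) below are assumed values for illustration, not taken from the patent.

```python
# Hypothetical sizes: Conv3 output 28x28, Conv5-scale window 7x7 (assumed values).
GRID_W, GRID_H = 28, 28
WIN = 7

# Action encoding from the patent: 0 = up, 1 = left, 2 = down, 3 = right.
MOVES = {0: (0, -1), 1: (-1, 0), 2: (0, 1), 3: (1, 0)}

def step(state, action):
    """Transition function f(x, u): move the window, clamped to the feature map."""
    x, y = state
    dx, dy = MOVES[action]
    nx = min(max(x + dx, 0), GRID_W - WIN)
    ny = min(max(y + dy, 0), GRID_H - WIN)
    return (nx, ny)

def reward(predicted_label, true_label):
    """Immediate reward: 1 if the network's prediction matches the real label."""
    return 1 if predicted_label == true_label else 0
```

The clamping at the borders is one possible reading of the transition function; the patent does not state how out-of-range moves are handled.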
(5) preprocessing the data set: the data set is downloaded and subjected to scale transformations, i.e. operations such as translation and rotation, to expand the original data set. The purpose of the expansion is to increase the robustness of the network, i.e. to give it good recognition capability on noisy images, while preventing overfitting during training. The Car-196 data set is downloaded from: https://ai.stanford.edu/~jkrause/cars/car_dataset.html.
To give the network better generalization capability, the bird data set CUB-200 and the airplane data set FGVC-Aircraft are also used in the training stage; their download addresses are respectively:
CUB-200: http://www.vision.caltech.edu/visipedia/CUB-200.html and FGVC-Aircraft: https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/.
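A minimal sketch of the scale-transformation step (translation and rotation) described above, assuming images are NumPy arrays; the shift amount and rotation step are illustrative parameters, not values from the patent.

```python
import numpy as np

def augment(image, shift=(2, 2), k_rot=1):
    """Augment an (H, W, C) image: translate, then rotate.
    shift and k_rot are illustrative parameters."""
    shifted = np.roll(image, shift=shift, axis=(0, 1))  # translation (wrap-around)
    rotated = np.rot90(shifted, k=k_rot)                # rotation by k * 90 degrees
    return rotated

def expand_dataset(images):
    """Return the originals plus one augmented copy of each, expanding the set."""
    out = list(images)
    out += [augment(im) for im in images]
    return out
```

A production pipeline would use small random rotations and crops rather than 90-degree steps; this sketch only illustrates the expansion idea.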
(6) Optimizing the attention area: optimizing the saliency region by adopting a reinforcement learning algorithm, training a network, and selecting an optimal attention region, wherein the specific implementation process of the optimization can be described as follows:
601) setting the values of the parameters: the discount rate γ is 0.9, the attenuation factor λ is 0.95, the maximum number of episodes E is 200, the maximum time step T per episode is 1000, the learning rate α is 0.5, and the exploration rate ε is 0.1;
602) initializing Q1(x, u) = 0 and Q2(x, u) = 0, then judging whether the number of episodes has reached the maximum value E: if so, go to step 611); otherwise, go to step 603);
603) judging whether the maximum time step is reached: if so, go to step 602); otherwise, go to step 604);
604) randomly initializing the current state x = x0;
605) randomly generating a probability p in (0, 1) and judging whether p < ε holds: if so, the action selected in the current state is u = argmax_u (Q1(x, u) + Q2(x, u)); otherwise, randomly selecting any one of the four actions in the action set;
606) executing the currently selected action u to obtain the corresponding next state x';
607) judging whether the classification result obtained by the output layer is the same as the real label: if so, the immediate reward r = 1; otherwise, r = 0;
608) randomly generating a probability p in (0, 1) and judging whether p < 0.5 holds: if so, updating Q1(x, u) = r + γ max_u' Q1(x', u'); otherwise, updating Q2(x, u) = r + γ max_u' Q2(x', u');
609) updating the current time step t = t + 1, and going to step 603) to make the judgment;
610) updating the current episode e = e + 1;
611) outputting the current optimal policy and value functions Q1(x, u), Q2(x, u).
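The loop in steps 601)–611) can be sketched as a tabular two-table procedure. This follows the update rule as literally stated (each table bootstraps from its own maximum, and the learning rate α does not appear in the stated update); step_fn and reward_fn are generic callbacks standing in for the deep network's transition and label check.

```python
import random

def train_q(actions, step_fn, reward_fn, x0, episodes=200, max_t=1000,
            gamma=0.9, eps=0.1):
    """Tabular sketch of the two-table update loop described above.
    step_fn(x, u) -> next state; reward_fn(x, u, x2) -> 0 or 1."""
    q1, q2 = {}, {}
    get = lambda q, x, u: q.get((x, u), 0.0)
    for _ in range(episodes):
        x = x0                                   # initialize the current state
        for _ in range(max_t):
            # step 605): greedy action when p < eps, otherwise random
            if random.random() < eps:
                u = max(actions, key=lambda a: get(q1, x, a) + get(q2, x, a))
            else:
                u = random.choice(actions)
            x2 = step_fn(x, u)                   # step 606): take the action
            r = reward_fn(x, u, x2)              # step 607): match -> 1, else 0
            q = q1 if random.random() < 0.5 else q2   # step 608): pick a table
            q[(x, u)] = r + gamma * max(get(q, x2, a) for a in actions)
            x = x2
    return q1, q2
```

Note that standard double Q-learning evaluates the selected action with the other table and applies a learning-rate-weighted update; the sketch keeps the patent's literal rule instead.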
(7) constructing a loss function: the loss function for the network training is L = Σ (y − y′)², wherein y represents the vehicle type classification result obtained by the network and y′ represents the real label of the vehicle type picture.
(8) fusing features: after the feature region with the optimal value is obtained, the region is fixed and fused with the high-level features (the output of the 5th convolution module) by addition to obtain the fused high-level features; the output of each layer and of the fused features is shown in fig. 2;
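The fusion by addition, with the low-level region resized to the high-level scale via bilinear interpolation as described in the abstract, can be sketched as follows (single-channel maps for brevity; the patent operates on multi-channel features).

```python
import numpy as np

def bilinear_resize(feat, out_h, out_w):
    """Bilinear interpolation of an (H, W) feature map to (out_h, out_w)."""
    h, w = feat.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # interpolate along x on the two bracketing rows, then along y
    top = feat[np.ix_(y0, x0)] * (1 - wx) + feat[np.ix_(y0, x1)] * wx
    bot = feat[np.ix_(y1, x0)] * (1 - wx) + feat[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def fuse(attention_feat, conv5_feat):
    """Resize the low-level attention-region feature to conv5's scale and add."""
    h, w = conv5_feat.shape
    return conv5_feat + bilinear_resize(attention_feat, h, w)
```

The per-channel application and any channel-matching projection are left out here; they would depend on the concrete feature shapes, which the patent does not specify.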
(9) training a network: under the condition of fixing the optimal attention area, the data set is utilized, and a gradient descent method is adopted to train the network again until the training error is smaller than a preset threshold value;
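A toy sketch of the gradient-descent training step with the embodiment's learning rate (0.02) and training threshold (0.01); the linear model and random data below are stand-ins for the fine-grained classification network, used only to illustrate the sum-of-squares loss and the stopping criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the fused features and targets (assumed shapes, not from the patent).
X = rng.normal(size=(20, 8))
W_true = rng.normal(size=(8, 3))
Y = X @ W_true                        # linearly realizable targets so the loss can reach 0

W = np.zeros((8, 3))                  # trainable parameters of a linear stand-in model
lr, threshold = 0.02, 0.01            # learning rate and training threshold from step (2)
for _ in range(10000):                # iteration budget from step (2)
    err = X @ W - Y                   # predicted label minus real label
    loss = float((err ** 2).sum())    # sum-of-squares loss from step (7)
    if loss < threshold:              # stop once the training error is below the threshold
        break
    W -= lr * 2 * X.T @ err / len(X)  # gradient descent update
```

The real network would backpropagate this loss through the bilinear layers instead of a single weight matrix.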
(10) alternate training: repeating the steps (6) to (9) until the attention area is not changed any more;
(11) and inputting the vehicle type image to be tested into the depth network model to obtain a corresponding detection result.
The recognition accuracy of the vehicle type recognition method of the invention on each data set: [accuracy table not reproduced in this copy]
Claims (4)
1. a vehicle type identification method based on reinforcement learning and bilinear convolutional network is characterized by comprising the following steps:
(1) constructing a depth network model: constructing a fine-grained classification network for vehicle identification based on reinforcement learning and a bilinear convolutional network;
(2) setting the hyper-parameters of the fine-grained classification network: the hyper-parameters comprise the learning rate, the iteration times and the batch size of the network;
(3) initializing the network: initializing a weight value and a threshold value of the fine-grained classification network;
(4) establishing a Markov decision model for optimizing the significance characteristics:
401) the state space X is the set of all sub-features, of the spatial scale of the fifth convolutional layer, taken within the feature map generated by the third convolutional layer, X = {x1, x2, …, xn};
402) The motion space U is a set of movements of the state in the state space up, down, left, and right;
403) the state transition function is f: X × U → X; for any state x ∈ X and any action u ∈ U, the next state is the state reached after action u occurs, and that state is again a sub-feature of fifth-convolutional-layer scale within the feature map generated by the third convolutional layer;
404) the reward function is r: X × U → R; for any state x ∈ X, executing any action u ∈ U yields an immediate reward;
(5) preprocessing a data set: carrying out scale transformation on the data set;
(6) optimizing the attention area: under the condition that the parameters of the fine-grained classification network are fixed, the data set is input into the fine-grained classification network, a reinforcement learning algorithm is adopted to optimize the saliency region, and the optimal attention region is selected, wherein the method comprises the following steps:
601) setting the values of the parameters: discount rate γ, attenuation factor λ, maximum number of episodes E, maximum time step T per episode, learning rate α and exploration rate ε;
602) initializing Q1(x, u) = 0 and Q2(x, u) = 0 for all state-action pairs;
603) judging whether the number of episodes has reached the maximum value E: if so, go to step 612); otherwise, go to step 604);
604) judging whether the maximum time step is reached: if so, go to step 603); otherwise, go to step 605);
605) initializing the current state x = x0;
606) randomly generating a probability p in (0, 1) and judging whether p < ε holds: if so, the action selected in the current state is u = argmax_u (Q1(x, u) + Q2(x, u)); otherwise, randomly selecting any action in the action set;
607) executing the currently selected action u to obtain the next state x' corresponding to u;
608) judging whether the classification result obtained by the output layer is the same as the real label: if so, the immediate reward r = 1; otherwise, r = 0;
609) randomly generating a probability p in (0, 1) and judging whether p < 0.5 holds: if so, updating Q1(x, u) = r + γ max_u' Q1(x', u'); otherwise, updating Q2(x, u) = r + γ max_u' Q2(x', u');
610) updating the current time step t = t + 1, and going to step 604) to make the judgment;
611) updating the current episode e = e + 1;
612) outputting the current optimal policy and value functions Q1(x, u), Q2(x, u);
(7) Constructing a loss function: establishing a loss function for updating the fine-grained classification network parameters, wherein the loss function is defined as the sum of squares of errors of a real label of the data and a predicted label of the data;
(8) fusing features: for each sample in the data set, the attention area optimized in the step (6) and the features of the fifth convolutional layer are used to obtain the final fusion result, which is used for classification;
(9) training a network: under the condition of fixing the optimal attention area, the data set is utilized and a gradient descent method is adopted to train the fine-grained classification network again until the training error is smaller than a preset threshold value;
(10) alternate training: repeating the steps (6) to (9) until the attention area is not changed any more;
(11) and inputting the vehicle type image to be tested into the trained deep network model to obtain a corresponding detection result.
2. The vehicle type identification method based on reinforcement learning and bilinear convolutional network of claim 1, wherein the parallel feature extraction layers of the bilinear convolutional network in step (1) adopt the first to fifth convolutional layers of VGG16; the features output by these layers transition from detail features to high-level semantic features; a bilinear vector is obtained by an outer product operation after the fifth convolutional layer; finally, a fully connected layer is attached and a softmax operation is performed on the output, realizing vehicle type identification and classification.
3. The vehicle type identification method based on reinforcement learning and bilinear convolutional network of claim 1, wherein the motion space U = {0, 1, 2, 3}, where 0 represents moving the state up, 1 moving it left, 2 moving it down, and 3 moving it right.
4. The vehicle type identification method based on reinforcement learning and bilinear convolutional network of claim 1, wherein the loss function in step (7) is L = Σ (y − y′)², wherein y represents the vehicle type classification result obtained by the network and y′ represents the real label of the vehicle type picture.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911371980.5A | 2019-12-27 | 2019-12-27 | Vehicle type identification method based on reinforcement learning and bilinear convolution network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111079851A | 2020-04-28 |
| CN111079851B | 2020-09-18 |
Family
ID=70318777
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911371980.5A (Active, CN111079851B) | Vehicle type identification method based on reinforcement learning and bilinear convolution network | 2019-12-27 | 2019-12-27 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN111079851B (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN112149720A | 2020-09-09 | 2020-12-29 | Fine-grained vehicle type identification method |
| CN112183602B | 2020-09-22 | 2022-08-26 | Multi-layer feature fusion fine-grained image classification method with parallel rolling blocks |
| CN113191218A | 2021-04-13 | 2021-07-30 | Vehicle type recognition method based on bilinear attention collection and convolution long-term and short-term memory |
| CN113158980A | 2021-05-17 | 2021-07-23 | Tea leaf classification method based on hyperspectral image and deep learning |
Citations (8)
| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| US9569736B1 | 2015-09-16 | 2017-02-14 | Intelligent medical image landmark detection |
| CN106096535A | 2016-06-07 | 2016-11-09 | Face verification method based on bilinear joint CNN |
| CN109086792A | 2018-06-26 | 2018-12-25 | Fine-grained image classification method based on a detection and identification network architecture |
| CN109359684A | 2018-10-17 | 2019-02-19 | Fine-grained vehicle type recognition method based on weakly supervised positioning and subclass similarity measurement |
| CN110135231A | 2018-12-25 | 2019-08-16 | Animal face recognition method, device, computer equipment and storage medium |
| CN109902562A | 2019-01-16 | 2019-06-18 | Driver abnormal posture monitoring method based on reinforcement learning |
| CN109858430A | 2019-01-28 | 2019-06-07 | Multi-person posture detection method based on reinforcement learning optimization |
| CN110334572A | 2019-04-04 | 2019-10-15 | Fine recognition method for vehicles under multiple viewing angles |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| US8874498B2 | 2011-09-16 | 2014-10-28 | Unsupervised, supervised, and reinforced learning via spiking computation |
| CN108898060A | 2018-05-30 | 2018-11-27 | Vehicle type recognition method based on convolutional neural networks in a vehicle environment |
| CN109086672A | 2018-07-05 | 2018-12-25 | Pedestrian re-identification method based on reinforcement-learning adaptive blocking |
2019-12-27: application CN201911371980.5A filed in China; granted as patent CN111079851B, status Active.
Non-Patent Citations (1)
Title |
---|
Bilinear CNN Model for Fine-Grained Classification Based on Subcategory-Similarity Measurement; Xinghua Dai et al.; Applied Sciences; 2019-01-16; pp. 1-16 * |
Also Published As
Publication number | Publication date |
---|---|
CN111079851A (en) | 2020-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111079851B (en) | Vehicle type identification method based on reinforcement learning and bilinear convolution network | |
CN111291212B (en) | Zero-shot sketch image retrieval method and system based on graph convolutional neural network | |
US20200250436A1 (en) | Video object segmentation by reference-guided mask propagation | |
CN110428428B (en) | Image semantic segmentation method, electronic equipment and readable storage medium | |
CN110163299B (en) | Visual question-answering method based on bottom-up attention mechanism and memory network | |
CN110728219B (en) | 3D face generation method based on multi-column multi-scale graph convolution neural network | |
CN111652124A (en) | Construction method of human behavior recognition model based on graph convolution network | |
CN108734210B (en) | Object detection method based on cross-modal multi-scale feature fusion | |
CN108399380A (en) | Video action detection method based on 3D convolution and Faster RCNN | |
CN107767384A (en) | Image semantic segmentation method based on adversarial training | |
CN111079532A (en) | Video content description method based on text self-encoder | |
CN112819833B (en) | Large scene point cloud semantic segmentation method | |
CN111862274A (en) | Training method for generative adversarial network, and image style transfer method and device | |
CN110751111B (en) | Road extraction method and system based on high-order spatial information global automatic perception | |
CN114049381A (en) | Siamese cross target tracking method fusing multi-layer semantic information | |
CN113096138A (en) | Weak supervision semantic image segmentation method for selective pixel affinity learning | |
CN114048822A (en) | Attention mechanism feature fusion segmentation method for image | |
CN115222998B (en) | Image classification method | |
CN116664719A (en) | Image redrawing model training method, image redrawing method and device | |
CN112070040A (en) | Text line detection method for video subtitles | |
CN117237756A (en) | Method for training target segmentation model, target segmentation method and related device | |
CN109658508B (en) | Multi-scale detail fusion terrain synthesis method | |
CN117576248B (en) | Pose-guided image generation method and device | |
CN117541668A (en) | Virtual character generation method, device, equipment and storage medium | |
CN113240033B (en) | Visual relation detection method and device based on scene graph high-order semantic structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2021-04-14
Address after: Building 1, Wujiang Taihu New City Science and Technology Innovation Park, No. 18 Suzhou River Road, Wujiang District, Suzhou City, Jiangsu Province, 215000
Patentee after: Jiangsu Yiyou Huiyun Software Co.,Ltd.
Address before: No. 99 South Third Ring Road, Changshu City, Suzhou, Jiangsu, 215500
Patentee before: CHANGSHU INSTITUTE OF TECHNOLOGY