CN111368900A - Image target object identification method - Google Patents

Image target object identification method

Info

Publication number
CN111368900A
CN111368900A
Authority
CN
China
Prior art keywords
image
support vector machine
bat
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010130406.7A
Other languages
Chinese (zh)
Inventor
徐波
陈非儿
彭东亚
梁红
樊慧珍
荣彩
叶权锋
郭瑞超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202010130406.7A priority Critical patent/CN111368900A/en
Publication of CN111368900A publication Critical patent/CN111368900A/en
Pending legal-status Critical Current

Classifications

    • G06F18/2411
    • G06F18/214
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computers simulating life
    • G06N3/006Artificial life, i.e. computers simulating life based on simulated virtual individual or collective life forms, e.g. single "avatar", social simulations, virtual worlds or particle swarm optimisation
    • G06N3/045

Abstract

The invention relates to the technical field of image recognition and discloses an image target object recognition method, which comprises the following steps: a camera collects images to be recognized of different target object types, and the target object features of the different images are marked to form a set of images to be recognized; a support vector machine is established and its network parameters are optimized with a bat-harmony hybrid algorithm; the marked image samples to be recognized are divided into a training set and a test set, and the support vector machine is trained; a plurality of convolutional neural networks, each corresponding to a different target object, are established and trained; the camera then collects recognition images in real time, the images are input into the trained support vector machine for classification, and each classified image is input into the corresponding convolutional neural network to recognize the target object. The method exploits the support vector machine's strong small-sample learning ability and the convolutional neural network's strength at predicting pictures, improving the accuracy and efficiency of prediction.

Description

Image target object identification method
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to an image target object recognition method.
Background
Robot learning algorithms include the Support Vector Machine (SVM), the convolutional neural network, and others. The SVM is a small-sample learning method with a solid theoretical foundation. It essentially does not rely on probability measures or the law of large numbers, and therefore differs from existing statistical methods. In essence, it avoids the traditional path from induction to deduction and realizes efficient "transductive inference" from the training samples to the prediction samples, which greatly simplifies common classification and regression problems. For prediction on a large image data set, however, the SVM is clearly inadequate, so the convolutional neural network is generally used for picture recognition, because its multilayer structure (e.g. the convolution and pooling layers of a convolutional neural network) automatically extracts and classifies image features.
However, when recognizing target objects in pictures, if a plurality of target objects must be recognized, the picture data for each target object is huge, and using only one convolutional neural network for recognition reduces the accuracy and efficiency of picture recognition.
Disclosure of Invention
The invention aims to provide an image target object identification method that overcomes the following defect: the image data for each target object is huge, and the accuracy and efficiency of image recognition are reduced if only one convolutional neural network is used for recognition.
In order to achieve the above object, the present invention provides an image target recognition method, including:
s1, the camera collects images to be recognized of different target object types, marks the target object characteristics of the different images to be recognized and forms an image set to be recognized;
s2, establishing a support vector machine, and optimizing the network parameters of the support vector machine by utilizing a bat harmony mixing algorithm to form the support vector machine with the optimal network parameters;
s3, dividing the marked image sample to be identified into a training set and a testing set, inputting the training set into a support vector machine with optimal network parameters to train the feature data of the target object, and testing the trained support vector machine by using the testing set to obtain the support vector machine capable of classifying the features of the target object;
s4, establishing a plurality of convolutional neural networks respectively corresponding to different target objects, and inputting the images to be recognized of the different target object types into the corresponding convolutional neural networks for training, to obtain a plurality of convolutional neural networks each capable of predicting its own target object type;
and S5, acquiring real-time recognition images by the camera in real time, inputting the real-time recognition images into a trained support vector machine for classification, and then inputting the classified real-time recognition images into corresponding convolutional neural network recognition target objects.
Preferably, in the above technical solution, step S2 specifically includes:
S201, setting the parameters of the support vector machine to be optimized: the penalty parameter C, the RBF kernel parameter δ, and the loss function parameter ε;
S202, initializing the harmony population size HMS, the probability HMCR of learning from the harmony memory, the pitch adjustment rate PAR, the distance bandwidth bw, the maximum iteration number J, and the harmony memory HM;
S203, bat population initialization: initializing the population size N, the maximum pulse loudness A0, the maximum pulse rate R0, the search lower bound xmin, the search upper bound xmax, the loudness attenuation coefficient α, the search frequency enhancement coefficient γ, and the maximum iteration number Imax;
S204, updating the bat population, and updating each bat in the bat population according to the following steps:
generating a new bat according to formula (1):
BatX(i) = gbest * (N(0,1) + 1), if r(i) < rand    (1)
where N(0,1) is a Gaussian distribution with mean 0 and variance 1, r(i) is the pulse rate of the i-th bat, rand is a random number uniformly distributed in [0,1], and gbest is the current optimal value;
performing out-of-bounds handling on the newly generated bat;
calculating the fitness value f(BatX(i)) of the bat;
if the fitness of the newly generated bat is less than that of gbest, updating gbest with the current bat;
updating the pulse loudness A and pulse rate r according to equations (2) and (3):
Ai^(t+1) = α·Ai^t    (2)
ri^(t+1) = R0[1 - exp(-γt)]    (3)
S205, updating the harmony population and gbest: randomly selecting one harmony from the harmony memory, applying variation to it according to formula (4), and calculating its fitness;
if the fitness of the newly generated harmony is smaller than that of gbest, updating gbest with this harmony;
S206, repeating S204-S205; when the preset search precision is met or the maximum number of iterations is reached, going to S207, otherwise returning to S204 to continue the calculation;
S207, outputting gbest to obtain the optimized parameters of the support vector machine so as to establish the optimal support vector machine.
Preferably, in the above technical solution, the convolutional neural network is a multilayer perceptron convolutional neural network model based on the NIN (Network in Network) structure.
Preferably, in the above technical solution, the method further comprises a correcting step, specifically comprising:
establishing a coordinate system and inputting the acquired image to be recognized into the coordinate system; checking whether the edges of the image to be recognized are parallel to the coordinate axes of the coordinate system; if not, obtaining the included angle between the image edge and the coordinate axis, determining the rotation angle from this included angle, and rotating the image to be recognized by that angle so that the image is corrected.
Compared with the prior art, the invention has the following beneficial effects:
1. According to the image target object identification method, the support vector machine first classifies the target objects in the pictures, and the classified pictures are then input into the corresponding target object recognition network models for recognition. This makes full use of the support vector machine's strong small-sample learning ability and the convolutional neural network's strength at predicting pictures, and improves the accuracy and efficiency of prediction.
2. The invention uses a support vector machine optimized by the bat-harmony hybrid algorithm, performing collaborative optimization with the local optimizing capability of the bat algorithm and the global optimizing capability of the harmony search algorithm, thereby improving the performance of the algorithm.
Drawings
FIG. 1 is a flow chart of an image object recognition method of the present invention.
FIG. 2 is a flow chart of the bat-harmony hybrid algorithm optimization of the present invention.
Fig. 3 is a schematic diagram of the NIN network structure of the present invention.
Fig. 4 is a schematic diagram of the recognition effect of the image object recognition method of the present invention.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
As shown in fig. 1, an image target recognition method in this embodiment includes:
Step S1, the camera collects images to be recognized of different target object types, and the target object features of the different images to be recognized are marked (i.e., feature extraction) to form a set of images to be recognized. For example, pictures of different objects such as trees, dogs, and cars are selected, and the target object features in each image are marked.
Step S2, a support vector machine is established, and its network parameters are optimized with the bat-harmony hybrid algorithm to form a support vector machine with optimal network parameters. Taking the image to be recognized as the input x and the recognition result as the output y, y = f(x) is a nonlinear relation, and the inverse of y = f(x) can be obtained with a support vector machine. In this embodiment, to simplify the operation, the SVM takes the image to be recognized as input and the marked target object features as output.
Step S3, the marked image samples to be recognized are divided into a training set and a test set, the training set is input into the support vector machine with optimal network parameters to train on the target object feature data, and the trained support vector machine is tested with the test set to obtain a support vector machine capable of classifying the target object features.
Step S4, a plurality of convolutional neural networks respectively corresponding to the different target objects are established, and the images to be recognized of the different target object types are input into the corresponding convolutional neural networks for training, to obtain a plurality of convolutional neural networks each capable of predicting its own target object type.
Step S5, the camera collects real-time recognition images, the real-time recognition images are input into the trained support vector machine for classification, and each classified real-time recognition image is then input into the corresponding convolutional neural network to recognize the target object, with the recognition effect shown in fig. 4. A minimal sketch of this two-stage pipeline is given below.
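As an illustration of steps S3-S5, the following minimal sketch shows the two-stage flow: an SVM assigns an incoming image to a coarse target-object class, and the image is then passed to the recognition model trained for that class. It assumes scikit-learn's SVC for the classifier; the feature extraction, the stand-in random data, and names such as extract_features and class_models are illustrative placeholders, not taken from the patent.

```python
# Minimal sketch of the two-stage recognition pipeline (steps S3-S5).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(image):
    """Placeholder feature extraction: flatten the image (illustrative only)."""
    return np.asarray(image, dtype=np.float32).ravel()

# X: feature vectors of the marked images, y: coarse class labels (e.g. tree / dog / car).
X = np.random.rand(300, 32 * 32 * 3)          # stand-in for the marked image set
y = np.random.randint(0, 3, size=300)         # stand-in coarse labels

# S3: split into training and test sets and train the SVM classifier.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
svm = SVC(C=10.0, kernel="rbf", gamma=0.05)   # C and gamma would come from the optimizer in S2
svm.fit(X_train, y_train)
print("SVM test accuracy:", svm.score(X_test, y_test))

# S5: route a real-time image to the recognition model trained for its predicted class.
def recognize(image, class_models):
    """class_models: dict mapping coarse class id -> trained per-class recognition model."""
    coarse = int(svm.predict([extract_features(image)])[0])
    return class_models[coarse].predict(image)  # the per-class CNN does the final recognition
```

In practice, C and gamma would be the parameters produced by the bat-harmony optimization of step S2, and each entry of class_models would be one of the per-class convolutional neural networks trained in step S4.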
Step S2 adopts the bat-harmony hybrid algorithm; in this embodiment, the harmony search algorithm and the bat algorithm are introduced first:
the harmony search algorithm simulates the music creation process and adjusts the pitch within a range of values each time to achieve the optimum. The main parameters are the harmonic population size (HMS), the probability of learning the harmonic library (HMCR), the Pitch Adjustment Rate (PAR), the distance bandwidth (bw) and the maximum iteration number (J). The steps of the HS algorithm are as follows:
step1, initializing the parameters;
step2, initializing and memorizing the voice library;
step3: impulse generates a new harmony. The harmony update process for each dimension is as follows:
step3.1 at [0,1]]Generates a random number rand1, randomly selects a tone in the harmony pool if rand1 is smaller than HMCR, otherwise at [ x ]min,xmax]Intervening a random number as a new tone, xmaxAnd xminRespectively, harmonic upper and lower limits.
Step3.2 if from the harmony pool, at [0,1]]Generates a random number rand2, if rand2 is less than PAR, performs fine tuning,is the value of the ith dimension of the newly generated sum sound, the fine tuning formula is as follows:
step4 update and sound memory library. And if the fitness of the new harmony is better than the fitness of the worst harmony in the harmony pool, replacing the worst harmony in the harmony pool with the searched harmony.
Step5 repeat Step3 to Step4 until the maximum number of iterations is reached.
The bat algorithm is an intelligent optimization algorithm that simulates bats searching for prey; it judges the quality of the current position according to its fitness, and in each iteration the bats move, with random perturbation, toward the position with the best individual fitness. The steps of the bat algorithm are as follows (a minimal Python sketch is given after the steps):
Step1: population initialization: initialize the population size N, the maximum pulse loudness A0, the maximum pulse rate R0, the search lower bound xmin, the search upper bound xmax, the loudness attenuation coefficient α, the search frequency enhancement coefficient γ, the maximum iteration number Imax, and the position xi of each bat.
Step2: calculate the fitness f(x) of each bat, where x = (x1, …, xd)^T; evaluate the fitness value of each bat according to the fitness function to find the current optimal solution x*.
Step3: update the search pulse frequency, velocity, and position of each bat. The pulse frequency is updated as
pi = pmin + (pmax - pmin)·β    (2)
where β is a random number and x* is the position of the current optimal bat.
Step4: generate a random number rand; if rand > ri, generate a new bat by randomly perturbing the optimal bat position in the current population.
Step5: generate a random number rand; if rand > Ai and the fitness of the newly generated bat is better, update the current bat position, and update the loudness and pulse rate as follows:
Ai^(t+1) = α·Ai^t    (5)
ri^(t+1) = R0[1 - exp(-γt)]    (6)
Step6: evaluate the fitness values of all bats to find the optimal solution.
Step7: repeat Step2 to Step5 until the preset optimal-solution threshold is met or the maximum number of iterations is reached.
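For concreteness, the two component algorithms are sketched below as minimal Python routines that minimize a generic objective function f. The parameter names follow the text (HMS, HMCR, PAR, bw, J for harmony search; N, A0, R0, α, γ, Imax for the bat algorithm); the pitch-adjustment step ± rand·bw and the clipping to [xmin, xmax] are standard choices assumed here.

```python
import random

def harmony_search(f, dim, x_min, x_max, HMS=20, HMCR=0.9, PAR=0.3, bw=0.05, J=1000):
    """Minimal harmony search minimizing f over [x_min, x_max]^dim."""
    # Step1-2: initialize parameters and the harmony memory HM.
    HM = [[random.uniform(x_min, x_max) for _ in range(dim)] for _ in range(HMS)]
    fitness = [f(h) for h in HM]
    for _ in range(J):
        # Step3: improvise a new harmony dimension by dimension.
        new = []
        for i in range(dim):
            if random.random() < HMCR:                  # learn from the harmony memory
                tone = random.choice(HM)[i]
                if random.random() < PAR:               # pitch adjustment
                    tone += random.uniform(-1, 1) * bw
            else:                                       # random new tone
                tone = random.uniform(x_min, x_max)
            new.append(min(max(tone, x_min), x_max))
        # Step4: replace the worst harmony if the new one is better.
        worst = max(range(HMS), key=lambda k: fitness[k])
        f_new = f(new)
        if f_new < fitness[worst]:
            HM[worst], fitness[worst] = new, f_new
    best = min(range(HMS), key=lambda k: fitness[k])
    return HM[best], fitness[best]

# Example: minimize the sphere function in 2 dimensions.
print(harmony_search(lambda x: sum(v * v for v in x), dim=2, x_min=-5, x_max=5))
```

The bat algorithm sketch below follows the standard formulation for the velocity and position updates, the local random walk of Step4, and the acceptance test rand < Ai of Step5, since the text references but does not reproduce those formulas; the frequency bounds p_min, p_max and the 0.01 walk step are illustrative assumptions.

```python
import math
import random

def bat_algorithm(f, dim, x_min, x_max, N=20, A0=1.0, R0=0.5,
                  alpha=0.9, gamma=0.9, I_max=500, p_min=0.0, p_max=2.0):
    """Minimal bat algorithm minimizing f over [x_min, x_max]^dim."""
    # Step1: initialize positions, velocities, loudness A, and pulse rate r.
    X = [[random.uniform(x_min, x_max) for _ in range(dim)] for _ in range(N)]
    V = [[0.0] * dim for _ in range(N)]
    A = [A0] * N
    r = [R0] * N
    fit = [f(x) for x in X]
    best = min(range(N), key=lambda k: fit[k])
    x_star, f_star = X[best][:], fit[best]
    for t in range(1, I_max + 1):
        for i in range(N):
            # Step3: update pulse frequency, velocity and position.
            beta = random.random()
            p = p_min + (p_max - p_min) * beta              # equation (2)
            V[i] = [V[i][d] + (X[i][d] - x_star[d]) * p for d in range(dim)]
            cand = [min(max(X[i][d] + V[i][d], x_min), x_max) for d in range(dim)]
            # Step4: local random walk around the current best bat.
            if random.random() > r[i]:
                cand = [min(max(x_star[d] + 0.01 * random.gauss(0, 1), x_min), x_max)
                        for d in range(dim)]
            f_cand = f(cand)
            # Step5: accept the new bat and update loudness / pulse rate.
            if random.random() < A[i] and f_cand < fit[i]:
                X[i], fit[i] = cand, f_cand
                A[i] *= alpha                               # Ai^(t+1) = alpha * Ai^t, eq. (5)
                r[i] = R0 * (1 - math.exp(-gamma * t))      # equation (6)
            if fit[i] < f_star:
                x_star, f_star = X[i][:], fit[i]
    return x_star, f_star

print(bat_algorithm(lambda x: sum(v * v for v in x), dim=2, x_min=-5, x_max=5))
```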
The invention introduces the bat algorithm into the harmony search algorithm for collaborative optimization. The main operation is to initialize a bat population BatX that stores the global optimum and local solutions generated near the harmony gbest. During optimization, a local solution BatX(i) is generated by applying Gaussian disturbance to gbest, and the position information BatX(i) of the bat individual is then evaluated: if the fitness value of the bat individual is better than that of gbest and its loudness A(i) is greater than a randomly generated loudness, the worst harmony in the harmony memory is replaced with the bat and gbest is updated. The bat generation formula is given in formula (7):
BatX(i) = gbest * (N(0,1) + 1), if r(i) < rand    (7)
where N(0,1) is a Gaussian distribution with mean 0 and variance 1, r(i) is the pulse rate of the i-th bat, rand is a random number uniformly distributed in [0,1], and gbest is the current optimal value.
Therefore, as shown in fig. 2, step S2 specifically includes:
S201, setting the parameters of the support vector machine to be optimized: the penalty parameter C with range [1, 100], the RBF kernel parameter δ with range [0.1, 100], and the loss function parameter ε with range [0.001, 1].
S202, initializing the harmony population size HMS, the probability HMCR of learning from the harmony memory, the pitch adjustment rate PAR, the distance bandwidth bw, the maximum iteration number J, and the harmony memory HM.
S203, bat population initialization: initializing the population size N, the maximum pulse loudness A0, the maximum pulse rate R0, the search lower bound xmin, the search upper bound xmax, the loudness attenuation coefficient α, the search frequency enhancement coefficient γ, and the maximum iteration number Imax.
S204, updating the bat population, and updating each bat in the bat population according to the following steps:
generating a new bat according to equation (7): BatX(i) = gbest * (N(0,1) + 1), if r(i) < rand;
performing out-of-bounds handling on the newly generated bat;
calculating the fitness value f(BatX(i)) of the bat;
if the fitness of the newly generated bat is less than that of gbest, updating gbest with the current bat;
updating the pulse loudness A and pulse rate r according to equations (5) and (6):
Ai^(t+1) = α·Ai^t    (5)
ri^(t+1) = R0[1 - exp(-γt)]    (6)
S205, updating the harmony population and gbest: randomly selecting one harmony from the harmony memory, applying variation to it, and calculating its fitness;
if the fitness of the newly generated harmony is smaller than that of gbest, updating gbest with this harmony;
S206, repeating S204-S205; when the preset search precision is met or the maximum number of iterations is reached, going to S207, otherwise returning to S204 to continue the calculation;
S207, outputting gbest and selecting the optimal support vector machine model and its parameters, including the training parameters (the penalty factor C, the radial basis kernel function parameter, and so on), the model type, the kernel function type, and the loss function and its parameters, to establish the optimal support vector machine.
Thus, the support vector machine is optimized with the bat-harmony hybrid algorithm: the local optimizing capability of the bat algorithm and the global optimizing capability of the harmony search algorithm are used for collaborative optimization, which improves the performance of the algorithm. A minimal sketch of this parameter-tuning loop is given below.
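Putting the two components together, the following sketch illustrates how the S201-S207 loop could tune the SVM parameters: each candidate is a parameter vector (C, δ), its fitness is the cross-validated error of an SVM trained with those values, and the bat population perturbs gbest with Gaussian noise as in formula (7). This is a sketch under assumptions: it uses scikit-learn's SVC and cross_val_score, omits the ε parameter (which applies to ε-SVR rather than classification), and simplifies the harmony variation and loudness/rate updates.

```python
import random
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# S201: C in [1, 100], RBF gamma (delta) in [0.1, 100].
BOUNDS = [(1.0, 100.0), (0.1, 100.0)]

def fitness(params, X, y):
    """Fitness = cross-validated classification error of an RBF-kernel SVM."""
    C, gamma = params
    return 1.0 - cross_val_score(SVC(C=C, kernel="rbf", gamma=gamma), X, y, cv=3).mean()

def clip(params):
    """Out-of-bounds handling: clip each parameter back into its allowed range."""
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(params, BOUNDS)]

def hybrid_optimize(X, y, HMS=10, PAR=0.3, bw=0.5, N=10, R0=0.5, iters=30):
    # S202/S203: initialize the harmony memory HM and the bat population state.
    HM = [clip([random.uniform(lo, hi) for lo, hi in BOUNDS]) for _ in range(HMS)]
    HM_fit = [fitness(h, X, y) for h in HM]
    g_idx = min(range(HMS), key=lambda k: HM_fit[k])
    gbest, g_fit = HM[g_idx][:], HM_fit[g_idx]
    r = [R0] * N                                        # pulse rates of the bats
    for _ in range(iters):
        # S204: bat update, Gaussian perturbation of gbest as in formula (7).
        for i in range(N):
            if r[i] < random.random():
                bat = clip([g * (random.gauss(0, 1) + 1) for g in gbest])
                b_fit = fitness(bat, X, y)
                if b_fit < g_fit:
                    gbest, g_fit = bat, b_fit
        # S205: mutate one randomly chosen harmony and update the memory / gbest.
        h = random.choice(HM)[:]
        if random.random() < PAR:
            d = random.randrange(len(h))
            h[d] += random.uniform(-1, 1) * bw
        h = clip(h)
        h_fit = fitness(h, X, y)
        worst = max(range(HMS), key=lambda k: HM_fit[k])
        if h_fit < HM_fit[worst]:
            HM[worst], HM_fit[worst] = h, h_fit
        if h_fit < g_fit:
            gbest, g_fit = h, h_fit
    # S207: output gbest as the optimized SVM parameters.
    return {"C": gbest[0], "gamma": gbest[1]}
```

An SVC built with the returned C and gamma would then be trained on the full training set as in step S3.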
Further, in this embodiment, the convolutional neural network is a multilayer perceptron convolutional neural network model based on the NIN (Network in Network) structure, and the deep learning model is built on the NIN structure, whose overall architecture consists of several multilayer perceptron convolution layers (Mlpconv) and a fully connected layer. Each Mlpconv layer applies a multilayer-perceptron micro-network to each local receptive field as a convolution operation and consists of one convolution layer and two perceptron layers; a fully connected layer is added after the last Mlpconv layer, and the recognition output for the picture target object is produced by linear regression. As shown in fig. 3, the deep learning model network architecture for accurate recognition and detection of picture target objects provided by this embodiment of the invention includes the following layers (a minimal PyTorch-style sketch is given after the layer list):
An input layer: the input target picture is a three-channel true-color image of size 32x32x3.
First Mlpconv layer: 16 convolution kernels of size 3x3x3; the perceptron layers are implemented by 1x1 convolutions, namely 1x1x16 and 1x1x32; finally a pooling layer performs dimensionality reduction.
Second Mlpconv layer: 32 convolution kernels of size 3x3x16; the perceptron layers are implemented by 1x1 convolutions, namely 1x1x32 and 1x1x32; finally a pooling layer performs dimensionality reduction.
Third Mlpconv layer: 64 convolution kernels of size 3x3x32; the perceptron layers are implemented by 1x1 convolutions, namely 1x1x64 and 1x1x64; finally a pooling layer performs dimensionality reduction.
Fourth Mlpconv layer: 128 convolution kernels of size 3x3x64; the perceptron layers are implemented by 1x1 convolutions, namely 1x1x128 and 1x1x128; finally a pooling layer performs dimensionality reduction.
Fully connected layer: the feature maps output by the convolution and pooling layers carry higher-level features; the two-dimensional feature map is mapped to a one-dimensional space through a fully connected layer with 20x1 parameters, and the picture target object detection value is output by linear regression; a Leaky-ReLU function is used for nonlinear activation, and L1 regularization is applied.
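A minimal PyTorch-style sketch of the Mlpconv stack described above. The channel sizes roughly follow the layer list; the padding, the 2x2 max pooling, the LeakyReLU slope, the flatten-plus-linear head with 20 outputs, and the resolution of the slight channel mismatch between consecutive layers are assumptions made for the sketch (L1 regularization would be added through the training loss and is omitted here).

```python
import torch
import torch.nn as nn

def mlpconv(in_ch, mid_ch, out_ch):
    """One Mlpconv block: a 3x3 convolution followed by two 1x1 'perceptron' convolutions."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1), nn.LeakyReLU(0.1),
        nn.Conv2d(mid_ch, mid_ch, kernel_size=1), nn.LeakyReLU(0.1),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1), nn.LeakyReLU(0.1),
        nn.MaxPool2d(2),                          # pooling-layer dimensionality reduction
    )

class NINRecognizer(nn.Module):
    def __init__(self, num_outputs=20):          # 20 follows the "20x1" fully connected parameters
        super().__init__()
        self.features = nn.Sequential(
            mlpconv(3, 16, 32),                   # first Mlpconv layer (16 kernels of 3x3x3)
            mlpconv(32, 32, 32),                  # second Mlpconv layer
            mlpconv(32, 64, 64),                  # third Mlpconv layer
            mlpconv(64, 128, 128),                # fourth Mlpconv layer
        )
        self.fc = nn.Linear(128 * 2 * 2, num_outputs)  # 32x32 input halved four times -> 2x2

    def forward(self, x):
        x = self.features(x)
        return self.fc(torch.flatten(x, 1))      # linear-regression head for the detection value

model = NINRecognizer()
print(model(torch.randn(1, 3, 32, 32)).shape)    # torch.Size([1, 20])
```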
Further, when the obtained picture is input into the support vector machine or the convolutional neural network, the method also includes a correction step (a minimal sketch is given after the description), specifically as follows:
establishing a coordinate system and inputting the acquired image to be recognized into the coordinate system; checking whether the edges of the image to be recognized are parallel to the coordinate axes of the coordinate system; if not, obtaining the included angle between the image edge and the coordinate axis, determining the rotation angle from this included angle, and rotating the image to be recognized by that angle so that the image is corrected.
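A minimal OpenCV sketch of this correction step. The patent only specifies measuring the angle between the image edge and the coordinate axis and rotating by that angle; how the edge is detected is not stated, so the use of Canny edges and cv2.minAreaRect below is an illustrative assumption.

```python
import cv2

def correct_image(image):
    """Rotate the image so that its content edges become parallel to the coordinate axes.
    The angle-measurement strategy (minimum-area rectangle over Canny edge points)
    is an illustrative assumption, not taken from the patent."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    points = cv2.findNonZero(edges)
    if points is None:
        return image                                   # nothing to correct
    angle = cv2.minAreaRect(points)[-1]                # angle between edge and coordinate axis
    if angle > 45:                                     # normalize to the smallest rotation
        angle -= 90
    if abs(angle) < 1e-3:                              # already parallel to the axes
        return image
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)

# corrected = correct_image(cv2.imread("sample.jpg"))
```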
In summary, in the image target object identification method of the invention, the support vector machine classifies the target objects in the pictures, and the classified pictures are then input into the corresponding target object recognition network models for recognition. This makes full use of the support vector machine's strong small-sample learning ability and the convolutional neural network's strength at predicting pictures, and improves the accuracy and efficiency of prediction.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (4)

1. An image object recognition method, comprising:
s1, the camera collects images to be recognized of different target object types, and marks the target object characteristics of the different images to be recognized to form an image set to be recognized;
s2, establishing a support vector machine, and optimizing the network parameters of the support vector machine by utilizing a bat harmony mixing algorithm to form the support vector machine with the optimal network parameters;
s3, dividing the marked image sample to be identified into a training set and a testing set, inputting the training set into a support vector machine with optimal network parameters to train the feature data of the target object, and testing the trained support vector machine by using the testing set to obtain the support vector machine capable of classifying the features of the target object;
s4, establishing a plurality of convolutional neural networks respectively corresponding to different target objects, and inputting the images to be recognized of the different target object types into the corresponding convolutional neural networks for training, to obtain a plurality of convolutional neural networks each capable of predicting its own target object type;
and S5, acquiring real-time recognition images by the camera in real time, inputting the real-time recognition images into a trained support vector machine for classification, and then inputting the classified real-time recognition images into corresponding convolutional neural network recognition target objects.
2. The image object recognition method according to claim 1, wherein step S2 specifically includes:
S201, setting the parameters of the support vector machine to be optimized: the penalty parameter C, the RBF kernel parameter δ, and the loss function parameter ε;
S202, initializing the harmony population size HMS, the probability HMCR of learning from the harmony memory, the pitch adjustment rate PAR, the distance bandwidth bw, the maximum iteration number J, and the harmony memory HM;
S203, bat population initialization: initializing the population size N, the maximum pulse loudness A0, the maximum pulse rate R0, the search lower bound xmin, the search upper bound xmax, the loudness attenuation coefficient α, the search frequency enhancement coefficient γ, and the maximum iteration number Imax;
S204, updating the bat population, and updating each bat in the bat population according to the following steps:
generating a new bat according to formula (1):
BatX(i) = gbest * (N(0,1) + 1), if r(i) < rand    (1)
where N(0,1) is a Gaussian distribution with mean 0 and variance 1, r(i) is the pulse rate of the i-th bat, rand is a random number uniformly distributed in [0,1], and gbest is the current optimal value;
performing out-of-bounds handling on the newly generated bat;
calculating the fitness value f(BatX(i)) of the bat;
if the fitness of the newly generated bat is less than that of gbest, updating gbest with the current bat;
updating the pulse loudness A and pulse rate r according to equations (2) and (3);
S205, updating the harmony population and gbest: randomly selecting one harmony from the harmony memory, applying variation to it according to formula (4), and calculating its fitness;
if the fitness of the newly generated harmony is smaller than that of gbest, updating gbest with this harmony;
S206, repeating S204-S205; when the preset search precision is met or the maximum number of iterations is reached, going to S207, otherwise returning to S204 to continue the calculation;
S207, outputting gbest to obtain the optimized parameters of the support vector machine so as to establish the optimal support vector machine.
3. The image object recognition method of claim 1, wherein the convolutional neural network is a multilayer perceptron convolutional neural network model based on the NIN network structure.
4. The image object recognition method according to claim 1, further comprising a correction step, specifically comprising:
establishing a coordinate system and inputting the acquired image to be recognized into the coordinate system; checking whether the edges of the image to be recognized are parallel to the coordinate axes of the coordinate system; if not, obtaining the included angle between the image edge and the coordinate axis, determining the rotation angle from this included angle, and rotating the image to be recognized by that angle so that the image is corrected.
CN202010130406.7A 2020-02-28 2020-02-28 Image target object identification method Pending CN111368900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010130406.7A CN111368900A (en) 2020-02-28 2020-02-28 Image target object identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010130406.7A CN111368900A (en) 2020-02-28 2020-02-28 Image target object identification method

Publications (1)

Publication Number Publication Date
CN111368900A true CN111368900A (en) 2020-07-03

Family

ID=71208351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130406.7A Pending CN111368900A (en) 2020-02-28 2020-02-28 Image target object identification method

Country Status (1)

Country Link
CN (1) CN111368900A (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150294192A1 (en) * 2014-04-10 2015-10-15 Disney Enterprises, Inc. Multi-level framework for object detection
EP3327616A1 (en) * 2016-11-29 2018-05-30 Sap Se Object classification in image data using machine learning models
CN108121997A (en) * 2016-11-29 2018-06-05 Sap欧洲公司 Use the object classification in the image data of machine learning model
US20190220692A1 (en) * 2017-07-24 2019-07-18 Yi Tunnel (Beijing) Technology Co., Ltd. Method and apparatus for checkout based on image identification technique of convolutional neural network
CN109426901A (en) * 2017-08-25 2019-03-05 中国电力科学研究院 Long-term power consumption prediction method and device in one kind
CN107784320A (en) * 2017-09-27 2018-03-09 电子科技大学 Radar range profile's target identification method based on convolution SVMs
CN107609601A (en) * 2017-09-28 2018-01-19 北京计算机技术及应用研究所 A kind of ship seakeeping method based on multilayer convolutional neural networks
CN108256555A (en) * 2017-12-21 2018-07-06 北京达佳互联信息技术有限公司 Picture material recognition methods, device and terminal
CN108304785A (en) * 2018-01-16 2018-07-20 桂林电子科技大学 Road traffic sign detection based on self-built neural network and recognition methods
CN108765951A (en) * 2018-06-11 2018-11-06 广东工业大学 Method for identifying traffic status of express way based on bat algorithm support vector machines
CN109711373A (en) * 2018-12-29 2019-05-03 浙江大学 A kind of big data feature selection approach based on improvement bat algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
史亚辉 et al.: "基于协同寻优的改进蝙蝠和声混合算法" (Improved bat-harmony hybrid algorithm based on collaborative optimization), 《广西教育学院学报》 (Journal of Guangxi College of Education) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221954A (en) * 2021-04-15 2021-08-06 长春工业大学 BP (Back propagation) classification algorithm based on improved bat algorithm
CN113716001A (en) * 2021-11-04 2021-11-30 单县多米石墨烯科技有限公司 Underwater robot system based on power supply of graphene electric brush
CN113716001B (en) * 2021-11-04 2022-01-18 单县多米石墨烯科技有限公司 Underwater robot system based on power supply of graphene electric brush


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination