CN110533103A - Lightweight small-object detection method and system - Google Patents

Lightweight small-object detection method and system

Info

Publication number
CN110533103A
CN110533103A (application CN201910815228.9A)
Authority
CN
China
Prior art keywords
image
true
module
neural network
target
Prior art date
Legal status
Granted
Application number
CN201910815228.9A
Other languages
Chinese (zh)
Other versions
CN110533103B (en)
Inventor
秦豪
Current Assignee
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Dilu Technology Co Ltd
Priority to CN201910815228.9A
Publication of CN110533103A
Application granted
Publication of CN110533103B
Legal status: Active
Anticipated expiration


Classifications

    • G06F 18/241 — Pattern recognition; analysing; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architectures; combinations of networks
    • G06N 3/084 — Neural networks; learning methods; backpropagation, e.g. using gradient descent
    • G06V 2201/07 — Indexing scheme relating to image or video recognition or understanding; target detection

Abstract

The invention discloses a lightweight small-object detection method and system comprising the following steps: an acquisition module captures images and transmits them to an image processing module; the image processing module processes the captured images and outputs the processed images as training data; a neural network module is built and fully trained on the training data, and the trained neural network module is saved; the trained neural network module is then used to detect target objects. Beneficial effects of the invention: by optimizing the detection network, the invention greatly reduces the model's parameter count, thereby lowering computation cost and storage requirements; detection is faster and meets real-time requirements, and detection accuracy for small targets is also improved.

Description

Lightweight small-object detection method and system
Technical field
The present invention relates to the technical field of object detection, and in particular to a lightweight small-object detection method and system.
Background technique
In recent years, deep-learning-based object detection has become one of the most active research directions in computer vision, and small-object detection has remained a persistent difficulty for convolutional neural network models. YOLOv3, an improved end-to-end detection model, is notable for its fast image processing: on a GPU it can process an image in about 20 milliseconds, and its small-object detection also improves on YOLOv1 and YOLOv2. However, its detection speed depends heavily on the hardware: processing a single image on a CPU takes several hundred milliseconds. It is also prone to missed and false detections on small objects, so its accuracy is not high enough.
Summary of the invention
This section summarizes some aspects of embodiments of the invention and briefly introduces some preferred embodiments. Some simplifications and omissions may be made to avoid obscuring the purpose of this section, the abstract and the title of the invention; such simplifications and omissions must not be used to limit the scope of the invention.
In view of the above problems, the present invention is proposed.
Therefore, one technical problem solved by the present invention is to provide a lightweight small-object detection method that improves detection speed and accuracy while optimizing the network to reduce its parameter count.
To solve the above technical problems, the invention provides the following technical scheme: a lightweight small-object detection method comprising the following steps: an acquisition module captures images and transmits them to an image processing module; the image processing module processes the captured images and outputs the processed images as training data; a neural network module is built and fully trained on the training data, and the trained neural network module is saved; the trained neural network module is used to detect target objects.
As a preferred embodiment of the lightweight small-object detection method of the present invention: the acquisition module is an image acquisition camera that captures one image every 10 seconds, and acquisition must cover different scenes.
As a preferred embodiment of the lightweight small-object detection method of the present invention: the image processing further comprises the following steps: all captured images are gathered and screened, and repeated or invalid images are deleted; an annotation tool is used to annotate the images, and the annotated images obtained after annotation are saved.
As a preferred embodiment of the lightweight small-object detection method of the present invention: the neural network module builds a convolutional neural network based on ShuffleNetV2, comprising a backbone network and a target detection module. The backbone network comprises split residual convolution layers and down-sampling convolution layers, and extracts features from the training images to obtain deep features and shallow features. The target detection module processes the extracted deep and shallow features, computes the exact center position and size of the target object, and outputs the detection result. The calculation formulas are:

(X, Y) = down × ((x, y) + (dx, dy))

(W, H) = (bw, bh) × e^(dw, dh)

where down is a stride ratio taking the value 4 or 8; dx, dy are offsets; x, y is the position on the feature layer; dw, dh are the outputs of the convolution layer; and (bw, bh) is the width and height of the basic anchor box, whose selection is determined by the size of the target object.
As a preferred embodiment of the lightweight small-object detection method of the present invention: the method further comprises an error detection module, which comprises the following steps.
The true position offset (true_dx, true_dy) and normalized target size (true_dw, true_dh) are calculated from the annotated image:

true_dx = true_x / down − ox,  true_dy = true_y / down − oy

true_dw = log(true_w / ow),  true_dh = log(true_h / oh)

where true_x, true_w are the target's center and size on the original image; down is the stride ratio, taking 4 or 8; ox, oy is the corresponding position on the feature layer; and ow, oh are the dimensions of the corresponding anchor box.
The predicted (dx, dy, dw, dh) is compared with the true (true_dx, true_dy, true_dw, true_dh) and the error is calculated as:

Loss = loss1 + loss2 + loss3 + loss4

where:

loss1 = −true_dx·log(dx) − (1 − true_dx)·log(1 − dx)

loss2 = −true_dy·log(dy) − (1 − true_dy)·log(1 − dy)

loss3 = w·(true_dw − dw)²

loss4 = w·(true_dh − dh)²

w = 2 − true_w·true_h / (image_w·image_h)

where w is the small-object coefficient, an optimization strategy for small objects.
As a preferred embodiment of the lightweight small-object detection method of the present invention: the error detection module further comprises the following steps: according to the computed error value Loss, the backbone parameters are updated by backpropagation so that the output detection results approach the true values; when the network is trained for ten consecutive groups of data without producing better detection results, training is terminated and the trained neural network module parameters are saved.
As a preferred embodiment of the lightweight small-object detection method of the present invention: detecting the target object further comprises the following steps: the acquisition module captures target images; the captured target images are screened for anomalies, incomplete and invalid images are removed, and normal target images are input into the trained neural network module; after the trained neural network module processes a normal target image, the bounding box at the target object's position is obtained.
Another technical problem solved by the present invention is to provide a lightweight small-object detection system comprising a simplified small-object detection network that increases detection speed and accuracy; the above lightweight small-object detection method can be implemented on this system.
To solve the above technical problems, the invention provides the following technical scheme: a lightweight small-object detection system comprising: an acquisition module for capturing image data; an image processing module capable of processing the captured images, screening out images that meet the requirements, and annotating the images; a neural network module capable of processing input images and marking the bounding box at the target object's position; and an error detection module for calculating the error of the detection results and judging whether the neural network module needs to be retrained.
As a preferred embodiment of the lightweight small-object detection system of the present invention: the image processing module includes an annotation tool for annotating the target objects on the images.
As a preferred embodiment of the lightweight small-object detection system of the present invention: the neural network module includes a backbone network for extracting features from images, and a target detection module capable of computing the exact center position and size of the target object and marking it on the image.
Beneficial effects of the invention: by optimizing the detection network, the invention greatly reduces the model's parameter count, thereby lowering computation cost and storage requirements; detection is faster and meets real-time requirements, and accuracy for small targets is likewise improved.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is an overall flow diagram of the lightweight small-object detection method of the first embodiment of the invention;
Fig. 2 is a diagram of the backbone network in the lightweight small-object detection method of the first embodiment of the invention;
Fig. 3 is an overall structural diagram of the lightweight small-object detection system of the second embodiment of the invention during training;
Fig. 4 is an overall structural diagram of the lightweight small-object detection system of the second embodiment of the invention in actual use.
Specific embodiment
To make the above objects, features and advantages of the present invention clearer, specific embodiments of the invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative effort based on these embodiments fall within the protection scope of the invention.
In the following description, numerous specific details are set forth to facilitate a full understanding of the invention, but the invention can also be implemented in ways other than those described here; those skilled in the art can make similar generalizations without departing from its spirit, so the invention is not limited by the specific embodiments disclosed below.
Further, references herein to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic may be included in at least one implementation of the invention. Occurrences of "in one embodiment" in different places in this specification do not all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive with other embodiments.
The present invention is described in detail with reference to the schematic diagrams. When describing the embodiments, for convenience of explanation, sectional views showing the device structure may be partially enlarged out of scale; the diagrams are examples and should not limit the protection scope of the invention here. In addition, the three dimensions of length, width and depth should be included in actual fabrication.
In the description of the invention, it should be noted that orientation or positional terms such as "upper", "lower", "inner" and "outer" are based on the orientations shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention. The terms "first", "second" and "third" are used for description only and should not be understood as indicating or implying relative importance.
Unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" are to be understood broadly: for example, a connection may be fixed, detachable or integral; mechanical, electrical or direct; indirect through an intermediary; or internal between two elements. For those of ordinary skill in the art, the specific meanings of these terms in the invention can be understood according to the specific circumstances.
Embodiment 1
Object detection refers to identifying the target objects contained in an image with a detection algorithm and outputting their positions in the image. Detection algorithms come in two main kinds: the first computes candidate regions and then classifies them with a CNN, as in the R-CNN family of networks; the second outputs localization and classification simultaneously, as in SSD and the YOLO family. The former is more accurate but slower; the latter is better suited to real scenes and more real-time, and is therefore more common in practice. The YOLO family has evolved to YOLOv3, which improves detection speed and precision over the previous two generations, has an optimized size and clear structure, and offers good real-time performance, making it one of the most widely used detection algorithms in engineering. To further optimize its network structure and improve detection accuracy and speed, referring to Fig. 1, this embodiment provides a lightweight small-object detection method comprising the following steps: an acquisition module 100 captures images and transmits them to an image processing module 200; the image processing module 200 processes the captured images and outputs the processed images as training data; a neural network module 300 is built and fully trained on the training data, and the trained neural network module 300 is saved; the trained neural network module 300 is then used to detect target objects.
The acquisition module 100 is an image acquisition camera that captures one image every 10 seconds and must cover different scenes. In this embodiment, the camera can be mounted on an automobile so that images are captured as the automobile drives, yielding rich image content. Specifically, to make the captured image data sufficiently varied, acquisition should cover different scenes: different weather conditions, including sunny, rainy and cloudy days, and different times of day, including morning, noon, afternoon and dusk.
The image processing performed by the image processing module 200 on the captured images further comprises the following steps: all captured images are gathered and screened, and repeated or invalid images are deleted; the annotation tool 201 is used to annotate the images, and the annotated images obtained after annotation are saved.
Specifically, because the acquisition module 100 captures at a short interval, images captured within a short time may be repeated or highly similar, so after gathering all captured images they must be screened and filtered; repeated and invalid images are deleted, and the remaining valid images serve as training data.
Preferably, for the trained neural network module 300 to be reliable, the number of training images should be no fewer than 10,000.
The screened images are annotated with the annotation tool 201, which is the open-source annotation software labelme. It allows detailed annotation of targets: each target object is marked with a box, its relative position in the image is recorded, and the annotation is saved in XML file format to obtain the annotated image.
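The screening step above can be sketched in Python. This is a minimal hypothetical helper, not the patent's implementation: it drops byte-identical duplicate frames (one simple notion of "repeated image") before annotation, assuming frames arrive as named byte buffers.

```python
import hashlib

def screen_images(images):
    """Drop exact-duplicate frames before annotation (hypothetical sketch).

    `images` is a list of (name, raw_bytes) pairs, e.g. frames captured
    every 10 seconds; consecutive frames from a stationary camera are
    often byte-identical and carry no new training information.
    """
    seen = set()
    kept = []
    for name, data in images:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in seen:       # first occurrence: keep it
            seen.add(digest)
            kept.append(name)
    return kept
```

In practice a perceptual hash would also catch near-duplicates, but exact hashing illustrates the filtering step.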
The neural network module 300 is built on ShuffleNetV2 and comprises a backbone network 301 and a target detection module 302; its construction includes the following steps.
Referring to Fig. 2, the backbone network 301 borrows from the lightweight network ShuffleNetV2 and is mainly stacked from two kinds of basic units: the split residual convolution layer and the down-sampling convolution layer. The backbone 301 extracts features from the training images, producing deep feature layers and shallow feature layers. The split residual convolution layer divides the input feature layer into two halves by channel count: one half passes through the convolution branch for further feature extraction, while the other half bypasses the computation. After the convolved half is computed, the two parts are concatenated and the channels are shuffled by index; the shuffling lets the feature information of the groups interact, so feature fusion is better accomplished. For example, suppose the features are channels 1 to 6: the split yields one group {1, 2, 3} and another group {4, 5, 6}, with {4, 5, 6} going through the convolution to become {4′, 5′, 6′}. After the shuffle the channel order becomes 1, 4′, 2, 5′, 3, 6′, so the next split groups {1, 4′, 2} and {5′, 3, 6′}.
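The channel-split-and-shuffle unit can be sketched with NumPy. This is an illustrative stand-in, not the patent's code: `conv` is a placeholder for the convolution branch, and the shuffle is the standard two-group channel interleave of ShuffleNetV2.

```python
import numpy as np

def split_residual_block(x, conv=lambda t: t):
    """Channel-split residual unit in the ShuffleNetV2 style (sketch).

    x: feature map of shape (C, H, W) with C even. Half the channels go
    through `conv` (stand-in for the convolution branch); the other half
    pass through untouched. The two halves are concatenated and the
    channels interleaved (channel shuffle) so the next split mixes groups.
    """
    c = x.shape[0] // 2
    passive, active = x[:c], x[c:]
    active = conv(active)                  # only half the channels are computed on
    out = np.concatenate([passive, active], axis=0)
    # channel shuffle with 2 groups: (2, c, H, W) -> transpose -> interleave
    out = out.reshape(2, c, *x.shape[1:]).transpose(1, 0, 2, 3).reshape(x.shape)
    return out
```

With channels 1–6 and a toy `conv`, the output order is exactly the interleaving described above.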
The purpose of the down-sampling convolution layer is to halve the spatial size of the input feature layer and double the number of features. It feeds the input feature layer into two groups of convolutional feature-extraction layers; the feature count per group is unchanged after computation, but because there are two groups, the total feature count doubles, while the convolution halves the feature layer's spatial size. Finally the channels are shuffled into a new feature layer. For example, with 10 feature channels of size 64×64, each convolutional extraction branch outputs 10 channels of size 32×32; since there are two branches, the recombined output has 20 channels of size 32×32, achieving the goal of a smaller feature size and a larger number of feature channels.
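The shape arithmetic of the down-sampling unit can be checked with a short NumPy sketch. This is illustrative only: stride-2 average pooling stands in for the stride-2 convolutions of the two branches (an assumption for brevity), but the channel and size bookkeeping matches the 10@64×64 → 20@32×32 example above.

```python
import numpy as np

def downsample_block(x):
    """Down-sampling unit (sketch): both branches see the full input, each
    halves the spatial size; concatenating them doubles the channel count.

    x: (C, H, W) with even H and W. Stride-2 average pooling is used here
    as a stand-in for each branch's stride-2 convolution.
    """
    c, h, w = x.shape
    pooled = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))  # stride 2
    branch_a, branch_b = pooled, pooled.copy()   # two parallel branches
    return np.concatenate([branch_a, branch_b], axis=0)  # (2C, H/2, W/2)
```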
The backbone 301 is stacked from split residual convolution layers and down-sampling convolution layers; the repeated stacking of the two enriches the features. It should be understood that the backbone's main role is feature extraction, yielding deep and shallow features. Specifically, for a 640×640 image, feature extraction by the backbone 301 produces 40×40 deep features and 80×80 shallow features; the deeper 20×20 feature layer is discarded, because this embodiment mainly detects small objects rather than large ones.
The target detection module 302 operates mainly on the feature layers, handling deep and shallow features identically. Specifically, the module 302 comprises several convolution layers. Taking the shallow features as an example, the module 302 first applies a convolution over the shallow features to obtain refined information at each position of the 80×80 shallow feature layer, including the offset of the target center and the size of the target box. From the offset and the position on the feature layer, the exact position of the target object's center can be computed as:

(X, Y) = down × ((x, y) + (dx, dy))

where down is a stride ratio, dx, dy are offsets, and x, y is the position on the feature layer. For shallow features down is 8, obtained by dividing the original image size 640 by the feature map size 80; for deep features down is 16.
The size of the target object is computed as follows: each element of the feature layer corresponds to 3 basic anchor boxes, of sizes (0.5, 1), (1, 1.5) and (2, 3). From the output of the convolution layer and the corresponding anchor box, the size of the target object can be determined by:

(W, H) = (bw, bh) × e^(dw, dh)

where dw, dh are the outputs of the convolution layer and (bw, bh) is the width and height of the anchor box; the anchor box is selected according to the size of the target object, choosing the one closest to it in size.
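The two decoding formulas above can be worked through in a small Python helper. The function name and argument names are illustrative, not from the patent; the arithmetic follows the formulas directly.

```python
import math

def decode_box(x, y, dx, dy, dw, dh, bw, bh, down):
    """Decode one detection from head outputs (per the formulas above).

    (x, y): cell position on the feature layer; (dx, dy): predicted offsets
    in [0, 1]; (dw, dh): raw size regressions; (bw, bh): the matched anchor
    box; down: feature stride (8 for the 80x80 layer, 16 for 40x40).
    """
    cx = down * (x + dx)            # (X, Y) = down * ((x, y) + (dx, dy))
    cy = down * (y + dy)
    w = bw * math.exp(dw)           # (W, H) = (bw, bh) * e^(dw, dh)
    h = bh * math.exp(dh)
    return cx, cy, w, h
```

For example, cell (10, 20) on the shallow layer with offsets (0.5, 0.25) decodes to the image-space center (84, 162).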
It can be understood that once the neural network module 300 is built, feeding a training image into it allows the exact center position and size of the target object to be computed and the detection result to be output.
The neural network module 300 further includes an error detection module 400. Before the built module 300 is put into actual use, its detection accuracy must be improved through training; the error detection module 400 judges the detection error of the module 300 and whether it has been trained sufficiently to be used in actual operation. The use of the error detection module 400 comprises the following steps.
A training image passes through the backbone 301 of the neural network module 300 for feature extraction, yielding the target position offset (dx, dy) and normalized target size (dw, dh) on the feature layer. From the corresponding image annotated with the annotation tool 201, the true position offset (true_dx, true_dy) and true normalized size (true_dw, true_dh) are computed as:

true_dx = true_x / down − ox,  true_dy = true_y / down − oy

true_dw = log(true_w / ow),  true_dh = log(true_h / oh)

where true_x, true_w are the target's center and size on the original image, down is the stride ratio, ox, oy is the corresponding position on the feature layer, and ow, oh are the dimensions of the corresponding anchor box.
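The target-encoding step is the inverse of the decoding formulas; since the patent's own formula image is not reproduced in this text, the following sketch is a reconstruction under that assumption, with illustrative names.

```python
import math

def encode_target(true_x, true_y, true_w, true_h, ox, oy, ow, oh, down):
    """Build regression targets from an annotated box (sketch, reconstructed
    as the inverse of the decoding formulas).

    (true_x, true_y, true_w, true_h): annotated center and size on the
    original image; (ox, oy): the responsible cell on the feature layer;
    (ow, oh): the matched anchor box; down: feature stride.
    """
    true_dx = true_x / down - ox            # fractional offset within the cell
    true_dy = true_y / down - oy
    true_dw = math.log(true_w / ow)         # inverse of w = ow * e^dw
    true_dh = math.log(true_h / oh)
    return true_dx, true_dy, true_dw, true_dh
```

Encoding the box decoded in the earlier example recovers the original offsets, confirming the round-trip.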
The (dx, dy, dw, dh) obtained from the neural network module 300 is compared with the true value (true_dx, true_dy, true_dw, true_dh), and the error value is computed as:

Loss = loss1 + loss2 + loss3 + loss4

where Loss is the error value, comprising the object-position loss functions loss1 and loss2 and the object-size regression loss functions loss3 and loss4; specifically:

loss1 = −true_dx·log(dx) − (1 − true_dx)·log(1 − dx)

loss2 = −true_dy·log(dy) − (1 − true_dy)·log(1 − dy)

loss3 = w·(true_dw − dw)²

loss4 = w·(true_dh − dh)²

w is the small-object coefficient, an optimization strategy for small objects, computed as:

w = 2 − true_w·true_h / (image_w·image_h)

where true_w and true_h are the width and height of the target object, and image_w and image_h are the width and height of the original image.
According to the computed error value Loss, the parameters of the network in the neural network module 300 are updated by backpropagation so that its output detection results approach the true values. When the network is trained for ten consecutive groups of data without producing better detection results, training is terminated and the parameters of the trained neural network module 300 are saved, providing the computation parameters for actual use.
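The stopping rule above amounts to early stopping with a patience of ten groups. A minimal sketch, with `step_fn` standing in (hypothetically) for one group's forward pass, loss computation, and backpropagation:

```python
def train(step_fn, max_steps=100000, patience=10):
    """Training loop with the stopping rule above (hypothetical sketch):
    stop once `patience` consecutive groups of data bring no better loss,
    then return the best loss and the parameters that produced it.

    step_fn() runs one group of data through forward pass, loss, and
    backpropagation, returning (loss_value, parameters_snapshot).
    """
    best_loss, best_params, stale = float("inf"), None, 0
    for _ in range(max_steps):
        loss, params = step_fn()
        if loss < best_loss:
            best_loss, best_params, stale = loss, params, 0
        else:
            stale += 1
            if stale >= patience:   # ten groups with no improvement
                break
    return best_loss, best_params
```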
After the training of the neural network module 300 is complete, the trained module 300 can be used to detect target objects, specifically comprising the following steps.
Target images are captured with the acquisition module 100. The captured images are screened for anomalies: incomplete and invalid images are removed, not detected further, and re-captured by the acquisition module 100, while normal target images are input into the trained neural network module 300. Specifically, incomplete and invalid images include images whose quality is destroyed by signal-transmission or decoding problems, and images containing no objects to detect or duplicating other images; these are not passed to the neural network module 300 for detection. After the trained neural network module 300 processes a normal input target image, the bounding box at the target object's position is obtained and output as the detection result for the user.
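The inference-time flow above can be summarized in a short sketch; `is_valid` and `network` are hypothetical stand-ins for the anomaly screening and the trained module respectively.

```python
def detect(images, is_valid, network):
    """Inference-time flow (sketch): screen out broken or empty frames,
    run the trained network on the rest, and collect the output boxes.

    is_valid(img) stands in for the anomaly screening (truncated frames,
    decode failures, frames without objects of interest); network(img)
    stands in for the trained module and returns a list of boxes.
    """
    results = {}
    for name, img in images:
        if not is_valid(img):        # invalid frames are re-captured, not detected
            continue
        results[name] = network(img)
    return results
```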
Embodiment 2
Referring to Figs. 3 and 4, this embodiment proposes a lightweight small-object detection system comprising an acquisition module 100, an image processing module 200, a neural network module 300 and an error detection module 400. The acquisition module 100 captures image data and can be an image acquisition camera. The image processing module 200 processes the captured images, filters out the images that meet the detection requirements, and annotates images in the training stage to obtain annotated images. The neural network module 300 processes input images and marks the bounding box at the target object's position. The error detection module 400 computes the error of the detection results in the training stage and judges whether the neural network module 300 needs to be retrained.
Specifically, the image processing module 200 includes an annotation tool 201 for annotating the target objects on the images, marking each target's true position and size; the annotation tool 201 can be the open-source annotation software labelme.
The neural network module 300 includes a backbone network 301 and a target detection module 302. The backbone 301 extracts features from images; it borrows from the lightweight network ShuffleNetV2 and is mainly stacked from two kinds of basic units, the split residual convolution layer and the down-sampling convolution layer. The target detection module 302, which comprises several convolution layers, computes the exact center position and size of the target object and marks it on the image.
Because the backbone 301 of the neural network module 300 is built on the lightweight network ShuffleNetV2, its speed is greatly improved compared with traditional neural networks built on darknet53 or MobileNet backbones; it can be deployed on both GPU and CPU and suits a variety of use cases. The detection speeds compare as follows:
Backbone network    GPU detection speed (ms)    CPU detection speed (ms)
darknet53           13                          500+
mobilenet           8                           100+
shufflenetv2        4                           30
As can be seen that the mind constructed based on lightweight network shufflenetv2 as core network 301 in the present embodiment Through network module 300 in terms of detecting speed, there is great promotion compared to traditional neural network, be especially deployed in When on CPU, reducing the dependence for hardware device, detection can also be disposed under deficient environment by losing in computing resource, and And also real-time that can be detected gets a promotion for the promotion of speed.
In addition, the neural network module 300 provided in this embodiment is optimized in both precision and parameter count compared with the yolov3 network conventionally used for target detection, as compared below:
| Network | Detection accuracy (%) | Parameter size (M) |
| --- | --- | --- |
| yolov3 | 85 | 127 |
| Neural network module 300 | 92 | 9.5 |
Compared with the yolov3 network, the neural network module 300 provided in this embodiment prunes away the feature layer that yolov3 uses to detect large-object targets, so that the module focuses on detecting and recognizing small-object targets; parameter weights are set according to object size so that the smallest objects obtain better, more accurate recognition. This not only improves detection speed but also improves detection accuracy. Moreover, the parameter count of the detection model is greatly reduced: the yolov3 network exceeds 120M, while the parameter size in this embodiment does not exceed 10M, a very significant reduction.
It should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the present invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of the invention may be modified or equivalently replaced without departing from its spirit and scope, and all such modifications shall be covered by the scope of the claims of the present invention.

Claims (10)

1. A lightweight small-object detection method, characterized by comprising the following steps:
an acquisition module (100) acquires images and transmits them to an image processing module (200);
the image processing module (200) performs image processing on the acquired images and outputs the processed images as training data;
a neural network module (300) is constructed and fully trained on the training data, and the trained neural network module (300) is saved;
target objects are detected using the trained neural network module (300).
2. The lightweight small-object detection method according to claim 1, characterized in that the acquisition module (100) is an image-acquisition camera that acquires one image every 10 seconds, and the images need to be acquired in different scenes.
3. The lightweight small-object detection method according to claim 1 or 2, characterized in that the image processing further comprises the following steps:
all collected images are gathered and filtered by screening, and repeated and invalid images are deleted;
the images are annotated using the annotation tool (201), and the resulting annotated images are saved.
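The annotation tool labelme saves each image's annotations as a JSON file. A sketch of reading rectangle annotations into (label, center, size) tuples, assuming the common labelme layout with a "shapes" list of "rectangle" entries (the helper name is illustrative, not part of the claim):

```python
import json

def load_labelme_boxes(path):
    """Read rectangle annotations from a labelme JSON file and return
    (label, x_center, y_center, width, height) tuples."""
    with open(path) as f:
        data = json.load(f)
    boxes = []
    for shape in data.get("shapes", []):
        if shape.get("shape_type") != "rectangle":
            continue  # skip polygons, points, etc.
        (x1, y1), (x2, y2) = shape["points"]  # two opposite corners
        boxes.append((shape["label"],
                      (x1 + x2) / 2, (y1 + y2) / 2,
                      abs(x2 - x1), abs(y2 - y1)))
    return boxes
```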
4. The lightweight small-object detection method according to claim 3, characterized in that the neural network module (300) is a convolutional neural network constructed on the basis of shufflenetv2 and comprises a backbone network (301) and a target detection module (302);
the backbone network (301) comprises separable residual convolution layers and downsampling convolution layers and performs feature extraction on the training-data images to obtain deep features and shallow features;
the target detection module (302) processes the extracted deep features and shallow features, computes the exact position and size of the target object center, and outputs the detection result; the calculation formulas are as follows:
(X, Y) = down * ((x, y) + (dx, dy))
(W, H) = (bw, bh) * e^(dw, dh)
where down is the downsampling ratio, taken as 4 or 8; dx, dy are the predicted offsets; x, y is the position on the feature layer; dw, dh are the outputs of the convolution computation layer; and (bw, bh) denotes the width and height of the anchor box, whose selection is determined by the size of the target object.
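The decode formulas above can be sketched in plain Python (an illustrative helper, not part of the claim; argument names follow the symbols in the formulas):

```python
import math

def decode_box(x, y, dx, dy, dw, dh, bw, bh, down=4):
    """Map a prediction on the feature layer back to image coordinates.

    (x, y)   -- cell position on the feature layer
    (dx, dy) -- predicted center offsets within the cell
    (dw, dh) -- predicted log-scale size adjustments
    (bw, bh) -- width/height of the anchor box
    down     -- downsampling ratio of the feature layer (4 or 8)
    """
    X = down * (x + dx)    # (X, Y) = down * ((x, y) + (dx, dy))
    Y = down * (y + dy)
    W = bw * math.exp(dw)  # (W, H) = (bw, bh) * e^(dw, dh)
    H = bh * math.exp(dh)
    return X, Y, W, H
```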
5. The lightweight small-object detection method according to claim 4, characterized by further comprising an error sensing module (400) and the following steps:
the true position offsets (true_dx, true_dy) and the normalized target sizes (true_dw, true_dh) are calculated from the annotated image; the calculation formulas are as follows:
(true_dx, true_dy) = (true_x/down - ox, true_y/down - oy)
(true_dw, true_dh) = (log(true_w/ow), log(true_h/oh))
where true_x, true_y and true_w, true_h are respectively the center position and the size of the target on the original image; down is the downsampling ratio, taken as 4 or 8; ox, oy is the corresponding position on the feature layer; and ow, oh are the width and height of the corresponding anchor box;
the predicted values (dx, dy, dw, dh) are compared with the true values (true_dx, true_dy, true_dw, true_dh) and the error value is calculated; the calculation formulas are as follows:
Loss = loss1 + loss2 + loss3 + loss4
where:
loss1 = -true_dx*log(dx) - (1 - true_dx)*log(1 - dx)
loss2 = -true_dy*log(dy) - (1 - true_dy)*log(1 - dy)
loss3 = w*(true_dw - dw)^2
loss4 = w*(true_dh - dh)^2
w = 2 - true_w*true_h/(image_w*image_h)
where w is the small-object processing coefficient, an optimization strategy that gives small objects a larger weight.
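Claim 5's error computation (encoding the annotated ground truth into offset/size targets, then summing the four loss terms) can be sketched in plain Python. This is a hedged illustration: the encode step is written as the inverse of the decode formulas in claim 4, and the small-object coefficient is assumed to normalize the box area true_w*true_h by the image area.

```python
import math

def encode_target(true_x, true_y, true_w, true_h, ow, oh, down=4):
    """Encode a ground-truth box into the training targets
    (true_dx, true_dy, true_dw, true_dh); assumed inverse of decoding."""
    ox = int(true_x / down)          # cell position on the feature layer
    oy = int(true_y / down)
    true_dx = true_x / down - ox     # fractional offset within the cell
    true_dy = true_y / down - oy
    true_dw = math.log(true_w / ow)  # log-scale size vs. the anchor box
    true_dh = math.log(true_h / oh)
    return true_dx, true_dy, true_dw, true_dh

def detection_loss(dx, dy, dw, dh,
                   true_dx, true_dy, true_dw, true_dh,
                   true_w, true_h, image_w, image_h):
    """Sum of the four loss terms; smaller boxes get a weight closer to 2."""
    w = 2 - (true_w * true_h) / (image_w * image_h)
    # Binary cross-entropy on the center offsets
    loss1 = -true_dx * math.log(dx) - (1 - true_dx) * math.log(1 - dx)
    loss2 = -true_dy * math.log(dy) - (1 - true_dy) * math.log(1 - dy)
    # Weighted squared error on the log-scale sizes
    loss3 = w * (true_dw - dw) ** 2
    loss4 = w * (true_dh - dh) ** 2
    return loss1 + loss2 + loss3 + loss4
```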
6. The lightweight small-object detection method according to claim 5, characterized in that the error sensing module (400) further performs the following steps:
according to the calculated error value Loss, the parameters of the backbone network are updated by backpropagation so that the output detection results come closer to the true values;
when ten consecutive groups of data have been trained and the network no longer outputs better detection results, training is terminated and the parameters of the trained neural network module (300) are saved.
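The early-stopping rule of claim 6 (stop after ten consecutive data groups yield no better result, then save the parameters) can be sketched as follows; train_step, eval_step and save_params are hypothetical stand-ins for the module's real routines:

```python
def train_with_early_stop(train_step, eval_step, save_params, patience=10):
    """Train until `patience` consecutive data groups bring no improvement.

    train_step()  -- runs one group of data through backprop, updating params
    eval_step()   -- returns the current error value (lower is better)
    save_params() -- stores the trained network parameters
    """
    best = float("inf")
    stale = 0  # consecutive groups without improvement
    while stale < patience:
        train_step()
        loss = eval_step()
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
    save_params()
    return best
```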
7. The lightweight small-object detection method according to claim 6, characterized in that detecting the target objects further comprises the following steps:
the acquisition module (100) acquires target images;
abnormality screening is performed on the acquired target images, incomplete and invalid images are removed, and the normal target images are input into the trained neural network module (300);
after the trained neural network module (300) processes the normal target images, the bounding boxes at the corresponding positions of the target objects are obtained.
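The inference steps of claim 7 amount to a screen-then-detect pipeline. A minimal sketch, where is_valid and network are hypothetical callables standing in for the screening rule and the trained neural network module (300):

```python
def detect(images, is_valid, network):
    """Screen out abnormal images, then run the trained network on the
    normal ones, returning one detection result per normal image."""
    normal = [img for img in images if is_valid(img)]
    return [network(img) for img in normal]
```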
8. A lightweight small-object detection system, characterized by comprising:
an acquisition module (100) for acquiring image data;
an image processing module (200) capable of processing the acquired images, screening out the images that meet the requirements, and annotating the images;
a neural network module (300) capable of processing the input images and marking out the bounding box at the corresponding position of each target object;
an error sensing module (400) for calculating the error of the detection results and judging whether the neural network module (300) needs to be trained again.
9. The lightweight small-object detection system according to claim 8, characterized in that the image processing module (200) includes an annotation tool (201) for annotating the target objects on the images.
10. The lightweight small-object detection system according to claim 8 or 9, characterized in that the neural network module (300) comprises:
a backbone network (301) for performing feature extraction on the images;
a target detection module (302) capable of computing the exact position and size of the target object center and marking them on the image.
CN201910815228.9A 2019-08-30 2019-08-30 Lightweight small object target detection method and system Active CN110533103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910815228.9A CN110533103B (en) 2019-08-30 2019-08-30 Lightweight small object target detection method and system


Publications (2)

Publication Number Publication Date
CN110533103A true CN110533103A (en) 2019-12-03
CN110533103B CN110533103B (en) 2023-08-01

Family

ID=68665564


Country Status (1)

Country Link
CN (1) CN110533103B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144417A (en) * 2019-12-27 2020-05-12 创新奇智(重庆)科技有限公司 Intelligent container small target detection method and detection system based on teacher student network
CN111160434A (en) * 2019-12-19 2020-05-15 中国平安人寿保险股份有限公司 Training method and device of target detection model and computer readable storage medium
CN111435437A (en) * 2019-12-26 2020-07-21 珠海大横琴科技发展有限公司 PCB pedestrian re-recognition model training method and PCB pedestrian re-recognition method
CN111652102A (en) * 2020-05-27 2020-09-11 国网山东省电力公司东营供电公司 Power transmission channel target object identification method and system
CN111798435A (en) * 2020-07-08 2020-10-20 国网山东省电力公司东营供电公司 Image processing method, and method and system for monitoring invasion of engineering vehicle into power transmission line
CN111898497A (en) * 2020-07-16 2020-11-06 济南博观智能科技有限公司 License plate detection method, system, equipment and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647742A (en) * 2018-05-19 2018-10-12 南京理工大学 Fast target detection method based on lightweight neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LAYGIN: "Real-time face detection based on YOLOv3 and shufflenet", Modb (《墨天轮》) *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant