CN109299722A - Feature map processing method, apparatus and system, and storage medium for a neural network - Google Patents


Info

Publication number
CN109299722A
CN109299722A (application CN201810932282.7A)
Authority
CN
China
Prior art keywords
feature map
channel
network
network block
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810932282.7A
Other languages
Chinese (zh)
Inventor
Zhang Xiangyu (张祥雨)
Ma Ningning (马宁宁)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd
Priority to CN201810932282.7A
Publication of CN109299722A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

Embodiments of the present invention provide a feature map processing method, apparatus, system and storage medium for a neural network. The method includes: for each network block of at least one network block, receiving the input feature map of the network block via a channel split layer and splitting the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps; feeding the first group of feature maps into a first path and the second group of feature maps into a second path; concatenating the output feature map of the first path with the output feature map of the second path along the channel dimension via a channel concatenation layer, to obtain a concatenated feature map; and performing channel shuffle on the concatenated feature map via a channel shuffle layer, to obtain the output feature map of the network block. This way of working reduces the computational cost of the neural network and improves its speed and accuracy.

Description

Feature map processing method, apparatus and system, and storage medium for a neural network
Technical field
The present invention relates to the field of machine learning, and more particularly to a feature map processing method, apparatus, system and storage medium for a neural network.
Background art
In the field of image recognition, a deep neural network may have hundreds of network layers, and its channel count may reach the thousands. The recognition accuracy of a deep neural network grows with the depth of the network and the number of channels, but so do its computational cost and parameter count. In general, the computational cost of a deep neural network reaches hundreds of millions of FLOPs (floating-point operations). However, mobile devices such as phones and robots have a computation budget of only tens or hundreds of MFLOPs (millions of floating-point operations), which makes it difficult to run deep neural networks on such devices.
Summary of the invention
The present invention is proposed in view of the above problem. The present invention provides a feature map processing method, apparatus, system and storage medium for a neural network.
According to one aspect of the present invention, a feature map processing method for a neural network is provided. The neural network includes at least one network block, and each network block of the at least one network block includes a channel split layer, a first path, a second path, a channel concatenation layer and a channel shuffle layer, where the second path includes at least one convolutional layer. For each network block of the at least one network block, the method includes: receiving the input feature map of the network block via the channel split layer, and splitting the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps; feeding the first group of feature maps into the first path, and feeding the second group of feature maps into the second path; concatenating the output feature map of the first path with the output feature map of the second path along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map; and performing channel shuffle on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block.
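For concreteness, the block just described can be sketched as follows. This is a minimal sketch assuming PyTorch (the patent names no framework); the class name, the equal split, and the choice of a single 1x1 convolution as the second path are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SplitConcatShuffleBlock(nn.Module):
    """One network block: channel split -> two paths -> channel concat -> channel shuffle.

    Minimal sketch under PyTorch assumptions; the second path here is a single
    1x1 convolution standing in for "at least one convolutional layer".
    """
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.second_path = nn.Conv2d(half, half, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        first, second = x.chunk(2, dim=1)                           # channel split layer (equal split)
        out = torch.cat([first, self.second_path(second)], dim=1)   # channel concatenation layer
        n, c, h, w = out.shape                                      # channel shuffle layer: a pure transpose
        return out.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

x = torch.randn(1, 16, 32, 32)
assert SplitConcatShuffleBlock(16)(x).shape == x.shape
```

Note that only the second group of channels passes through the convolution, which is the source of the computational saving described below.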
Illustratively, the at least one network block includes at least one first network block, and the second path of each first network block includes a first convolutional layer, a second convolutional layer and a third convolutional layer. The method further includes, for each first network block: applying a dimension-reducing convolution to the second group of feature maps via the first convolutional layer; and applying a depthwise separable convolution to the output feature map of the first convolutional layer via the second convolutional layer and the third convolutional layer, to obtain the output feature map of the second path.
Illustratively, the second convolutional layer applies a convolution with stride 1 to the output feature map of the first convolutional layer, and the first path is a skip connection between the channel split layer and the channel concatenation layer.
Illustratively, the second convolutional layer applies a convolution with stride 2 to the output feature map of the first convolutional layer, and the first path includes a pooling layer. The method further includes applying a pooling operation with stride 2 to the first group of feature maps via the pooling layer, to obtain the output feature map of the first path.
Illustratively, the at least one network block includes at least one second network block, and the second path of each second network block includes a residual structure consisting of a residual branch and a skip-connection branch, the residual branch including the at least one convolutional layer. The method further includes, for each second network block: convolving the second group of feature maps via the residual branch; and element-wise adding the output feature map of the residual branch to the second group of feature maps output by the skip-connection branch, to obtain the output feature map of the second path.
Illustratively, the residual branch includes a fourth convolutional layer, a fifth convolutional layer and a sixth convolutional layer. The method further includes, for each second network block: applying a dimension-reducing convolution to the second group of feature maps via the fourth convolutional layer; and applying a depthwise separable convolution to the output feature map of the fourth convolutional layer via the fifth convolutional layer and the sixth convolutional layer, to obtain the output feature map of the residual branch, whose channel count equals that of the second group of feature maps.
Illustratively, the fifth convolutional layer applies a convolution with stride 1 to the output feature map of the fourth convolutional layer, and the first path is a skip connection from the channel split layer to the channel concatenation layer.
Illustratively, the neural network includes, connected in sequence, an initial convolutional layer, a max pooling layer, a first network segment, a second network segment, a third network segment, a global pooling layer and a fully connected layer, where the first network segment includes four of the network blocks, the second network segment includes eight of the network blocks, and the third network segment includes four of the network blocks.
Illustratively, the channel count of the first group of feature maps equals the channel count of the second group of feature maps.
According to another aspect of the present invention, a feature map processing apparatus for a neural network is provided. The neural network includes at least one network block, each including a channel split layer, a first path, a second path, a channel concatenation layer and a channel shuffle layer, the second path including at least one convolutional layer. The apparatus includes: a channel split module, configured to receive, for each network block of the at least one network block, the input feature map of the network block via the channel split layer and split the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps; an input module, configured to feed, for each network block, the first group of feature maps into the first path and the second group of feature maps into the second path; a channel concatenation module, configured to concatenate, for each network block, the output feature map of the first path with the output feature map of the second path along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map; and a channel shuffle module, configured to perform, for each network block, channel shuffle on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block.
According to another aspect of the present invention, a feature map processing system for a neural network is provided, including a processor and a memory, where the memory stores computer program instructions which, when run by the processor, perform the above feature map processing method for a neural network.
According to another aspect of the present invention, a storage medium is provided, on which program instructions are stored; the program instructions, when run, perform the above feature map processing method for a neural network.
The feature map processing method, apparatus, system and storage medium for a neural network according to embodiments of the present invention split the channels of the input feature map into two parts via a channel split, so that only part of the input feature map undergoes convolution rather than all of it, and then join and reorder the two parts via channel concatenation and channel shuffle. This way of working reduces the computational cost of the neural network and improves its speed and accuracy.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention in conjunction with the accompanying drawings. The drawings provide a further understanding of the embodiments of the present invention, form a part of the specification, serve together with the embodiments to explain the present invention, and do not limit the present invention. In the drawings, identical reference labels generally denote identical parts or steps.
Fig. 1 shows a schematic block diagram of an example electronic device for implementing the feature map processing method and apparatus for a neural network according to embodiments of the present invention;
Fig. 2 shows a schematic flowchart of a feature map processing method for a neural network according to an embodiment of the present invention;
Fig. 3a shows a structural schematic diagram of a network block of a neural network according to an embodiment of the present invention;
Fig. 3b shows a structural schematic diagram of a network block of a neural network according to another embodiment of the present invention;
Fig. 4 shows a structural schematic diagram of a network block of a neural network according to yet another embodiment of the present invention;
Fig. 5 shows a schematic block diagram of a feature map processing apparatus for a neural network according to an embodiment of the present invention; and
Fig. 6 shows a schematic block diagram of a feature map processing system for a neural network according to an embodiment of the present invention.
Detailed description of embodiments
To make the objects, technical solutions and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described here.
To solve the above problem, embodiments of the present invention provide a feature map processing method and apparatus for a neural network, which split the channels of the input feature map into two parts via a channel split, so that only part of the input feature map undergoes convolution rather than all of it, and then join and reorder the two parts via channel concatenation and channel shuffle. This way of working reduces the computational cost of the neural network and improves its speed and accuracy. The feature map processing method and apparatus for a neural network according to embodiments of the present invention can be applied to various deep learning fields, such as image recognition.
First, an example electronic device 100 for implementing the feature map processing method and apparatus for a neural network according to embodiments of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102 and one or more storage devices 104. Optionally, the electronic device 100 may further include an input device 106, an output device 108 and an image acquisition device 110, interconnected by a bus system 112 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive; the electronic device may have other components and structures as needed.
The processor 102 may be implemented in at least one hardware form among a microprocessor, a digital signal processor (DSP), a field-programmable gate array (FPGA) and a programmable logic array (PLA). The processor 102 may be a combination of one or more of a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC) or another form of processing unit with data processing and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to realize the client functions (implemented by the processor) in the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as the various data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen and the like.
The output device 108 may output various information (such as images and/or sounds) to the outside (such as to the user), and may include one or more of a display, a speaker and the like. Optionally, the input device 106 and the output device 108 may be integrated together and realized by the same interactive device (such as a touch screen).
The image acquisition device 110 may acquire images (including still images and video frames) and store the acquired images in the storage device 104 for use by other components. The image acquisition device 110 may be a standalone camera, the camera of a mobile terminal or the image sensor of a capture device. It should be understood that the image acquisition device 110 is only an example, and the electronic device 100 may not include one; in that case, images may be acquired by another device with image acquisition capability and sent to the electronic device 100.
Illustratively, the example electronic device for implementing the feature map processing method and apparatus for a neural network according to embodiments of the present invention may be realized on a device such as a personal computer or a remote server.
Next, a feature map processing method for a neural network according to an embodiment of the present invention is described with reference to Fig. 2. The neural network includes at least one network block, and each network block of the at least one network block includes a channel split layer, a first path, a second path, a channel concatenation layer and a channel shuffle layer, where the second path includes at least one convolutional layer. Fig. 2 shows a schematic flowchart of a feature map processing method 200 for a neural network according to an embodiment of the present invention. As shown in Fig. 2, the feature map processing method 200 for a neural network includes steps S210, S220, S230 and S240.
In step S210, for each network block of the at least one network block, the input feature map of the network block is received via the channel split layer, and the channels of the input feature map are split into two parts, to obtain a first group of feature maps and a second group of feature maps.
Fig. 3 a shows the structural schematic diagram of the network block of neural network according to an embodiment of the invention.Such as Fig. 3 a institute Show, block structure includes that channel cuts layer, channel articulamentum and channel rearrangement layer, cuts between layer and channel articulamentum and has in channel There are two accesses, i.e. the first access, alternate path.
Through the channel split, the channels of the input feature map of the network block can be cut into two parts. Suppose the input feature map has c_a channels, the first group of feature maps obtained by the split has c_1 channels, and the second group of feature maps obtained by the split has c_2 channels; then c_a = c_1 + c_2. That is, the channel counts of the first group and the second group of feature maps sum to the channel count of the input feature map. The channel counts of the two groups can be set as needed, and may be equal or unequal. Equal channel counts are preferable: compared with the unequal case, an equal split improves the speed of the neural network.
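As a small illustration of this split arithmetic (assuming PyTorch; the channel counts are invented), both the unequal and the equal case are a single call:

```python
import torch

x = torch.randn(1, 24, 56, 56)                  # input feature map, c_a = 24
first, second = torch.split(x, [8, 16], dim=1)  # unequal split: c_1 = 8, c_2 = 16, c_1 + c_2 = c_a
first_eq, second_eq = x.chunk(2, dim=1)         # equal split: c_1 = c_2 = 12
```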
In step S220, for each network block of the at least one network block, the first group of feature maps is fed into the first path, and the second group of feature maps is fed into the second path.
The two groups of feature maps obtained by the channel split are fed into the first path and the second path respectively. For either of the two paths: when the path is a skip connection (shortcut), the feature maps fed into it undergo no particular processing and are output directly; when the path contains network layers (such as convolutional layers), the feature maps fed into it are processed accordingly and the processed feature maps are output.
In step S230, for each network block of the at least one network block, the output feature map of the first path and the output feature map of the second path are concatenated along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map.
Through the channel concatenation (channel concatenate), the output feature map of the first path and the output feature map of the second path are joined together to obtain the concatenated feature map. Suppose the output feature map of the first path has c_3 channels and the output feature map of the second path has c_4 channels; then the concatenated feature map has c_b = c_3 + c_4 channels.
In step S240, for each network block of the at least one network block, channel shuffle is performed on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block.
Through the channel shuffle (channel shuffle), the feature maps of different channels can be shuffled and reordered to obtain a new feature map. The reordering scheme of the channel shuffle can be set as needed. In one example, the output feature map of the first path is divided into m groups, and the output feature map of the second path is also divided into m groups; in the channel shuffle layer, the i-th group of the second path's output feature map is placed after the i-th group of the first path's output feature map and before its (i+1)-th group. For example, suppose the output feature map of the first path comprises four groups of feature maps numbered 1, 2, 3, 4 and the output feature map of the second path comprises four groups numbered 5, 6, 7, 8; after the shuffle, the order of the feature maps is 1, 5, 2, 6, 3, 7, 4, 8.
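This interleaving is exactly a grouped transpose. A sketch under PyTorch assumptions (the helper name is hypothetical); the final line reproduces the 1, 5, 2, 6, 3, 7, 4, 8 example above:

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    # A pure dimension transpose: no arithmetic, hence no added FLOPs.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)   # (N, g, C/g, H, W)
    x = x.transpose(1, 2).contiguous()         # swap the group and per-group channel dimensions
    return x.view(n, c, h, w)

# Channels numbered 1..8 (first path: 1-4, second path: 5-8) come out interleaved:
x = torch.arange(1, 9, dtype=torch.float32).view(1, 8, 1, 1)
print(channel_shuffle(x).flatten().tolist())   # [1.0, 5.0, 2.0, 6.0, 3.0, 7.0, 4.0, 8.0]
```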
The channel shuffle is what makes the channel split and channel concatenation workable: without the channel shuffle, the channel split would divide the network into two parts, one undergoing all of the convolutions and the other none, which would prevent the neural network from training well. The channel shuffle lets the two parts of the split feature map pass information to each other, avoiding the situation in which some channels always pass through convolutions while others never do, so that the neural network can train normally. Moreover, the channel shuffle needs only a transpose operation (a dimension shuffle) and no arithmetic, so it adds no computation to the neural network.
The feature map processing method for a neural network according to embodiments of the present invention has the following features:
1) The channel split yields a subset of channels on which the convolutions are computed, which reduces the channel count of the convolutions; compared with neural networks that use grouped convolution, this improves the processing speed of the neural network, since grouped convolution is slow;
2) Splitting the channels of the input feature map allows the output channel counts of the first path and the second path to be controlled flexibly, and one of the two paths can be used to carry out the convolutions, so that the convolutional layers no longer need to keep a channel count consistent with the input feature map of the whole block structure, making the design of the convolutional layers more flexible; for the same reason, the channel count of the input feature map can be increased substantially at constant computational cost, which effectively improves the representational capacity of the neural network;
3) After the two channel parts are concatenated, the channel shuffle prevents some channels from always passing through convolutions or never doing so, while itself requiring only a transpose operation and adding no computation.
Owing to the above characteristics, at the same reduction factor the neural network provided by the embodiments of the present invention has clear accuracy and speed advantages over known models. Compared with other lightweight models, it allows more channels, i.e. stronger encoding capacity, and performs particularly well as a very small network. The inventors experimented on the ImageNet classification task; the experiments show that the neural network provided by the embodiments of the present invention is superior to other models in both speed and accuracy. For example, at the 40 MFLOPs level its accuracy is 8.7% higher than MobileNet and 4.4% higher than ShuffleNet (g=1). In addition, the inventors tested hardware acceleration on an actual graphics processing unit (GPU); the results show that the neural network provided by the embodiments of the present invention outperforms ShuffleNet on a GTX 1080 Ti.
Illustratively, the feature map processing method for a neural network according to embodiments of the present invention may be implemented in a unit or system having a memory and a processor.
The feature map processing method for a neural network according to embodiments of the present invention may be deployed on a personal terminal, such as a smartphone, a tablet computer or a personal computer.
Alternatively, the feature map processing method for a neural network according to embodiments of the present invention may be deployed in a distributed manner across a server end and a client. For example, the client may obtain a video stream (such as a face image of a user acquired at an image acquisition end) and send the obtained images to the server end (or cloud), which performs the feature map processing.
According to embodiments of the present invention, the at least one network block may include at least one first network block, in which the second path includes a first convolutional layer, a second convolutional layer and a third convolutional layer. The method 200 may further include, for each first network block: applying a dimension-reducing convolution to the second group of feature maps via the first convolutional layer; and applying a depthwise separable convolution to the output feature map of the first convolutional layer via the second convolutional layer and the third convolutional layer, to obtain the output feature map of the second path.
Continuing with Fig. 3a, there are two paths between the channel split layer and the channel concatenation layer: the first path on the left and the second path on the right. The second path includes three convolutional layers, in order the first convolutional layer, the second convolutional layer and the third convolutional layer. Illustratively, the second and third convolutional layers may be used to realize a depthwise separable convolution, which splits a 3x3 convolution into two parts, a 3x3 per-channel convolution (the second convolutional layer) and a 1x1 convolution (the third convolutional layer), in order to reduce computation. In addition, to reduce the channel count of the 3x3 per-channel convolution, a 1x1 convolution (the first convolutional layer) may be used before the depthwise separable convolution to reduce the channel dimension.
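A sketch of this second path under the stated assumptions (PyTorch; batch norm and activations, which the patent does not detail, are omitted; channel counts are illustrative):

```python
import torch
import torch.nn as nn

def second_path_stride1(in_ch: int, mid_ch: int, out_ch: int) -> nn.Sequential:
    """Second path of a first network block (stride 1): a minimal sketch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=1),           # first conv: 1x1, reduces the channel dimension
        nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=1,
                  padding=1, groups=mid_ch),               # second conv: 3x3 per-channel (depthwise), stride 1
        nn.Conv2d(mid_ch, out_ch, kernel_size=1),          # third conv: 1x1 pointwise
    )

path = second_path_stride1(12, 6, 12)          # e.g. a 12-channel second group through a 6-channel bottleneck
y = path(torch.randn(1, 12, 28, 28))
assert y.shape == (1, 12, 28, 28)              # spatial size unchanged at stride 1
```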
Compared with the channel count of the feature maps fed into the second path (the second group of feature maps), the channel count of the second path's output feature map may stay the same, increase or decrease; this is determined by the convolutional layers the second path contains, which can be set as needed.
According to embodiments of the present invention, the second convolutional layer applies a convolution with stride 1 to the output feature map of the first convolutional layer, and the first path is a skip connection between the channel split layer and the channel concatenation layer.
The network block structure shown in Fig. 3a is the stride-1 structure. As shown in Fig. 3a, the first path on the left is a skip connection: the channel split layer is connected directly to the channel concatenation layer, so that the channel split layer feeds the first group of feature maps into the channel concatenation layer via the first path, with no extra processing performed in the first path. Those skilled in the art will appreciate that when the convolutional layers in the second path apply convolutions with stride 1, the size (i.e. width and height) of the feature map is unchanged before and after the convolutions.
According to embodiments of the present invention, the second convolutional layer applies a convolution with stride 2 to the output feature map of the first convolutional layer, and the first path includes a pooling layer. The method 200 may further include applying a pooling operation with stride 2 to the first group of feature maps via the pooling layer, to obtain the output feature map of the first path.
Fig. 3 b shows the structural schematic diagram of the network block of neural network in accordance with another embodiment of the present invention.With Fig. 3 a class As, in fig 3b, cutting between layer and channel articulamentum in channel includes two paths, and left side is the first access, and right side is Alternate path.The alternate path of Fig. 3 b and the alternate path of Fig. 3 a are substantially similar, and what difference was the execution of the second convolutional layer is step A length of 2 convolution.In this case, the first access can no longer be jump connecting path, and may include some network layers. As shown in Figure 3b, the first access includes a pond layer, and the convolution kernel size of the pond layer is 3 × 3, step-length 2.This field skill In the case that art personnel are appreciated that the convolutional layer in alternate path executes the convolution that step-length is 2, characteristic pattern after convolution Size (i.e. width and height) can reduce half.First access is operated by pondization also reduces one for the size of first group of characteristic pattern Half, the size of the output characteristic pattern of the output characteristic pattern and alternate path of the first access subsequent in this way still maintains consistent, convenient Carry out channel connection.
According to embodiments of the present invention, the at least one network block includes at least one second network block, in which the second path includes a residual structure consisting of a residual branch and a skip-connection branch, the residual branch including at least one convolutional layer. The method 200 may further include, for each second network block: convolving the second group of feature maps via the residual branch; and element-wise adding the output feature map of the residual branch to the second group of feature maps output by the skip-connection branch, to obtain the output feature map of the second path.
Optionally, the block structure may or may not include a residual structure. To distinguish the two cases, a network block without a residual structure is called a first network block (as shown in Figs. 3a and 3b) and a network block with a residual structure is called a second network block. Here the terms first, second and the like serve only to distinguish and do not denote order or any other special meaning.
Fig. 4 shows a structural schematic diagram of a network block of a neural network according to yet another embodiment of the present invention. In Fig. 4, the first path is a skip connection, and the second path divides into two branches, a residual branch and a skip-connection branch, which together form a residual structure.
The network layers included in the residual branch, and the operations each layer carries out, may be similar to those of the second path in Fig. 3a and are not repeated here. The skip-connection branch is a direct connection that carries the second group of feature maps to the output of the residual branch, so that the output feature map of the residual branch can be element-wise added to the second group of feature maps. The channel count of the residual branch's output feature map is consistent with that of the second group of feature maps, and the size of the feature map output by the residual branch matches the size of the second group of feature maps, so the two can be added element-wise. Adding a residual structure to the second path enables the neural network to converge normally during training and avoids the vanishing-gradient problem that might otherwise occur.
According to embodiments of the present invention, the residual branch may include a fourth convolutional layer, a fifth convolutional layer and a sixth convolutional layer. The method 200 may further include, for each second network block: applying a dimension-reducing convolution to the second group of feature maps via the fourth convolutional layer; and applying a depthwise separable convolution to the output feature map of the fourth convolutional layer via the fifth convolutional layer and the sixth convolutional layer, to obtain the output feature map of the residual branch, whose channel count equals that of the second group of feature maps.
As shown in Fig. 4, the residual branch includes three convolutional layers, in order the fourth convolutional layer, the fifth convolutional layer and the sixth convolutional layer. Illustratively, the fifth and sixth convolutional layers may be used to realize a depthwise separable convolution, comprising a 3x3 per-channel convolution (the fifth convolutional layer) and a 1x1 convolution (the sixth convolutional layer). As described above, depthwise separable convolution reduces computation. In addition, to reduce the channel count of the 3x3 per-channel convolution, a 1x1 convolution (the fourth convolutional layer) may be used before the depthwise separable convolution to reduce the channel dimension.
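A sketch of this residual second path under the same PyTorch assumptions (the class name and bottleneck width are hypothetical; batch norm and activations are omitted):

```python
import torch
import torch.nn as nn

class ResidualSecondPath(nn.Module):
    """Residual branch plus skip-connection branch; output channels equal input channels."""
    def __init__(self, channels: int, mid_channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, mid_channels, 1),                      # fourth conv: 1x1 reduction
            nn.Conv2d(mid_channels, mid_channels, 3, stride=1,
                      padding=1, groups=mid_channels),                 # fifth conv: 3x3 depthwise, stride 1
            nn.Conv2d(mid_channels, channels, 1),                      # sixth conv: 1x1, restores the channel count
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.residual(x) + x    # element-wise add with the skip-connection branch

x = torch.randn(1, 8, 28, 28)
assert ResidualSecondPath(8, 4)(x).shape == x.shape
```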
According to embodiments of the present invention, the fifth convolutional layer applies a convolution with stride 1 to the output feature map of the fourth convolutional layer, and the first path is a skip connection from the channel split layer to the channel concatenation layer.
The network block structure shown in Fig. 4 is the stride-1 structure. As shown in Fig. 4, the first path on the left is a skip connection: the channel split layer is connected directly to the channel concatenation layer, so that the channel split layer feeds the first group of feature maps into the channel concatenation layer via the first path, with no extra processing performed in the first path. Those skilled in the art will appreciate that when the convolutional layers in the second path apply convolutions with stride 1, the size (i.e. width and height) of the feature map is unchanged before and after the convolutions.
According to embodiments of the present invention, the neural network includes, connected in sequence, an initial convolutional layer, a max pooling layer, a first network segment, a second network segment, a third network segment, a global pooling layer and a fully connected layer, where the first network segment includes four of the network blocks, the second network segment includes eight of the network blocks, and the third network segment includes four of the network blocks.
Illustratively, the overall structure of the neural network may use three stages (network segments), which repeat the above network blocks 4, 8 and 4 times respectively; one schematic structure of the neural network is shown in Table 1 below. In the structure shown in Table 1, each network segment first performs one stride-2 downsampling with the block structure shown in Fig. 3b, and then performs the subsequent stride-1 convolutions with the block structure shown in Fig. 3a. For example, the first network segment (network segment 2) first performs one stride-2 downsampling with the block structure shown in Fig. 3b and then performs the stride-1 convolutions with the block structure shown in Fig. 3a.
Table 1. Schematic structure of the neural network
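The stage layout just described can be sketched as follows (assuming PyTorch and reusing the block sketches above; the channel widths are invented for illustration, and a practical design would widen the channels from segment to segment):

```python
import torch.nn as nn

def build_network(num_classes: int = 1000) -> nn.Sequential:
    def segment(channels: int, num_blocks: int) -> nn.Sequential:
        blocks = [Stride2Block(channels)]                   # one stride-2 downsampling block first
        blocks += [SplitConcatShuffleBlock(channels)
                   for _ in range(num_blocks - 1)]          # then stride-1 blocks
        return nn.Sequential(*blocks)

    return nn.Sequential(
        nn.Conv2d(3, 24, 3, stride=2, padding=1),  # initial convolutional layer
        nn.MaxPool2d(3, stride=2, padding=1),      # max pooling layer
        segment(24, 4),                            # first network segment: 4 blocks
        segment(24, 8),                            # second network segment: 8 blocks
        segment(24, 4),                            # third network segment: 4 blocks
        nn.AdaptiveAvgPool2d(1),                   # global pooling layer
        nn.Flatten(),
        nn.Linear(24, num_classes),                # fully connected layer
    )
```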
In lightweight model design, the performance of the neural network provided by the embodiments of the present invention is clearly improved over existing neural network models. As described above, the inventors ran comparative experiments on the ImageNet dataset. The results show that the neural network provided by the embodiments of the present invention improves accuracy by 8.7% over MobileNet and, at the same speed, by 4.4% over ShuffleNet. Table 2 shows the performance of existing neural networks and of the neural network provided by the embodiments of the present invention.
The above experimental results show that the neural network provided by the embodiments of the present invention can save computation significantly while effectively improving accuracy, reaching the current state of the art in lightweight model design.
Table 2. Performance of existing neural networks and of the neural network provided by the embodiments of the present invention
According to another aspect of the present invention, a feature map processing apparatus for a neural network is provided. The neural network includes at least one network block, and each network block of the at least one network block includes a channel split layer, a first path, a second path, a channel concatenation layer and a channel shuffle layer, where the second path includes at least one convolutional layer. Fig. 5 shows a schematic block diagram of a feature map processing apparatus 500 for a neural network according to an embodiment of the present invention.
As shown in Fig. 5, the feature map processing apparatus 500 for a neural network according to an embodiment of the present invention includes a channel split module 510, an input module 520, a channel concatenation module 530 and a channel shuffle module 540. The modules may respectively perform the steps/functions of the feature map processing method for a neural network described above in conjunction with Figs. 2-4. Only the main functions of the components of the feature map processing apparatus 500 are described below; details already described above are omitted.
The channel split module 510 is configured to receive, for each network block of the at least one network block, the input feature map of the network block via the channel split layer and split the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps. The channel split module 510 may be realized by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The input module 520 is configured to feed, for each network block of the at least one network block, the first group of feature maps into the first path and the second group of feature maps into the second path. The input module 520 may be realized by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The channel concatenation module 530 is configured to concatenate, for each network block of the at least one network block, the output feature map of the first path with the output feature map of the second path along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map. The channel concatenation module 530 may be realized by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
The channel shuffle module 540 is configured to perform, for each network block of the at least one network block, channel shuffle on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block. The channel shuffle module 540 may be realized by the processor 102 in the electronic device shown in Fig. 1 running the program instructions stored in the storage device 104.
Fig. 6 shows a schematic block diagram of a feature map processing system 600 for a neural network according to an embodiment of the present invention. The neural network includes at least one network block, and each network block of the at least one network block includes a channel split layer, a first path, a second path, a channel concatenation layer and a channel shuffle layer, where the second path includes at least one convolutional layer. The feature map processing system 600 for a neural network includes a storage device (i.e. a memory) 610 and a processor 620.
The storage device 610 stores the computer program instructions for realizing the corresponding steps of the feature map processing method for a neural network according to embodiments of the present invention.
The processor 620 is configured to run the computer program instructions stored in the storage device 610, to perform the corresponding steps of the feature map processing method for a neural network according to embodiments of the present invention.
In one embodiment, the computer program instructions, when run by the processor 620, perform the following steps: for each network block of the at least one network block, receiving the input feature map of the network block via the channel split layer, and splitting the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps; feeding the first group of feature maps into the first path, and feeding the second group of feature maps into the second path; concatenating the output feature map of the first path with the output feature map of the second path along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map; and performing channel shuffle on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block.
In addition, according to embodiments of the present invention, a storage medium is also provided, on which program instructions are stored; the program instructions, when run by a computer or processor, perform the corresponding steps of the feature map processing method for a neural network of the embodiments of the present invention and realize the corresponding modules in the feature map processing apparatus for a neural network according to embodiments of the present invention. The storage medium may include, for example, the memory card of a smartphone, the storage component of a tablet computer, the hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB storage, or any combination of the above storage media.
In one embodiment, the program instructions, when run by a computer or processor, may cause the computer or processor to realize the functional modules of the feature map processing apparatus for a neural network according to embodiments of the present invention, and/or may perform the feature map processing method for a neural network according to embodiments of the present invention.
In one embodiment, the program instructions, when run, perform the following steps: for each network block of the at least one network block, receiving the input feature map of the network block via the channel split layer, and splitting the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps; feeding the first group of feature maps into the first path, and feeding the second group of feature maps into the second path; concatenating the output feature map of the first path with the output feature map of the second path along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map; and performing channel shuffle on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block.
Each module in the feature map processing system for a neural network according to embodiments of the present invention may be realized by the processor of an electronic device implementing feature map processing for a neural network according to embodiments of the present invention running computer program instructions stored in a memory, or may be realized when the computer instructions stored in the computer-readable storage medium of a computer program product according to embodiments of the present invention are run by a computer.
Although example embodiments have been described here with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention to them. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to fall within the scope of the present invention claimed in the appended claims.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed here can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are carried out in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled professional may use different methods for each specific application to realize the described functions, but such realization should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be realized in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other ways of dividing in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not carried out.
Numerous specific details are set forth in the specification provided here. However, it is understood that embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, to streamline the present invention and to aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the present invention the features of the present invention are sometimes grouped together into a single embodiment, figure or description thereof. However, the method of the present invention should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that a corresponding technical problem can be solved with fewer than all the features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described here include certain features included in other embodiments rather than other features, combinations of the features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
Various component embodiments of the invention can be implemented in hardware, or to run on one or more processors Software module realize, or be implemented in a combination thereof.It will be understood by those of skill in the art that can be used in practice Microprocessor or digital signal processor (DSP) are realized at the characteristic pattern according to an embodiment of the present invention for neural network Manage some or all functions of some modules in device.The present invention is also implemented as executing side as described herein Some or all program of device (for example, computer program and computer program product) of method.Such this hair of realization Bright program can store on a computer-readable medium, or may be in the form of one or more signals.It is such Signal can be downloaded from an internet website to obtain, and is perhaps provided on the carrier signal or is provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and ability Field technique personnel can be designed alternative embodiment without departing from the scope of the appended claims.In the claims, Any reference symbol between parentheses should not be configured to limitations on claims.Word "comprising" does not exclude the presence of not Element or step listed in the claims.Word "a" or "an" located in front of the element does not exclude the presence of multiple such Element.The present invention can be by means of including the hardware of several different elements and being come by means of properly programmed computer real It is existing.In the unit claims listing several devices, several in these devices can be through the same hardware branch To embody.The use of word first, second, and third does not indicate any sequence.These words can be explained and be run after fame Claim.
The above description is merely a specific embodiment or to the explanation of specific embodiment, protection of the invention Range is not limited thereto, and anyone skilled in the art in the technical scope disclosed by the present invention, can be easily Expect change or replacement, should be covered by the protection scope of the present invention.Protection scope of the present invention should be with claim Subject to protection scope.

Claims (12)

1. A feature map processing method for a neural network, the neural network comprising at least one network block, each network block in the at least one network block comprising a channel split layer, a first path, a second path, a channel concatenation layer, and a channel shuffle layer, the second path comprising at least one convolutional layer, the method comprising:
For each network block in the at least one network block,
receiving an input feature map of the network block via the channel split layer, and splitting the channels of the input feature map into two parts to obtain a first group of feature maps and a second group of feature maps;
inputting the first group of feature maps into the first path, and inputting the second group of feature maps into the second path;
concatenating, via the channel concatenation layer, the output feature map of the first path and the output feature map of the second path along the channel dimension, to obtain a concatenated feature map;
performing, via the channel shuffle layer, a channel shuffle on the concatenated feature map, to obtain the output feature map of the network block.
2. The method of claim 1, wherein the at least one network block comprises at least one first network block, and the second path of each first network block in the at least one first network block comprises a first convolutional layer, a second convolutional layer, and a third convolutional layer, the method further comprising:
For each first network block in the at least one first network block,
performing, via the first convolutional layer, a dimensionality-reduction convolution on the second group of feature maps;
performing, via the second convolutional layer and the third convolutional layer, a depthwise separable convolution on the output feature map of the first convolutional layer, to obtain the output feature map of the second path.
3. The method of claim 2, wherein the second convolutional layer is configured to perform a convolution with a stride of 1 on the output feature map of the first convolutional layer, and the first path is a skip connection between the channel split layer and the channel concatenation layer.
4. The method of claim 2, wherein the second convolutional layer is configured to perform a convolution with a stride of 2 on the output feature map of the first convolutional layer, and the first path comprises a pooling layer, the method further comprising:
performing, via the pooling layer, a pooling with a stride of 2 on the first group of feature maps, to obtain the output feature map of the first path.
5. The method of any one of claims 1 to 4, wherein the at least one network block comprises at least one second network block, and the second path of each second network block in the at least one second network block comprises a residual structure, the residual structure comprising a residual branch and a skip connection branch, the residual branch comprising the at least one convolutional layer, the method further comprising:
For each second network block in the at least one second network block,
performing a convolution on the second group of feature maps via the residual branch;
performing an element-wise addition of the output feature map of the residual branch and the second group of feature maps output by the skip connection branch, to obtain the output feature map of the second path.
6. The method of claim 5, wherein the residual branch comprises a fourth convolutional layer, a fifth convolutional layer, and a sixth convolutional layer, and wherein the method further comprises:
For each second network block in the at least one second network block,
performing, via the fourth convolutional layer, a dimensionality-reduction convolution on the second group of feature maps;
performing, via the fifth convolutional layer and the sixth convolutional layer, a depthwise separable convolution on the output feature map of the fourth convolutional layer, to obtain the output feature map of the residual branch;
wherein the number of channels of the output feature map of the residual branch is equal to the number of channels of the second group of feature maps.
7. The method of claim 6, wherein the fifth convolutional layer is configured to perform a convolution with a stride of 1 on the output feature map of the fourth convolutional layer, and the first path is a skip connection from the channel split layer to the channel concatenation layer.
8. The method of any one of claims 1 to 4, wherein the neural network comprises, connected in sequence, an initial convolutional layer, a max pooling layer, a first network segment, a second network segment, a third network segment, a global pooling layer, and a fully connected layer,
wherein the first network segment comprises four network blocks of the at least one network block, the second network segment comprises eight network blocks of the at least one network block, and the third network segment comprises four network blocks of the at least one network block.
9. The method of any one of claims 1 to 4, wherein the number of channels of the first group of feature maps is equal to the number of channels of the second group of feature maps.
10. A feature map processing device for a neural network, the neural network comprising at least one network block, each network block in the at least one network block comprising a channel split layer, a first path, a second path, a channel concatenation layer, and a channel shuffle layer, the second path comprising at least one convolutional layer, the device comprising:
a channel split module configured to, for each network block in the at least one network block, receive an input feature map of the network block via the channel split layer and split the channels of the input feature map into two parts, to obtain a first group of feature maps and a second group of feature maps;
an input module configured to, for each network block in the at least one network block, input the first group of feature maps into the first path and input the second group of feature maps into the second path;
a channel concatenation module configured to, for each network block in the at least one network block, concatenate the output feature map of the first path and the output feature map of the second path along the channel dimension via the channel concatenation layer, to obtain a concatenated feature map;
a channel shuffle module configured to, for each network block in the at least one network block, perform a channel shuffle on the concatenated feature map via the channel shuffle layer, to obtain the output feature map of the network block.
11. A feature map processing system for a neural network, comprising a processor and a memory, wherein the memory stores computer program instructions that, when executed by the processor, perform the feature map processing method for a neural network of any one of claims 1 to 9.
12. A storage medium having program instructions stored thereon, the program instructions being operable, when executed, to perform the feature map processing method for a neural network of any one of claims 1 to 9.
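To make the claimed block structure concrete for implementers, the sketches below express claims 1 to 8 in PyTorch-style Python. They are informal illustrations, not part of the claims: PyTorch itself, the class and variable names, the use of batch normalization and ReLU, the 3x3 depthwise kernel size, and the channel-preserving 1x1 convolutions are all assumptions of the sketch (claim 2 only calls the first convolution a dimensionality-reduction convolution, without fixing its output width). This first sketch covers the claim 1 pipeline (channel split, two paths, channel concatenation, channel shuffle), with the second path built as in claims 2 and 3 and the first path realized as the skip connection of claim 3.

```python
import torch
import torch.nn as nn


def channel_shuffle(x, groups=2):
    # Channel shuffle layer of claim 1: regroup channels so that the
    # feature maps of the two paths are interleaved.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class FirstNetworkBlock(nn.Module):
    """Claims 1-3: channel split -> (skip path, conv path) -> concat -> shuffle."""

    def __init__(self, channels):
        super().__init__()
        mid = channels // 2  # equal split, as in claim 9
        self.second_path = nn.Sequential(
            # First convolutional layer: 1x1 convolution (claim 2).
            nn.Conv2d(mid, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            # Second convolutional layer: stride-1 depthwise convolution (claim 3).
            nn.Conv2d(mid, mid, kernel_size=3, stride=1, padding=1,
                      groups=mid, bias=False),
            nn.BatchNorm2d(mid),
            # Third convolutional layer: 1x1 pointwise convolution (claim 2).
            nn.Conv2d(mid, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Channel split layer: cut the input channels into two groups (claim 1).
        first_group, second_group = x.chunk(2, dim=1)
        # First path is a skip connection (claim 3); second path convolves.
        out = torch.cat([first_group, self.second_path(second_group)], dim=1)
        # Channel shuffle layer produces the block's output feature map.
        return channel_shuffle(out, groups=2)
```

As a quick shape check, `FirstNetworkBlock(116)(torch.randn(1, 116, 28, 28))` returns a tensor of the same shape: the skip path and the convolutional path each carry half of the 116 channels, and the shuffle only permutes them.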
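Claim 4 covers the downsampling variant: the second convolutional layer runs with a stride of 2, and the first path is no longer a plain skip but a pooling layer with a stride of 2. A sketch under the same assumptions follows, reusing `channel_shuffle` from the previous sketch; the claim does not fix the pooling type, so max pooling here is an assumption.

```python
class DownsamplingBlock(nn.Module):
    """Claim 4: stride-2 conv path plus a stride-2 pooling first path."""

    def __init__(self, channels):
        super().__init__()
        mid = channels // 2
        # First path: pooling layer with stride 2 (pooling type assumed).
        self.first_path = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.second_path = nn.Sequential(
            nn.Conv2d(mid, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            # Second convolutional layer with stride 2 (claim 4).
            nn.Conv2d(mid, mid, kernel_size=3, stride=2, padding=1,
                      groups=mid, bias=False),
            nn.BatchNorm2d(mid),
            nn.Conv2d(mid, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        first_group, second_group = x.chunk(2, dim=1)
        out = torch.cat([self.first_path(first_group),
                         self.second_path(second_group)], dim=1)
        return channel_shuffle(out, groups=2)
```

Both paths halve the spatial resolution, so the two output groups can be concatenated along the channel dimension exactly as in the stride-1 block.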
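Claims 5 to 7 describe the second block type, in which the second path wraps its convolution stack in a residual structure: the fourth, fifth, and sixth convolutional layers form the residual branch, and a skip connection branch feeds the unmodified second group into an element-wise addition. The following sketch keeps the earlier assumptions; claim 6 requires only that the residual branch preserve the channel count of the second group, which the 1x1 / depthwise / 1x1 stack below does.

```python
class SecondNetworkBlock(nn.Module):
    """Claims 5-7: the second path is a residual structure."""

    def __init__(self, channels):
        super().__init__()
        mid = channels // 2
        # Residual branch: fourth, fifth, and sixth convolutional layers
        # (claim 6); its output keeps the channel count of the second group.
        self.residual_branch = nn.Sequential(
            nn.Conv2d(mid, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, stride=1, padding=1,
                      groups=mid, bias=False),  # fifth conv, stride 1 (claim 7)
            nn.BatchNorm2d(mid),
            nn.Conv2d(mid, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
        )

    def forward(self, x):
        first_group, second_group = x.chunk(2, dim=1)
        # Skip connection branch carries second_group unchanged; element-wise
        # addition gives the second path's output (claim 5).
        second_out = second_group + self.residual_branch(second_group)
        out = torch.cat([first_group, second_out], dim=1)
        return channel_shuffle(out, groups=2)
```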
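Finally, claim 8 fixes the macro-architecture: an initial convolutional layer, a max pooling layer, three network segments containing 4, 8, and 4 network blocks, a global pooling layer, and a fully connected layer, connected in sequence. The sketch below wires the blocks defined above into that layout. The kernel sizes, strides, channel width, starting each segment with a downsampling block, and using average pooling as the global pooling are all assumptions; the claim specifies only the layer sequence and the block counts.

```python
class FeatureMapNetwork(nn.Module):
    """Claim 8 layout: conv -> maxpool -> 4 + 8 + 4 blocks -> pool -> fc."""

    def __init__(self, num_classes=1000, width=128):
        super().__init__()
        self.initial = nn.Sequential(
            nn.Conv2d(3, width, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(width),
            nn.ReLU(inplace=True),
        )
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.segment1 = self._make_segment(4, width)   # first network segment
        self.segment2 = self._make_segment(8, width)   # second network segment
        self.segment3 = self._make_segment(4, width)   # third network segment
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(width, num_classes)

    @staticmethod
    def _make_segment(num_blocks, width):
        # Leading each segment with a downsampling block is an assumption;
        # the claim only fixes the block counts 4 / 8 / 4.
        blocks = [DownsamplingBlock(width)]
        blocks += [FirstNetworkBlock(width) for _ in range(num_blocks - 1)]
        return nn.Sequential(*blocks)

    def forward(self, x):
        x = self.maxpool(self.initial(x))
        x = self.segment3(self.segment2(self.segment1(x)))
        x = self.global_pool(x).flatten(1)
        return self.fc(x)
```

Running `FeatureMapNetwork()(torch.randn(1, 3, 224, 224))` yields a `(1, 1000)` logit tensor, tracing the claim 8 pipeline end to end.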
CN201810932282.7A 2018-08-16 2018-08-16 Characteristic pattern processing method, device and system and storage medium for neural network Pending CN109299722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810932282.7A CN109299722A (en) 2018-08-16 2018-08-16 Characteristic pattern processing method, device and system and storage medium for neural network

Publications (1)

Publication Number Publication Date
CN109299722A (en) 2019-02-01

Family

ID=65165125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810932282.7A Pending CN109299722A (en) 2018-08-16 2018-08-16 Characteristic pattern processing method, device and system and storage medium for neural network

Country Status (1)

Country Link
CN (1) CN109299722A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107247949A (en) * 2017-08-02 2017-10-13 北京智慧眼科技股份有限公司 Deep-learning-based face recognition method, device and electronic equipment
CN108154192A (en) * 2018-01-12 2018-06-12 西安电子科技大学 High-resolution SAR terrain classification method based on multi-scale convolution and feature fusion
CN108288075A (en) * 2018-02-02 2018-07-17 沈阳工业大学 Lightweight small object detection method based on improved SSD
CN108401154A (en) * 2018-05-25 2018-08-14 同济大学 No-reference quality evaluation method for image exposure

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NINGNING MA et al.: "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design", arXiv:1807.11164v1 *
XIANGYU ZHANG et al.: "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices", arXiv:1707.01083v2 *
WU, TIANSHU et al.: "Lightweight small object detection algorithm based on improved SSD", Infrared and Laser Engineering *
WANG, LEI et al.: "Survey of deep neural network model compression techniques for embedded applications", Journal of Beijing Jiaotong University *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116032A (en) * 2019-06-21 2020-12-22 富士通株式会社 Object detection device and method and terminal equipment
JP2021002333A (en) * 2019-06-21 2021-01-07 富士通株式会社 Object detection device, object detection method, and terminal equipment
JP7428075B2 (en) 2019-06-21 2024-02-06 富士通株式会社 Object detection device, object detection method and terminal equipment
CN110533161A (en) * 2019-07-24 2019-12-03 特斯联(北京)科技有限公司 Feature map processing method based on hierarchical group convolutional neural network
CN110533161B (en) * 2019-07-24 2022-05-20 特斯联(北京)科技有限公司 Feature map processing method based on hierarchical group convolution neural network
CN110543900A (en) * 2019-08-21 2019-12-06 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110910434A (en) * 2019-11-05 2020-03-24 东南大学 Energy-efficient FPGA-based implementation of a deep learning disparity estimation algorithm
CN110910434B (en) * 2019-11-05 2023-05-12 东南大学 Energy-efficient FPGA-based implementation of a deep learning disparity estimation algorithm
CN111784555B (en) * 2020-06-16 2023-08-25 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN111784555A (en) * 2020-06-16 2020-10-16 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
WO2022183345A1 (en) * 2021-03-01 2022-09-09 浙江大学 Encoding method, decoding method, encoder, decoder, and storage medium
CN113011384A (en) * 2021-04-12 2021-06-22 重庆邮电大学 Anchor-free object detection method based on lightweight convolution
CN113011384B (en) * 2021-04-12 2022-11-25 重庆邮电大学 Anchor-free object detection method based on lightweight convolution

Similar Documents

Publication Publication Date Title
CN109299722A (en) Characteristic pattern processing method, device and system and storage medium for neural network
CN108595585B (en) Sample data classification method, model training method, electronic equipment and storage medium
KR102224510B1 (en) Systems and methods for data management
CN108875486A Object recognition method, apparatus, system and computer-readable medium
CN108875523A Human joint point detection method, device, system and storage medium
Rybakken et al. Decoding of neural data using cohomological feature extraction
CN108875722A (en) Character recognition and identification model training method, device and system and storage medium
CN108875932A (en) Image-recognizing method, device and system and storage medium
McClure et al. Representational distance learning for deep neural networks
CN108875521A (en) Method for detecting human face, device, system and storage medium
CN108875732A Model training and instance segmentation method, device and system, and storage medium
CN108780519A (en) Structure learning in convolutional neural networks
CN108764133A (en) Image-recognizing method, apparatus and system
CN108875487A Pedestrian re-identification network training and pedestrian re-identification based thereon
CN108664897A (en) Bank slip recognition method, apparatus and storage medium
CN109155006A Frequency-based audio analysis using neural networks
CN106529511A (en) Image structuring method and device
CN109117953A (en) Network parameter training method and system, server, client and storage medium
CN106447721A (en) Image shadow detection method and device
CN108876804A Image matting model training and image matting method, device and system, and storage medium
CN108876793A (en) Semantic segmentation methods, devices and systems and storage medium
CN108734052A Character detection method, device and system
CN108108662A (en) Deep neural network identification model and recognition methods
CN108875492A Face detection and keypoint localization method, device, system and storage medium
CN108009466A (en) Pedestrian detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190201