WO2021208667A1 - Image processing method and apparatus, electronic device and storage medium - Google Patents
Image processing method and apparatus, electronic device and storage medium
- Publication number
- WO2021208667A1 (PCT/CN2021/081782)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- level
- feature map
- scale
- fusion
- feature
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the present disclosure relates to the field of computer technology, and in particular to an image processing method and device, electronic equipment, and storage medium.
- the present disclosure proposes a technical solution for image processing.
- an image processing method, which includes: performing M-level feature extraction on an image to be processed to obtain an M-level first feature map of the image to be processed, where the scales of the first feature maps at each level are different and M is an integer greater than 1;
- performing scale adjustment and fusion on the feature map groups corresponding to the first feature maps of each level to obtain an M-level second feature map, where each feature map group includes the first feature map and a first feature map adjacent to that first feature map; and performing target detection on the M-level second feature map to obtain a target detection result of the image to be processed.
- the feature map group corresponding to the first feature map of the i-th level includes the first feature map of the i-1th level, the first feature map of the i-th level, and the first feature map of the i+1th level.
- i is an integer and 1 < i < M.
- the performing scale adjustment and fusion on the feature map groups corresponding to the first feature maps of each level to obtain the M-level second feature map includes: reducing the scale of the i-1th level first feature map to obtain a first i-th level third feature map; performing scale-invariant transformation on the i-th level first feature map to obtain a second i-th level third feature map; enlarging the scale of the i+1th level first feature map to obtain a third i-th level third feature map; and fusing the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map to obtain the i-th level second feature map, where the scales of the first, second, and third i-th level third feature maps are the same.
- in this way, the larger-scale i-1th level first feature map can be reduced to the same scale as the i-th level first feature map, and the smaller-scale
- i+1th level first feature map can be enlarged to the same scale as the i-th level first feature map, so as to unify the scales of the feature maps in the feature map group.
- the feature map group corresponding to the first feature map of the first level includes the first feature map of the first level and the first feature map of the second level.
- the performing scale adjustment and fusion on the feature map groups to obtain the M-level second feature map includes: performing scale-invariant transformation on the first-level first feature map to obtain a first first-level third feature map; enlarging the scale of the second-level first feature map to obtain a second first-level third feature map; and fusing the first first-level third feature map and the second first-level
- third feature map to obtain the first-level second feature map, where the scales of the first first-level third feature map and the second first-level third feature map are the same.
- for the first-level first feature map, there is no previous-level feature map, so only the first-level first feature map itself and the adjacent second-level first feature map are processed;
- the scale of the resulting first first-level third feature map is the same as the scale of the second first-level third feature map.
- the first and second level 1 third feature maps can be added to obtain the first level second feature map. In this way, the fusion of adjacent feature maps of the first level can be achieved.
- the feature map group corresponding to the first feature map of the M level includes the first feature map of the M-1 level and the first feature map of the M level.
- the performing scale adjustment and fusion on the corresponding feature map group to obtain the M-level second feature map includes: reducing the scale of the M-1th level first feature map to obtain a first M-th level third feature map; performing scale-invariant transformation on the M-th level first feature map to obtain a second M-th level third feature map; and fusing the first M-th level third feature map and the second M-th level third feature map to obtain the M-th level second feature map, where the scale of the first M-th level third feature map is the same as the scale of the second M-th level third feature map.
- the first M-th level third feature map has the same scale as the second M-th level third feature map.
- the first and second M-th level third feature maps can be added to obtain the M-th level second feature map. In this way, the fusion of adjacent feature maps of the M-th level can be achieved.
- the reducing the scale of the i-1th level first feature map to obtain the first i-th level third feature map includes: convolving the i-1th level first feature map through a first convolutional layer to obtain the first i-th level third feature map, where the size of the convolution kernel of the first convolutional layer is N×N, the step size is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map; the performing scale-invariant transformation on the i-th level first feature map to obtain the second i-th level third feature map includes: convolving the i-th level first feature map through a second convolutional layer to obtain the second i-th level third feature map, where the size of the convolution kernel of the second convolutional layer is N×N and the step size is 1;
- the enlarging the scale of the i+1th level first feature map to obtain the third i-th level third feature map includes: convolving and up-sampling the i+1th level first feature map through a third convolutional layer and an up-sampling layer to obtain the third i-th level third feature map,
- where the size of the convolution kernel of the third convolutional layer is N×N and the step size is 1.
- the performing scale-invariant transformation on the first-level first feature map to obtain the first first-level third feature map includes: convolving the first-level first feature map through the second convolutional layer to obtain the first first-level third feature map,
- where the size of the convolution kernel of the second convolutional layer is N×N, the step size is 1, and N is an integer greater than 1;
- the scaling up of the second-level first feature map to obtain the second first-level third feature map includes: convolving and up-sampling the second-level first feature map through the third convolutional layer and the up-sampling layer to obtain the second first-level third feature map,
- where the size of the convolution kernel of the third convolutional layer is N×N and the step size is 1.
- the reducing the scale of the M-1th level first feature map to obtain the first M-th level third feature map includes: convolving the M-1th level first feature map through the first convolutional layer to obtain the first M-th level third feature map,
- where the size of the convolution kernel of the first convolutional layer is N×N, the step size is n, N and n are integers greater than 1, and the scale of the M-1th level first feature map is n times the scale of the M-th level first feature map; the performing scale-invariant transformation on the M-th level first feature map to obtain the second M-th level third feature map includes: convolving the M-th level first feature map through the second convolutional layer to obtain the second M-th level third feature map, where the size of the convolution kernel of the second convolutional layer is N×N and the step size is 1.
- the second convolutional layer and the third convolutional layer include deformable convolutional layers or dilated (atrous) convolutional layers.
- for a deformable convolutional layer, an additional convolutional layer can be set to learn an offset; the input feature map and the offset are then used together as the input of the deformable convolutional layer, the sampling points of the operation are shifted, and convolution is then performed.
- for a dilated convolutional layer, the dilation rate can be preset to adjust the receptive field of the convolution adaptively and further improve the effect of feature map fusion.
- the method is implemented by an image processing network
- the image processing network includes P levels of fusion network blocks connected in series, configured to perform P rounds of scale adjustment and fusion on the M-level first feature map;
- each level of fusion network block includes multiple first convolutional layers, multiple second convolutional layers, and multiple third convolutional layers, and P is a positive integer;
- the performing scale adjustment and fusion on the feature map groups corresponding to the first feature maps of each level to obtain the M-level second feature map includes: inputting the M-level first feature map into the first-level fusion network block and outputting the first-fused M-level fourth feature map;
- inputting the j-1th fused M-level fourth feature map into the j-th level fusion network block and outputting the j-th fused M-level fourth feature map,
- where j is an integer and 1 < j < P;
- and inputting the P-1th fused M-level fourth feature map into the P-th level fusion network block and outputting the M-level second feature map.
- the fusion effect can be further improved by processing the image through the P-level fusion network block connected in series.
- each level of fusion network block further includes a normalization layer, and the inputting the j-1th fused M-level fourth feature map into the j-th level fusion network block and outputting the j-th
- fused M-level fourth feature map includes: performing, through the first convolutional layers, the second convolutional layers, and the third convolutional layers of the j-th level fusion network block, scale adjustment and fusion
- respectively on the feature map groups corresponding to the j-1th fused M-level fourth feature map to obtain the j-th fused M-level intermediate feature map; and performing, through the normalization layer, joint batch normalization
- on the j-th fused M-level intermediate feature map to obtain the j-th fused M-level fourth feature map.
- the method is implemented by an image processing network; the image processing network further includes a regression network and a classification network, and the performing target detection on the M-level second feature map to obtain the
- target detection result of the image to be processed includes: inputting the M-level second feature map into the regression network to determine the image frame corresponding to the target in the image to be processed; and inputting the M-level second feature map into the classification
- network to determine the category of the target in the image to be processed, where the target detection result includes the image frame corresponding to the target and the category of the target.
- the regression network and the classification network are used to implement the regression task and the classification task in the target detection, respectively.
- an image processing apparatus, including: a feature extraction module configured to perform M-level feature extraction on an image to be processed to obtain an M-level first feature map of the image to be processed, where the scales of the first feature maps at each level are different and M is an integer greater than 1;
- a scale adjustment and fusion module configured to perform scale adjustment and fusion on the feature map groups corresponding to the first feature maps of each level to obtain an M-level second feature map, where each feature map group includes the first feature map and a first feature map adjacent to that first feature map; and a target detection module configured to perform target detection on the M-level second feature map to obtain a target detection result of the image to be processed.
- an electronic device including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to call the instructions stored in the memory to execute the foregoing method.
- a computer-readable storage medium having computer program instructions stored thereon, and the computer program instructions implement the above-mentioned method when executed by a processor.
- a computer program product includes one or more instructions, and the one or more instructions are suitable for implementing the above-mentioned image processing method when executed by a processor.
- M-level feature extraction can be performed on the image to be processed to obtain the M-level first feature map; each first feature map is fused with its adjacent feature maps to obtain the M-level second feature map;
- and target detection on the second feature map yields the target detection result, so that the relevant information of features between adjacent layers of the M-level first feature map is merged and the effect of target detection is effectively improved.
- Fig. 1a shows a flowchart of an image processing method according to an embodiment of the present disclosure.
- Figure 1b is a schematic diagram of four different methods for generating multi-dimensional feature combinations.
- Figure 1c is a schematic diagram of the working principle of the deformable convolutional layer.
- FIGS. 2a and 2b show schematic diagrams of batch normalization according to the related art.
- Figure 2c shows a schematic diagram of joint batch normalization according to an embodiment of the present disclosure.
- Fig. 3a shows a schematic diagram of a detector according to the related art.
- Fig. 3b shows a schematic diagram of an image processing network according to an embodiment of the present disclosure.
- Fig. 4 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
- Fig. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 1a shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in Fig. 1a, the method includes:
- in step S11, M-level feature extraction is performed on the image to be processed to obtain an M-level first feature map of the image to be processed,
- where the scales of the first feature maps at each level in the M-level first feature map are different, and M is an integer greater than 1;
- in step S12, scale adjustment and fusion are performed respectively on the feature map groups corresponding to the first feature maps at each level to obtain an M-level second feature map, where each feature map group includes the first feature map and a first feature map adjacent to that first feature map;
- in step S13, target detection is performed on the M-level second feature map to obtain a target detection result of the image to be processed.
- the image processing method may be executed by an electronic device such as a terminal device or a server; the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), or the like.
- the method can be implemented by a processor invoking computer-readable instructions stored in a memory.
- alternatively, the method can be executed by a server.
- the image to be processed may be an image including a target (for example, an object, an animal, a pedestrian, etc.), and the image to be processed may be acquired by an image acquisition device (for example, a camera), or may be acquired by other methods.
- multi-level feature extraction may be performed on the image to be processed through a feature pyramid network, and feature maps are extracted from different levels of the network to obtain an M-level first feature map of the image to be processed (which may also be called a feature pyramid), where M is an integer greater than 1 and the scales of the first feature maps at each level in the M-level first feature map are different.
- the feature pyramid network may include at least M layers of convolutional layers, pooling layers, etc. The present disclosure does not limit the network structure of the feature pyramid network. By using single-scale images for detection, memory and calculation costs can be reduced.
- Figure 1b is a schematic diagram of four different methods for generating multi-dimensional feature combinations: (a) a featurized image pyramid, (b) single-scale features, (c) a pyramidal feature hierarchy, and (d) a feature pyramid network. As shown in Figure 1b(a), the featurized image pyramid builds a feature pyramid from an image pyramid; because features are computed independently on the image at each scale, output prediction is slow.
- in Figure 1b(b), the detection system uses only single-scale features to speed up detection and output predictions.
- in Figure 1b(c), the pyramidal feature hierarchy produced by the network is reused to output predictions.
- the feature pyramid network in Figure 1b(d) is as fast as (b) and (c), but more accurate.
- the top-down process of the feature pyramid network enlarges the small feature map at the top level to the same size as the adjacent feature map through upsampling.
- the advantage of this is that it not only uses the strong semantic features of the top layer, but also uses the high-resolution information of the bottom layer.
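- as a concrete illustration of the M-level feature extraction in step S11, the following is a minimal PyTorch sketch assuming a toy backbone in which each stage halves the spatial scale (M = 3 here); the class name, channel counts, and stage structure are illustrative assumptions, not taken from the present disclosure.

```python
import torch
import torch.nn as nn

# Minimal sketch of M-level feature extraction (M = 3): each stage halves the
# spatial scale, yielding first feature maps of strictly decreasing size.
# The backbone structure and channel counts are illustrative assumptions.
class TinyPyramidBackbone(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, stride=1, padding=1)
        # Each stage is a stride-2 convolution, so level i+1 has half the
        # spatial scale of level i.
        self.stages = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(3)
        )

    def forward(self, image):
        x = torch.relu(self.stem(image))
        first_feature_maps = []
        for stage in self.stages:
            x = torch.relu(stage(x))
            first_feature_maps.append(x)  # one first feature map per level
        return first_feature_maps

backbone = TinyPyramidBackbone()
feats = backbone(torch.randn(1, 3, 512, 512))
print([tuple(f.shape) for f in feats])  # scales 256x256, 128x128, 64x64
```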
- step S12 can be used to realize the fusion between the first feature maps of each level and the adjacent first feature maps.
- the feature map groups corresponding to the first feature maps of each level can be scale-adjusted and fused respectively to obtain the M-level second feature map, where each feature map group includes
- the first feature map and the first feature maps adjacent to it. For example, for any first feature map, the scales of the adjacent 2q feature maps (that is, q feature maps before it and q after it) can be adjusted to the same scale as that first feature map, and the adjusted 2q feature maps are then added to the first feature map to obtain the second feature map corresponding to it, where q >= 1; the present disclosure does not limit the value of q.
- alternatively, the scales of the feature map group of the first feature map (including the first feature map and the adjacent 2q feature maps) can be unified to a specific scale, for example, all feature maps are enlarged to a multiple of the scale of the first feature map, or all are reduced to a fraction of that scale. The adjusted feature maps are then added together to obtain the second feature map corresponding to the first feature map.
- the present disclosure does not limit the scale range and method for adjusting the scale of the feature map group.
- in this way, correlation along the feature-map (scale) dimension and along the spatial dimensions can both be captured, and the accuracy of the feature maps obtained by fusion can be improved.
- target detection may be performed on the M-level second feature map in step S13 to obtain the target detection result of the image to be processed. For example, perform regression and classification processing on the M-level second feature map respectively. After regression processing, the image area (that is, the detection frame) where the target in the image to be processed is located can be determined; after classification processing, the category of the target in the image to be processed can be determined.
- the target detection result of the image to be processed may include the image area (that is, the detection frame) where the target is located in the image to be processed, the type of the target, and the like.
- the embodiments of the present disclosure it is possible to perform M-level feature extraction on the image to be processed to obtain an M-level first feature map; fuse each first feature map with its neighboring feature maps to obtain an M-level second feature map;
- the target detection of the second feature map obtains the target detection result, so that the relevant information of the features between the adjacent layers of the M-level first feature map can be merged, and the effect of target detection can be effectively improved.
- the scales of the first feature maps at each level in the M-level first feature map obtained in step S11 may be successively decreasing; for example, the scale of the first-level first feature map is 512×512, the scale of the second-level first feature map is 256×256, the scale of the third-level first feature map is 128×128, and so on.
- the present disclosure does not limit the value of the scale of the M-level first feature map.
- step S12 includes:
- the scales of the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map are the same.
- the i-1th level first feature map with a larger scale can be reduced to the same scale as the i-th level first feature map;
- the smaller i+1-th level first feature map is enlarged to the same scale as the i-th level first feature map, so as to unify the scales of the feature maps in the feature map group.
- the first feature map of level i-1 is scaled down to obtain the first level i third feature map; the scale-invariant transformation is performed on the first feature map of level i , Get the second i-th level third feature map; enlarge the scale of the i+1-th level first feature map to obtain the third i-th level third feature map.
- the scales of the first, second, and third i-th level third feature maps are the same.
- scale reduction can be achieved by means of convolution, down-sampling, and the like;
- scale enlargement can be achieved by means of deconvolution, up-sampling, convolution with a step size of less than 1, and the like;
- and scale-invariant transformation can be achieved by convolution or other processing methods, which is not limited in the present disclosure.
- the first, second, and third i-th level third feature maps can be added directly, or added according to preset weights, and thereby fused to obtain the i-th level second feature map,
- where the scale of the i-th level second feature map is the same as the scale of the i-th level first feature map. In this way, the fusion of adjacent feature maps can be realized, and the feature extraction effect can be improved.
- the reducing the scale of the i-1th level first feature map to obtain the first i-th level third feature map includes: convolving the i-1th level first feature map through a first convolutional layer to obtain the first i-th level third feature map, where the size of the convolution kernel of the first convolutional layer is N×N, the step size is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map;
- the performing scale-invariant transformation on the i-th level first feature map to obtain the second i-th level third feature map includes: convolving the i-th level first feature map through a second convolutional layer to obtain the second i-th level third feature map, where the size of the convolution kernel of the second convolutional layer is N×N and the step size is 1;
- the scaling up of the i+1th level first feature map to obtain the third i-th level third feature map includes: convolving and up-sampling the i+1th level first feature map through a third convolutional layer and an up-sampling layer to obtain the third i-th level third feature map,
- where the size of the convolution kernel of the third convolutional layer is N×N and the step size is 1.
- the processing of each feature map in the feature map group corresponding to the first feature map of the i-th level can be realized.
- the i-1th level first feature map may be convolved by the first convolutional layer to obtain the first i-th level third feature map.
- the size of the convolution kernel of the first convolutional layer is N×N,
- the step size is n,
- and N and n are integers greater than 1;
- the scale of the i-1th level first feature map is n times
- the scale of the i-th level first feature map, that is, the scale is reduced by convolution.
- for example, if n = 2 and the scale of the i-1th level first feature map is 256×256,
- the scale of the first i-th level third feature map obtained is 128×128.
- the value of N is 3, for example, and the present disclosure does not limit the values of N and n.
- the i-th level first feature map may be convolved through the second convolutional layer to obtain the second i-th level third feature map; the convolution kernel of the second convolutional layer
- has a size of N×N
- and a step size of 1, that is, the scale-invariant transformation is realized through convolution.
- for example, the scale of the i-th level first feature map is 128×128, and after convolution, the scale of the second i-th level third feature map is still 128×128. It should be understood that those skilled in the art can use other methods to achieve scale-invariant transformation, which is not limited in the present disclosure.
- the third convolutional layer and the up-sampling layer can perform convolution and n-times up-sampling on the i+1th level first feature map to obtain the third i-th level third feature map;
- the size of the convolution kernel of the third convolutional layer is N×N and the step size is 1, that is, the scale enlargement is achieved through convolution and up-sampling.
- for example, if the scale of the i+1th level first feature map is 64×64,
- the scale of the third i-th level third feature map obtained is 128×128. It should be understood that those skilled in the art may use other methods to achieve scale enlargement, such as deconvolution or convolution with a step size of 1/n, which is not limited in the present disclosure.
- the first, second, and third i-th level third feature maps can be directly added to obtain the i-th level second feature map.
- the whole process can be expressed as formula (1):
- $Y^{i} = \mathrm{Upsample}\big(w_{1} * x^{i+1}\big) + w_{0} * x^{i} + w_{-1} *_{s=n} x^{i-1}$ (1)
- where $Y^{i}$ denotes the i-th level second feature map; $x^{i+1}$, $x^{i}$, and $x^{i-1}$ denote the i+1th level first feature map, the i-th level first feature map, and the i-1th level first feature map, respectively; $w_{1}$, $w_{0}$, and $w_{-1}$ denote the weights of the third convolutional layer, the second convolutional layer, and the first convolutional layer, respectively; $*$ denotes the convolution operation; $s$ denotes the step size; and $\mathrm{Upsample}$ denotes the up-sampling operation.
- the process of formula (1) can be called pyramid convolution or scale space convolution.
- pyramid convolution processing the second feature map of the adjacent layer information fusion can be obtained, which can effectively improve the effect of subsequent target detection.
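- the pyramid convolution of formula (1) can be sketched as follows in PyTorch, assuming n = 2 and N = 3; the class name and channel count are illustrative assumptions. The first and M-th levels simply drop the missing branch, matching the boundary cases described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the pyramid convolution of formula (1), assuming n = 2 and N = 3.
# w_minus1 (the first convolutional layer, stride 2) reduces level i-1;
# w_0 (the second layer, stride 1) keeps the scale of level i unchanged;
# w_1 plus 2x up-sampling (the third layer) enlarges level i+1.
class PyramidConv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.w_minus1 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.w_0 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.w_1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1)

    def fuse(self, x_prev, x_i, x_next):
        y = self.w_0(x_i)
        if x_prev is not None:  # level 1 has no previous level
            y = y + self.w_minus1(x_prev)
        if x_next is not None:  # level M has no next level
            y = y + F.interpolate(self.w_1(x_next), size=y.shape[-2:],
                                  mode='nearest')
        return y

    def forward(self, feats):
        padded = [None] + list(feats) + [None]
        return [self.fuse(padded[i - 1], padded[i], padded[i + 1])
                for i in range(1, len(feats) + 1)]

pconv = PyramidConv(64)
feats = [torch.randn(1, 64, s, s) for s in (256, 128, 64)]
print([tuple(y.shape) for y in pconv(feats)])  # same scales as the inputs
```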
- the feature map group corresponding to the first-level first feature map includes the first-level first feature map and the second-level first feature map.
- step S12 includes:
- the first level 1 third feature map and the second level 1 third feature map are merged to obtain the first level second feature map
- the scale of the first level 1 third feature map is the same as the scale of the second level 1 third feature map.
- for the first-level first feature map, there is no previous-level feature map, and only the first-level first feature map itself and the adjacent second-level first feature map are processed.
- scale-invariant transformation may be performed on the first-level first feature map to obtain the first first-level third feature map, and the second-level first feature map may be scaled up to obtain the second first-level third feature map.
- among them, the scales of the first and second first-level third feature maps are the same.
- the first and second level 1 third feature maps may be added to obtain the first level second feature map. In this way, the fusion of adjacent feature maps of the first level can be achieved.
- the performing scale-invariant transformation on the first-level first feature map to obtain the first first-level third feature map includes: convolving the first-level first feature map through the second convolutional layer to obtain the first first-level third feature map;
- the size of the convolution kernel of the second convolutional layer is N×N, the step size is 1, and N is an integer greater than 1;
- the scaling up of the second-level first feature map to obtain the second first-level third feature map includes: convolving and up-sampling the second-level first feature map through the third convolutional layer and the up-sampling layer to obtain the second first-level third feature map;
- the size of the convolution kernel of the third convolutional layer is N×N and the step size is 1.
- the processing of each feature map in the feature map group corresponding to the first feature map of the first level can be realized.
- the first-level first feature map can be convolved through the second convolutional layer to obtain the first first-level third feature map, that is, the scale-invariant transformation is achieved through convolution; the third convolutional layer and the up-sampling layer perform convolution and n-times up-sampling on the second-level first feature map to obtain the second first-level third feature map, that is, the scale enlargement is achieved through convolution and up-sampling.
- the processing method is similar to the previous description, and the description will not be repeated here.
- the feature map group corresponding to the M-th level first feature map includes the M-1th level first feature map and the M-th level first feature map.
- step S12 includes:
- the scale of the first M-th level third feature map is the same as the scale of the second M-th level third feature map.
- the M-th level first feature map there is no subsequent level feature map, and only the M-th level first feature map itself and the adjacent M-1 level first feature map can be processed.
- the M-1th level first feature map can be scaled down to obtain the first M-th level third feature map, and scale-invariant transformation can be performed on the M-th level first feature map to obtain the second M-th level third feature map. Among them, the scales of the first and second M-th level third feature maps are the same.
- the first and second M-th level third feature maps may be added to obtain the M-th level second feature map. In this way, the fusion of adjacent feature maps of the M-th level can be achieved.
- the reducing the scale of the M-1th level first feature map to obtain the first M-th level third feature map includes: convolving the M-1th level first feature map through the first convolutional layer to obtain the first M-th level third feature map;
- the size of the convolution kernel of the first convolutional layer is N×N, the step size is n, N and n are integers greater than 1, and the scale of the M-1th level first feature map is n times the scale of the M-th level first feature map;
- the performing scale-invariant transformation on the M-th level first feature map to obtain the second M-th level third feature map includes: convolving the M-th level first feature map through the second convolutional layer to obtain the second M-th level third feature map, where the size of the convolution kernel of the second convolutional layer is N×N and the step size is 1.
- the processing of each feature map in the feature map group corresponding to the first feature map of the M-th level can be realized.
- the M-1th level first feature map can be convolved through the first convolutional layer to obtain the first M-th level third feature map, that is, the scale is reduced through convolution;
- and the M-th level first feature map is convolved through the second convolutional layer to obtain the second M-th level third feature map, that is, the scale-invariant transformation is realized through convolution.
- the processing method is similar to the previous description, and the description will not be repeated here. In this way, the scale of each feature map in the feature map group can be unified for subsequent fusion.
- the second convolutional layer and the third convolutional layer include deformable convolutional layers or dilated (atrous) convolutional layers.
- FIG. 1c is a schematic diagram of the working principle of the deformable convolutional layer, including an input feature map 11, a deformable convolution layer 12, a convolution 13, an offset 14 and an output feature map 15.
- an additional convolution 13 is used to learn the offset 14, taking the same input feature map 11 as its input.
- the input feature map 11 and the offset 14 are jointly used as the input of the deformable convolution layer 12, the operation sampling point is offset, and then convolution is performed to obtain the output feature map 15.
- the ordinary convolution in the pyramid convolution can be replaced with a deformable convolution or a dilated convolution that shares its weights with the bottom-level convolution; this dynamically adjusts the receptive field at different positions of the feature map, achieving alignment with the ordinary convolution on the bottom-level feature map.
- the adjusted pyramid convolution can be called a scale-balanced pyramid convolution.
- for example, the first convolutional layer corresponding to the i-1th level first feature map is an ordinary convolution, while the second convolutional layer corresponding to the i-th level first feature map
- and the third convolutional layer corresponding to the i+1th level first feature map are deformable convolutions or dilated convolutions.
- for a deformable convolutional layer, an additional convolutional layer can be provided to learn the offset; the input feature map and the offset are then used together as the input of the deformable convolutional layer, the sampling points of the operation are shifted, and convolution is then performed.
- for a dilated convolutional layer, the dilation rate can be preset to adjust the receptive field of the convolution adaptively;
- the present disclosure does not limit the setting of the dilation rate.
- the receptive field of convolution can be adjusted adaptively, and the effect of feature map fusion can be further improved.
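- the deformable branch of Figure 1c and the dilated ("hollow") alternative can be sketched as follows, assuming torchvision's DeformConv2d for the deformable convolution; the class name, channel counts, and dilation rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

# Sketch of the deformable convolution of Figure 1c: an additional ordinary
# convolution learns per-position offsets from the input feature map, and the
# feature map and offsets together drive the deformable convolution, shifting
# the sampling points before convolving.
class DeformableBranch(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # 2 offset values (dy, dx) per kernel sampling point
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=kernel_size // 2)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=kernel_size // 2)

    def forward(self, x):
        offset = self.offset_conv(x)        # learned sampling-point shifts
        return self.deform_conv(x, offset)  # convolve at the shifted points

# The dilated alternative: a preset dilation rate enlarges the receptive
# field; padding = dilation keeps the output scale unchanged.
dilated = nn.Conv2d(64, 64, 3, padding=2, dilation=2)

x = torch.randn(1, 64, 128, 128)
print(DeformableBranch(64)(x).shape, dilated(x).shape)  # both stay 128x128
```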
- the image processing method according to the embodiments of the present disclosure may be implemented by an image processing network, and the image processing network may include a feature pyramid network for performing multi-level feature extraction on the image to be processed.
- the image processing network may include P levels of fusion network blocks connected in series for performing P rounds of scale adjustment and fusion on the M-level first feature map; each level of fusion network block includes multiple first convolutional layers, multiple second convolutional layers, and multiple third convolutional layers, and P is a positive integer.
- the process of scale adjustment and fusion can be performed multiple times, and the process can be implemented by a P-level fusion network block.
- each level of fusion network block (which may be referred to as PConv for short) includes multiple first convolutional layers,
- multiple second convolutional layers, and multiple third convolutional layers, which are respectively used to process each feature map group composed of adjacent feature maps.
- the value of P is, for example, 4.
- the present disclosure does not limit the value of P.
- each level of fusion network block can process multiple feature map groups, and each feature map group corresponds to a set of convolutional layers used to convolve each feature map in the group.
- for example, for a feature map group that includes the i-1th level first feature map, the i-th level first feature map, and the i+1th level first feature map,
- the set of convolutional layers corresponding to the feature map group includes the first
- convolutional layer, the second convolutional layer, and the third convolutional layer with its up-sampling layer, which convolve the i-1th level first feature map, the i-th level first feature map, and the i+1th level first feature
- map, respectively.
- step S12 may include:
- the M-level fourth feature map merged at the P-1th time is input into the P-level fusion network block, and the M-level second feature map is output.
- the M-level first feature map can be input into the first-level fusion network block, the first scale adjustment and fusion can be performed, and the M-level fourth feature map of the first fusion can be output;
- the fourth feature map of the M level is input to the next level of fusion network block.
- the M-level fourth feature map of the j-1th fusion can be input into the j-th fusion network block, and the j-th scale adjustment and fusion can be performed, and the M-level fourth feature map of the j-th fusion can be output, where j is an integer and 1 ⁇ j ⁇ P.
- the M-level fourth feature map of the P-1 fusion can be input into the P-level fusion network block, the P-th scale adjustment and fusion can be performed, and the M-level second feature map can be output.
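- the serial arrangement of the P levels of fusion network blocks can be sketched as follows, reusing the PyramidConv sketch above; P = 4 and the class name are illustrative assumptions.

```python
import torch.nn as nn

# Sketch of P serially connected fusion network blocks: block 1 consumes the
# M-level first feature map, each later block consumes the previous block's
# fourth feature maps, and the last block's output is the M-level second
# feature map. Assumes the PyramidConv sketch defined earlier is in scope.
class FusionHead(nn.Module):
    def __init__(self, channels, P=4):
        super().__init__()
        self.blocks = nn.ModuleList(PyramidConv(channels) for _ in range(P))

    def forward(self, first_feature_maps):
        feats = first_feature_maps
        for block in self.blocks:  # P rounds of scale adjustment and fusion
            feats = block(feats)
        return feats               # M-level second feature map
```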
- each level of fusion network block further includes a normalization layer for normalizing the feature map after the fusion.
- the inputting the j-1th fused M-level fourth feature map into the j-th level fusion network block and outputting the j-th fused M-level fourth feature map may include:
- performing scale adjustment and fusion respectively on the feature map groups corresponding to the j-1th fused M-level fourth feature map to obtain the j-th fused M-level intermediate feature map;
- that is, the first convolutional layers, the second convolutional layers, and the third convolutional layers of the j-th level fusion network block are used to perform scale adjustment and fusion respectively on the
- feature map groups corresponding to the j-1th fused M-level fourth feature map, obtaining the j-th fused M-level intermediate feature map.
- formula (2) is the normalized network response: $y_k = \mathrm{BN}_{\gamma,\beta}(x_k)$;
- formula (3) is the mean of the batch data: $\mu_B = \frac{1}{m}\sum_{k=1}^{m} x_k$;
- formula (4) is the variance of the batch data: $\sigma_B^2 = \frac{1}{m}\sum_{k=1}^{m}(x_k - \mu_B)^2$;
- formula (5) is the normalization: $\hat{x}_k = (x_k - \mu_B)\big/\sqrt{\sigma_B^2 + \epsilon}$;
- formula (6) is the scale transformation and offset: $y_k = \gamma \hat{x}_k + \beta$.
- the j-th level fusion network block can process the multiple feature map groups corresponding to the j-1th fused M-level fourth feature map, and each feature map group corresponds to a set of convolutional layers
- used to convolve each feature map in the group. For example, for a feature map group that includes the i-1th level, i-th level, and i+1th level feature maps, the corresponding set of convolutional layers includes the first
- convolutional layer, the second convolutional layer, and the third convolutional layer with its up-sampling layer, which convolve the i-1th level, i-th level, and i+1th level feature
- maps, respectively.
- the statistics (such as mean and variance) of the j-th fused M-level intermediate feature map are counted jointly by the normalization layer, and joint batch normalization is performed on the j-th fused M-level intermediate feature map;
- the normalized result is determined as the j-th fused M-level fourth feature map.
- Figures 2a and 2b show schematic diagrams of batch normalization according to related technologies
- Figure 2c shows a schematic diagram of joint batch normalization according to an embodiment of the present disclosure.
- after processing by the convolutional layer 21, multiple feature maps are output (Figures 2a, 2b, and 2c take two feature maps as an example for illustration); a batch normalization layer (abbreviated as BN) 22 can be used to batch-normalize
- each feature map separately; after batch normalization, activation can be performed through an activation layer (for example, a ReLU layer) 23.
- γ and β respectively represent the scaling factor and the offset coefficient, which are obtained through learning;
- μ and σ respectively represent the mean and the standard deviation, which are obtained through statistics.
- as shown in Figure 2a, the two batch normalization layers 22 can share the scaling factor γ and the offset coefficient β while counting the mean μ and standard deviation σ of each feature map separately; as shown in Figure 2b, the two batch normalization layers 22 can learn the scaling factor γ and the offset coefficient β separately and also count the mean μ and standard deviation σ of each feature map separately.
- as shown in Figure 2c, in joint batch normalization, the two batch normalization layers 22 share the scaling factor γ and the offset coefficient β, and jointly count the mean μ and the standard deviation σ of all feature maps.
- the training process can be effectively stabilized and the performance can be further improved.
- the joint batch normalization can achieve good results.
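- a training-time sketch of the joint batch normalization of Figure 2c follows: a single shared γ/β pair, with μ and σ counted jointly over all scale levels by flattening every level's pixels into one per-channel batch. The class name is illustrative, and the running statistics needed for inference are omitted.

```python
import torch
import torch.nn as nn

# Joint batch normalization sketch: shared gamma/beta, joint mean/variance
# over the feature maps of all M levels (training-time statistics only).
class JointBatchNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(channels))   # shared scaling factor
        self.beta = nn.Parameter(torch.zeros(channels))   # shared offset
        self.eps = eps

    def forward(self, feats):
        # Gather every pixel of every level into one (num_pixels, C) tensor.
        flat = torch.cat([f.permute(0, 2, 3, 1).reshape(-1, f.shape[1])
                          for f in feats], dim=0)
        mu = flat.mean(dim=0)                   # joint mean over all levels
        var = flat.var(dim=0, unbiased=False)   # joint variance over all levels

        def norm(f):
            f_hat = (f - mu[None, :, None, None]) / torch.sqrt(
                var[None, :, None, None] + self.eps)
            return (self.gamma[None, :, None, None] * f_hat
                    + self.beta[None, :, None, None])

        return [norm(f) for f in feats]

jbn = JointBatchNorm(64)
outs = jbn([torch.randn(2, 64, s, s) for s in (64, 32, 16)])
print([tuple(o.shape) for o in outs])
```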
- the image processing network may further include a regression network and a classification network, which are respectively used to implement the regression task and the classification task in target detection.
- the regression network and the classification network may include a convolutional layer, an activation layer, a fully connected layer, etc., and the present disclosure does not limit the network structure of the regression network and the classification network.
- step S13 may include:
- the M-level second feature map is input into the classification network to determine the target category in the image to be processed, and the target detection result includes the image frame corresponding to the target and the target category.
- the regression task and the classification task in the target detection can be realized according to the M-level second feature map.
- the M-level second feature map can be input into the regression network for processing, and the image frame corresponding to the target in the image to be processed can be regressed; the M-level second feature map can be input into the classification network for processing to determine the target category in the image to be processed .
- the target detection result of the image to be processed may include the image frame corresponding to the target and the category of the target.
- detectors in the related art usually design separate regression heads and classification heads for the regression task and the classification task.
- in the embodiments of the present disclosure, the P-level fusion network block (using pyramid convolution) is used as a combined head for the regression task and the classification task, and, to account for the slight difference in receptive field between the two tasks, only an unshared convolution is added to each of the regression network and the classification network, which can greatly reduce the amount of calculation without loss of performance.
- Fig. 3a shows a schematic diagram of a detector according to the related art
- Fig. 3b shows a schematic diagram of an image processing network according to an embodiment of the present disclosure.
- the detector in the related technology designs a regression head 31 and a classification head 32 for regression tasks and classification tasks, respectively, and processes the feature maps through multi-level network blocks (such as convolution blocks).
- the network block at the last level realizes the regression task and the classification task respectively.
- the regression task obtains the 4 vertex coordinates of the detection frame of the K targets in the image;
- the classification task obtains the categories of the K targets in the image (supposing there are C categories in total).
- each level of network block may include a convolutional layer, an activation layer, a fully connected layer, etc., which is not limited in the present disclosure.
- the P-level fusion network block (which can be called a PConv block) is used as the combined head 33 of the regression task and the classification task; the M-level first feature map is processed by the combined head 33 to obtain an M-level second feature map. The M-level second feature map is then input into the network blocks of the additional heads 34 of the regression network and the classification network respectively, and the regression task and the classification task are realized in the last-level network blocks (including convolutional layers, activation layers, fully connected layers, etc.).
- the additional heads 34 of the regression network and the classification network may each include at least one convolutional layer. Different convolution parameters can be set for the convolutional layers of the two additional heads 34 according to the slight difference in the receptive fields of the regression task and the classification task, which is not limited in the present disclosure.
- the regression task obtains the coordinates of the 4 vertices of the detection frame of the K targets in the image; the classification task obtains the categories of the K targets in the image (suppose there are a total of C categories).
- the present disclosure does not limit the network structure of the additional heads 34 or of the last-level network blocks.
- the image processing network according to the embodiment of the present disclosure can greatly reduce the amount of calculation without loss of performance.
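- putting the pieces together, the combined head of Figure 3b can be sketched as follows, building on the FusionHead sketch above; the number of anchors per location, the number of classes C, and all channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the combined head 33 plus the unshared additional heads 34: the
# shared P-level fusion blocks produce the M-level second feature map, and two
# small unshared branches regress 4 box coordinates per anchor and score C
# classes per anchor. Assumes the FusionHead sketch defined earlier.
class DetectionHead(nn.Module):
    def __init__(self, channels=64, num_anchors=9, num_classes=80):
        super().__init__()
        self.combined = FusionHead(channels, P=4)  # shared combined head
        # Unshared extra convolutions absorb the slight receptive-field
        # difference between the regression and classification tasks.
        self.reg_extra = nn.Conv2d(channels, channels, 3, padding=1)
        self.cls_extra = nn.Conv2d(channels, channels, 3, padding=1)
        self.reg_out = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)
        self.cls_out = nn.Conv2d(channels, num_anchors * num_classes, 3, padding=1)

    def forward(self, first_feature_maps):
        second = self.combined(first_feature_maps)  # M-level second feature map
        boxes = [self.reg_out(torch.relu(self.reg_extra(f))) for f in second]
        scores = [self.cls_out(torch.relu(self.cls_extra(f))) for f in second]
        return boxes, scores
```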
- before applying the image processing network according to the embodiments of the present disclosure, the image processing network may be trained. That is, the sample images in the training set are input into the image processing network, and the sample target detection results of the sample images are obtained through the feature pyramid network, the P-level fusion network block, the regression network, and the classification network; the network loss is determined according to the differences between the sample target detection results and the annotation results of multiple sample images; the parameters of the image processing network are adjusted according to the network loss; and when the training conditions (such as network convergence) are met, the trained image processing network is obtained.
- the present disclosure does not limit the training process.
- a three-dimensional form of convolution, pyramid convolution, is proposed, which attends simultaneously to correlation along the feature-map (scale) dimension and the spatial dimensions.
- according to the image processing method of the embodiments of the present disclosure, pyramid convolution with a large spatial scale can fuse the relevant information of features between adjacent layers in the feature pyramid and better capture the correlation between the feature-map dimension and the spatial dimension.
- in the related art, the feature pyramid only focuses on fusing semantic information between different layers when extracting features at different scales, ignoring the correlation of features between adjacent layers.
- through the natural combination of joint batch normalization and scale-space convolution, the overall statistics of the feature maps at all scales are collected, which effectively stabilizes the training process and further improves performance, so that batch normalization can be used even when the batch size is small. This solves the problem that batch normalization has not been well applied in the field of object detection because accurate statistics cannot be obtained when the data batch is small in practical applications.
- in order to reduce the difference between an ordinary feature pyramid and a Gaussian pyramid, according to the image processing method of the embodiments of the present disclosure, the ordinary convolution can be replaced by deformable convolution, and the pyramid convolution can be improved into a scale-balanced convolution, thereby reducing the difference between the ordinary feature pyramid and the Gaussian pyramid and making the network more reasonable and efficient when processing extraction at different scales.
- the amount of calculation can be greatly reduced without loss of performance, and the inference speed can be accelerated, solving the problem of unreasonable parameter design in current feature pyramids and shared head modules.
- the image processing method according to the embodiments of the present disclosure achieves only a very small speed loss on data sets with large scale changes, bringing a huge performance improvement to single-stage detectors, and was also verified to be effective on two-stage detectors.
- the image processing method according to the embodiments of the present disclosure can be applied to scenes such as object detection and pedestrian detection, realizing detection tasks in scenes with large changes in object scale (for example, when objects appear at both close range and long range from the camera),
- while improving both detection performance and detection speed.
- the present disclosure also provides image processing devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any image processing method provided in the present disclosure.
- Fig. 4 shows a block diagram of an image processing device according to an embodiment of the present disclosure. As shown in Fig. 4, the device includes:
- the feature extraction module 41 is configured to perform M-level feature extraction on the image to be processed to obtain an M-level first feature map of the image to be processed, where the scales of the first feature maps at each level are different and M is an integer greater than 1;
- the scale adjustment and fusion module 42 is configured to perform scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain an M-level second feature map, wherein each feature map group includes a first feature map and the first feature maps adjacent to it;
- the target detection module 43 is configured to perform target detection on the M-level second feature map to obtain a target detection result of the image to be processed.
- the feature map group corresponding to the i-th level first feature map includes the i-1th level first feature map, the i-th level first feature map, and the i+1th level first feature map, where i is an integer and 1 < i < M.
- the scale adjustment and fusion module includes: a first scale reduction sub-module configured to scale down the i-1th level first feature map to obtain a first i-th level third feature map; a first transformation sub-module configured to perform a scale-invariant transformation on the i-th level first feature map to obtain a second i-th level third feature map; a first scale enlargement sub-module configured to scale up the i+1th level first feature map to obtain a third i-th level third feature map; and a first fusion sub-module configured to fuse the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map to obtain the i-th level second feature map, wherein the scales of the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map are the same.
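A minimal PyTorch sketch of this sub-module arrangement for one interior level, assuming N = 3, n = 2, a shared channel count, nearest-neighbor up-sampling, and element-wise summation as the fusion operation (the disclosure does not fix these choices):

```python
import torch.nn as nn
import torch.nn.functional as F

class PyramidFusionLevel(nn.Module):
    """Sketch of the i-th level scale adjustment and fusion (1 < i < M):
    a stride-n NxN conv shrinks level i-1, a stride-1 NxN conv keeps
    level i, a stride-1 NxN conv plus up-sampling enlarges level i+1,
    and the three same-scale results are summed."""

    def __init__(self, channels, N=3, n=2):
        super().__init__()
        pad = N // 2
        self.down = nn.Conv2d(channels, channels, N, stride=n, padding=pad)
        self.keep = nn.Conv2d(channels, channels, N, stride=1, padding=pad)
        self.up = nn.Conv2d(channels, channels, N, stride=1, padding=pad)
        self.scale = n

    def forward(self, f_prev, f_cur, f_next):
        a = self.down(f_prev)                 # first i-th level third feature map
        b = self.keep(f_cur)                  # second i-th level third feature map
        c = F.interpolate(self.up(f_next),    # third i-th level third feature map
                          scale_factor=self.scale, mode="nearest")
        return a + b + c                      # i-th level second feature map
```

With N = 3, stride n = 2, and padding 1, a level i-1 map of size 2H collapses to exactly H, so all three branches land on the scale of level i before the sum.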
- the feature map group corresponding to the level-1 first feature map includes the level-1 first feature map and the level-2 first feature map.
- the scale adjustment and fusion module includes:
- the second transformation sub-module is configured to perform a scale-invariant transformation on the level-1 first feature map to obtain a first level-1 third feature map;
- the second scale enlargement sub-module is configured to scale up the level-2 first feature map to obtain a second level-1 third feature map;
- the second fusion sub-module is configured to fuse the first level-1 third feature map and the second level-1 third feature map to obtain the level-1 second feature map, wherein the scales of the first level-1 third feature map and the second level-1 third feature map are the same.
- the feature map group corresponding to the M-th level first feature map includes the M-1th level first feature map and the M-th level first feature map.
- the scale adjustment and fusion module includes: a second scale reduction sub-module configured to scale down the M-1th level first feature map to obtain a first M-th level third feature map; a third transformation sub-module configured to perform a scale-invariant transformation on the M-th level first feature map to obtain a second M-th level third feature map; and a third fusion sub-module configured to fuse the first M-th level third feature map and the second M-th level third feature map to obtain the M-th level second feature map, wherein the scales of the first M-th level third feature map and the second M-th level third feature map are the same.
- the first scale reduction sub-module is configured to: convolve the i-1th level first feature map through a first convolution layer to obtain the first i-th level third feature map, where the convolution kernel size of the first convolution layer is N×N, the stride is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map.
- the first transformation sub-module is configured to: convolve the i-th level first feature map through a second convolution layer to obtain the second i-th level third feature map, where the convolution kernel size of the second convolution layer is N×N and the stride is 1.
- the first scale enlargement sub-module is configured to: convolve and up-sample the i+1th level first feature map through a third convolution layer and an up-sampling layer to obtain the third i-th level third feature map, where the convolution kernel size of the third convolution layer is N×N and the stride is 1.
- the second transformation sub-module is configured to: convolve the level-1 first feature map through a second convolution layer to obtain the first level-1 third feature map, where the convolution kernel size of the second convolution layer is N×N, the stride is 1, and N is an integer greater than 1.
- the second scale enlargement sub-module is configured to: convolve and up-sample the level-2 first feature map through the third convolution layer and the up-sampling layer to obtain the second level-1 third feature map, where the convolution kernel size of the third convolution layer is N×N and the stride is 1.
- the second scale reduction sub-module is configured to: convolve the M-1th level first feature map through a first convolution layer to obtain the first M-th level third feature map, where the convolution kernel size of the first convolution layer is N×N, the stride is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map.
- the third transformation sub-module is configured to: convolve the M-th level first feature map through a second convolution layer to obtain the second M-th level third feature map, where the convolution kernel size of the second convolution layer is N×N and the stride is 1.
- the second convolution layer and the third convolution layer include deformable convolution layers or dilated convolution layers.
- the device is implemented by an image processing network; the image processing network includes P levels of fusion network blocks connected in series, configured to perform scale adjustment and fusion on the M-level first feature map P times, where each level of fusion network block includes multiple first convolution layers, multiple second convolution layers, and multiple third convolution layers, and P is a positive integer;
- the scale adjustment and fusion module includes: a first fusion sub-module configured to input the M-level first feature map into the first-level fusion network block and output the first-fused M-level fourth feature map; a second fusion sub-module configured to input the j-1th fused M-level fourth feature map into the j-th level fusion network block and output the j-th fused M-level fourth feature map, where j is an integer and 1 < j < P; and a third fusion sub-module configured to input the P-1th fused M-level fourth feature map into the P-th level fusion network block and output the M-level second feature map.
- each level of fusion network block further includes a normalization layer, and the second fusion sub-module is configured to: through the first convolution layer, the second convolution layer, and the third convolution layer of the j-th level fusion network block, respectively perform scale adjustment and fusion on the feature map groups corresponding to the j-1th fused M-level fourth feature map to obtain the j-th fused M-level intermediate feature map; and perform joint batch normalization processing on the j-th fused M-level intermediate feature map through the normalization layer to obtain the j-th fused M-level fourth feature map.
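Combining the two sketches above, one illustrative fusion network block, with boundary levels handled by simply dropping the missing neighbor (the level-1 and level-M cases described earlier); class and function names are assumptions for illustration:

```python
import torch.nn as nn
import torch.nn.functional as F

class FusionNetworkBlock(nn.Module):
    """One fusion network block: scale adjustment and fusion of every
    level's feature map group, then joint batch normalization over all
    M outputs. Reuses the PyramidFusionLevel and JointBatchNorm2d
    sketches given earlier."""

    def __init__(self, channels, num_levels):
        super().__init__()
        self.levels = nn.ModuleList(
            PyramidFusionLevel(channels) for _ in range(num_levels))
        self.joint_bn = JointBatchNorm2d(channels)

    def forward(self, feats):  # list of M tensors, coarsest scale last
        outs = []
        for i, level in enumerate(self.levels):
            a = level.down(feats[i - 1]) if i > 0 else 0
            b = level.keep(feats[i])
            c = (F.interpolate(level.up(feats[i + 1]),
                               scale_factor=level.scale, mode="nearest")
                 if i < len(feats) - 1 else 0)
            outs.append(a + b + c)
        return self.joint_bn(outs)  # intermediate maps -> fourth feature maps

def build_fusion_pipeline(channels, num_levels, P):
    # P blocks in series: the j-th block consumes the (j-1)-th block's output.
    return nn.Sequential(*[FusionNetworkBlock(channels, num_levels)
                           for _ in range(P)])
```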
- the device is implemented by an image processing network
- the image processing network further includes a regression network and a classification network
- the target detection module includes: a regression sub-module configured to input the M-level second feature map into the regression network to determine the image frame corresponding to the target in the image to be processed; and a classification sub-module configured to input the M-level second feature map into the classification network to determine the category of the target in the image to be processed, where the target detection result includes the image frame corresponding to the target and the category of the target.
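A hedged sketch of such a head; the shared single-conv structure, the anchor count, and per-level application are illustrative assumptions (the disclosure does not specify depths, anchors, or losses):

```python
import torch.nn as nn

class DetectionHead(nn.Module):
    """Regression and classification networks applied to each of the
    M second feature maps: 4 box coordinates and num_classes scores
    per anchor per location."""

    def __init__(self, channels, num_classes, num_anchors=9):
        super().__init__()
        self.regression = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)
        self.classification = nn.Conv2d(channels, num_anchors * num_classes,
                                        3, padding=1)

    def forward(self, second_feature_maps):
        boxes = [self.regression(f) for f in second_feature_maps]     # image frames
        scores = [self.classification(f) for f in second_feature_maps]  # categories
        return boxes, scores
```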
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be configured to execute the methods described in the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a volatile computer-readable storage medium or a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also proposes an electronic device, including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to call the instructions stored in the memory to execute the above method.
- the embodiments of the present disclosure also provide a computer program product that includes computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the image processing method provided by any of the above embodiments.
- the embodiments of the present disclosure also provide another computer program product for storing computer-readable instructions which, when executed, cause the computer to perform the operations of the image processing method provided by any of the foregoing embodiments.
- the electronic device can be provided as a terminal, server or other form of device.
- FIG. 5 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method to operate on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP).
- if the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel.
- the touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of the components.
- for example, the components are the display and the keypad of the electronic device 800.
- the sensor component 814 can also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to implement the above methods.
- in an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- FIG. 6 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server. Referring to Fig. 6:
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
- a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
- the present disclosure may be a system, method and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- examples of computer-readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the above.
- the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in the computer-readable storage medium in each computing/processing device.
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- these computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, thereby producing a machine such that, when the instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; the instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, which contains one or more executable instructions for realizing the specified logical function. The functions may also occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
- the computer program product can be implemented by hardware, software or a combination thereof.
- the computer program product is embodied as a computer storage medium.
- the computer program product is embodied as a software product, such as a software development kit (SDK) and so on.
- M-level feature extraction can be performed on the image to be processed to obtain the M-level first feature map; each first feature map and its adjacent feature maps are fused to obtain the M-level second feature map;
- target detection is performed on the M-level second feature map to obtain the target detection result, so that the relevant information of features between adjacent layers of the M-level first feature map can be fused, effectively improving the effect of target detection.
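Tying the illustrative sketches above together, a hedged end-to-end usage example; M = 4, P = 2, the channel count, and all tensor sizes are assumed values, and the backbone that would produce the first feature maps is stubbed with random tensors:

```python
import torch

# A backbone (not shown) would yield M = 4 first feature maps whose
# scales halve level by level; P = 2 fusion blocks refine them; the
# head detects targets on the resulting second feature maps.
M, C, P, num_classes = 4, 256, 2, 80
feats = [torch.randn(2, C, 64 // 2**l, 64 // 2**l) for l in range(M)]

pipeline = build_fusion_pipeline(C, M, P)   # from the sketch above
head = DetectionHead(C, num_classes)        # from the sketch above
boxes, scores = head(pipeline(feats))
print([b.shape for b in boxes])             # one box map per pyramid level
```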
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Claims (25)
- An image processing method, comprising: performing M-level feature extraction on an image to be processed to obtain an M-level first feature map of the image to be processed, wherein the scales of the first feature maps at each level of the M-level first feature map are different and M is an integer greater than 1; performing scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain an M-level second feature map, wherein each of the feature map groups includes the first feature map and the first feature maps adjacent to the first feature map; and performing target detection on the M-level second feature map to obtain a target detection result of the image to be processed.
- The method according to claim 1, wherein the feature map group corresponding to the i-th level first feature map includes the i-1th level first feature map, the i-th level first feature map, and the i+1th level first feature map, i being an integer with 1 < i < M, and performing scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain the M-level second feature map includes: scaling down the i-1th level first feature map to obtain a first i-th level third feature map; performing a scale-invariant transformation on the i-th level first feature map to obtain a second i-th level third feature map; scaling up the i+1th level first feature map to obtain a third i-th level third feature map; and fusing the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map to obtain the i-th level second feature map, wherein the scales of the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map are the same.
- The method according to claim 1 or 2, wherein the feature map group corresponding to the level-1 first feature map includes the level-1 first feature map and the level-2 first feature map, and performing scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain the M-level second feature map includes: performing a scale-invariant transformation on the level-1 first feature map to obtain a first level-1 third feature map; scaling up the level-2 first feature map to obtain a second level-1 third feature map; and fusing the first level-1 third feature map and the second level-1 third feature map to obtain the level-1 second feature map, wherein the scales of the first level-1 third feature map and the second level-1 third feature map are the same.
- The method according to any one of claims 1 to 3, wherein the feature map group corresponding to the M-th level first feature map includes the M-1th level first feature map and the M-th level first feature map, and performing scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain the M-level second feature map includes: scaling down the M-1th level first feature map to obtain a first M-th level third feature map; performing a scale-invariant transformation on the M-th level first feature map to obtain a second M-th level third feature map; and fusing the first M-th level third feature map and the second M-th level third feature map to obtain the M-th level second feature map, wherein the scales of the first M-th level third feature map and the second M-th level third feature map are the same.
- The method according to any one of claims 2 to 4, wherein scaling down the i-1th level first feature map to obtain the first i-th level third feature map includes: convolving the i-1th level first feature map through a first convolution layer to obtain the first i-th level third feature map, where the convolution kernel size of the first convolution layer is N×N, the stride is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map; performing the scale-invariant transformation on the i-th level first feature map to obtain the second i-th level third feature map includes: convolving the i-th level first feature map through a second convolution layer to obtain the second i-th level third feature map, where the convolution kernel size of the second convolution layer is N×N and the stride is 1; and scaling up the i+1th level first feature map to obtain the third i-th level third feature map includes: convolving and up-sampling the i+1th level first feature map through a third convolution layer and an up-sampling layer to obtain the third i-th level third feature map, where the convolution kernel size of the third convolution layer is N×N and the stride is 1.
- The method according to claim 3, wherein performing the scale-invariant transformation on the level-1 first feature map to obtain the first level-1 third feature map includes: convolving the level-1 first feature map through a second convolution layer to obtain the first level-1 third feature map, where the convolution kernel size of the second convolution layer is N×N, the stride is 1, and N is an integer greater than 1; and scaling up the level-2 first feature map to obtain the second level-1 third feature map includes: convolving and up-sampling the level-2 first feature map through a third convolution layer and an up-sampling layer to obtain the second level-1 third feature map, where the convolution kernel size of the third convolution layer is N×N and the stride is 1.
- The method according to claim 4, wherein scaling down the M-1th level first feature map to obtain the first M-th level third feature map includes: convolving the M-1th level first feature map through a first convolution layer to obtain the first M-th level third feature map, where the convolution kernel size of the first convolution layer is N×N, the stride is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map; and performing the scale-invariant transformation on the M-th level first feature map to obtain the second M-th level third feature map includes: convolving the M-th level first feature map through a second convolution layer to obtain the second M-th level third feature map, where the convolution kernel size of the second convolution layer is N×N and the stride is 1.
- The method according to any one of claims 5 to 7, wherein the second convolution layer and the third convolution layer include deformable convolution layers or dilated convolution layers.
- The method according to any one of claims 5 to 8, wherein the method is implemented by an image processing network, the image processing network includes P levels of fusion network blocks connected in series for performing scale adjustment and fusion on the M-level first feature map P times, each level of fusion network block includes multiple first convolution layers, multiple second convolution layers, and multiple third convolution layers, and P is a positive integer; and performing scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain the M-level second feature map includes: inputting the M-level first feature map into the first-level fusion network block and outputting the first-fused M-level fourth feature map; inputting the j-1th fused M-level fourth feature map into the j-th level fusion network block and outputting the j-th fused M-level fourth feature map, where j is an integer and 1 < j < P; and inputting the P-1th fused M-level fourth feature map into the P-th level fusion network block and outputting the M-level second feature map.
- The method according to claim 9, wherein each level of fusion network block further includes a normalization layer, and inputting the j-1th fused M-level fourth feature map into the j-th level fusion network block and outputting the j-th fused M-level fourth feature map includes: through the first convolution layer, the second convolution layer, and the third convolution layer of the j-th level fusion network block, respectively performing scale adjustment and fusion on the feature map groups corresponding to the j-1th fused M-level fourth feature map to obtain the j-th fused M-level intermediate feature map; and performing joint batch normalization processing on the j-th fused M-level intermediate feature map through the normalization layer to obtain the j-th fused M-level fourth feature map.
- The method according to any one of claims 1 to 10, wherein the method is implemented by an image processing network, the image processing network further includes a regression network and a classification network, and performing target detection on the M-level second feature map to obtain the target detection result of the image to be processed includes: inputting the M-level second feature map into the regression network to determine the image frame corresponding to the target in the image to be processed; and inputting the M-level second feature map into the classification network to determine the category of the target in the image to be processed, the target detection result including the image frame corresponding to the target and the category of the target.
- An image processing device, comprising: a feature extraction module configured to perform M-level feature extraction on an image to be processed to obtain an M-level first feature map of the image to be processed, wherein the scales of the first feature maps at each level of the M-level first feature map are different and M is an integer greater than 1; a scale adjustment and fusion module configured to perform scale adjustment and fusion respectively on the feature map groups corresponding to the first feature maps at each level to obtain an M-level second feature map, wherein each of the feature map groups includes the first feature map and the first feature maps adjacent to the first feature map; and a target detection module configured to perform target detection on the M-level second feature map to obtain a target detection result of the image to be processed.
- The device according to claim 12, wherein the feature map group corresponding to the i-th level first feature map includes the i-1th level first feature map, the i-th level first feature map, and the i+1th level first feature map, i being an integer with 1 < i < M, and the scale adjustment and fusion module includes: a first scale reduction sub-module configured to scale down the i-1th level first feature map to obtain a first i-th level third feature map; a first transformation sub-module configured to perform a scale-invariant transformation on the i-th level first feature map to obtain a second i-th level third feature map; a first scale enlargement sub-module configured to scale up the i+1th level first feature map to obtain a third i-th level third feature map; and a first fusion sub-module configured to fuse the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map to obtain the i-th level second feature map, wherein the scales of the first i-th level third feature map, the second i-th level third feature map, and the third i-th level third feature map are the same.
- The device according to claim 12 or 13, wherein the feature map group corresponding to the level-1 first feature map includes the level-1 first feature map and the level-2 first feature map, and the scale adjustment and fusion module includes: a second transformation sub-module configured to perform a scale-invariant transformation on the level-1 first feature map to obtain a first level-1 third feature map; a second scale enlargement sub-module configured to scale up the level-2 first feature map to obtain a second level-1 third feature map; and a second fusion sub-module configured to fuse the first level-1 third feature map and the second level-1 third feature map to obtain the level-1 second feature map, wherein the scales of the first level-1 third feature map and the second level-1 third feature map are the same.
- The device according to any one of claims 12 to 14, wherein the feature map group corresponding to the M-th level first feature map includes the M-1th level first feature map and the M-th level first feature map, and the scale adjustment and fusion module includes: a second scale reduction sub-module configured to scale down the M-1th level first feature map to obtain a first M-th level third feature map; a third transformation sub-module configured to perform a scale-invariant transformation on the M-th level first feature map to obtain a second M-th level third feature map; and a third fusion sub-module configured to fuse the first M-th level third feature map and the second M-th level third feature map to obtain the M-th level second feature map, wherein the scales of the first M-th level third feature map and the second M-th level third feature map are the same.
- The device according to any one of claims 13 to 15, wherein the first scale reduction sub-module is configured to: convolve the i-1th level first feature map through a first convolution layer to obtain the first i-th level third feature map, where the convolution kernel size of the first convolution layer is N×N, the stride is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map; the first transformation sub-module is configured to: convolve the i-th level first feature map through a second convolution layer to obtain the second i-th level third feature map, where the convolution kernel size of the second convolution layer is N×N and the stride is 1; and the first scale enlargement sub-module is configured to: convolve and up-sample the i+1th level first feature map through a third convolution layer and an up-sampling layer to obtain the third i-th level third feature map, where the convolution kernel size of the third convolution layer is N×N and the stride is 1.
- The device according to claim 15, wherein the second transformation sub-module is configured to: convolve the level-1 first feature map through a second convolution layer to obtain the first level-1 third feature map, where the convolution kernel size of the second convolution layer is N×N, the stride is 1, and N is an integer greater than 1; and the second scale enlargement sub-module is configured to: convolve and up-sample the level-2 first feature map through a third convolution layer and an up-sampling layer to obtain the second level-1 third feature map, where the convolution kernel size of the third convolution layer is N×N and the stride is 1.
- The device according to claim 16, wherein the second scale reduction sub-module is configured to: convolve the M-1th level first feature map through a first convolution layer to obtain the first M-th level third feature map, where the convolution kernel size of the first convolution layer is N×N, the stride is n, N and n are integers greater than 1, and the scale of the i-1th level first feature map is n times the scale of the i-th level first feature map; and the third transformation sub-module is configured to: convolve the M-th level first feature map through a second convolution layer to obtain the second M-th level third feature map, where the convolution kernel size of the second convolution layer is N×N and the stride is 1.
- The device according to any one of claims 16 to 18, wherein the second convolution layer and the third convolution layer include deformable convolution layers or dilated convolution layers.
- The device according to any one of claims 16 to 19, wherein the device is implemented by an image processing network, the image processing network includes P levels of fusion network blocks connected in series, configured to perform scale adjustment and fusion on the M-level first feature map P times, each level of fusion network block includes multiple first convolution layers, multiple second convolution layers, and multiple third convolution layers, and P is a positive integer; and the scale adjustment and fusion module includes: a first fusion sub-module configured to input the M-level first feature map into the first-level fusion network block and output the first-fused M-level fourth feature map; a second fusion sub-module configured to input the j-1th fused M-level fourth feature map into the j-th level fusion network block and output the j-th fused M-level fourth feature map, where j is an integer and 1 < j < P; and a third fusion sub-module configured to input the P-1th fused M-level fourth feature map into the P-th level fusion network block and output the M-level second feature map.
- The device according to claim 20, wherein each level of fusion network block further includes a normalization layer, and the second fusion sub-module is configured to: through the first convolution layer, the second convolution layer, and the third convolution layer of the j-th level fusion network block, respectively perform scale adjustment and fusion on the feature map groups corresponding to the j-1th fused M-level fourth feature map to obtain the j-th fused M-level intermediate feature map; and perform joint batch normalization processing on the j-th fused M-level intermediate feature map through the normalization layer to obtain the j-th fused M-level fourth feature map.
- The device according to any one of claims 13 to 21, wherein the device is implemented by an image processing network, the image processing network further includes a regression network and a classification network, and the target detection module includes: a regression sub-module configured to input the M-level second feature map into the regression network to determine the image frame corresponding to the target in the image to be processed; and a classification sub-module configured to input the M-level second feature map into the classification network to determine the category of the target in the image to be processed, the target detection result including the image frame corresponding to the target and the category of the target.
- An electronic device, comprising: a processor; and a memory configured to store processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the method according to any one of claims 1 to 11.
- A computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the image processing method according to any one of claims 1 to 11.
- A computer program product comprising one or more instructions, the one or more instructions being adapted to be loaded by a processor to execute the image processing method according to any one of claims 1 to 11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021566025A JP2022532322A (ja) | 2020-04-17 | 2021-03-19 | 画像処理方法及び装置、電子機器並びに記憶媒体 |
KR1020227000768A KR20220011207A (ko) | 2020-04-17 | 2021-03-19 | 이미지 처리 방법 및 장치, 전자 기기 및 저장 매체 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010306929.2A CN111507408B (zh) | 2020-04-17 | 2020-04-17 | 图像处理方法及装置、电子设备和存储介质 |
CN202010306929.2 | 2020-04-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021208667A1 true WO2021208667A1 (zh) | 2021-10-21 |
Family
ID=71874374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/081782 WO2021208667A1 (zh) | 2020-04-17 | 2021-03-19 | 图像处理方法及装置、电子设备和存储介质 |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP2022532322A (zh) |
KR (1) | KR20220011207A (zh) |
CN (1) | CN111507408B (zh) |
TW (1) | TWI782480B (zh) |
WO (1) | WO2021208667A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115018059A (zh) * | 2022-08-09 | 2022-09-06 | 北京灵汐科技有限公司 | 数据处理方法及装置、神经网络模型、设备、介质 |
CN115131641A (zh) * | 2022-06-30 | 2022-09-30 | 北京百度网讯科技有限公司 | 图像识别方法、装置、电子设备和存储介质 |
CN115223018A (zh) * | 2022-06-08 | 2022-10-21 | 东北石油大学 | 伪装对象协同检测方法及装置、电子设备和存储介质 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111507408B (zh) * | 2020-04-17 | 2022-11-04 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN111967401A (zh) * | 2020-08-19 | 2020-11-20 | 上海眼控科技股份有限公司 | 目标检测方法、设备及存储介质 |
CN112200201A (zh) * | 2020-10-13 | 2021-01-08 | 上海商汤智能科技有限公司 | 一种目标检测方法及装置、电子设备和存储介质 |
CN112232361B (zh) * | 2020-10-13 | 2021-09-21 | 国网电子商务有限公司 | 图像处理的方法及装置、电子设备及计算机可读存储介质 |
CN113191390B (zh) * | 2021-04-01 | 2022-06-14 | 华中科技大学 | 一种图像分类模型的构建方法、图像分类方法及存储介质 |
CN114463605B (zh) * | 2022-04-13 | 2022-08-12 | 中山大学 | 基于深度学习的持续学习图像分类方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096670A (zh) * | 2016-06-17 | 2016-11-09 | 北京市商汤科技开发有限公司 | 级联卷积神经网络训练和图像检测方法、装置及系统 |
US20180060719A1 (en) * | 2016-08-29 | 2018-03-01 | International Business Machines Corporation | Scale-space label fusion using two-stage deep neural net |
CN110378976A (zh) * | 2019-07-18 | 2019-10-25 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN110852349A (zh) * | 2019-10-21 | 2020-02-28 | 上海联影智能医疗科技有限公司 | 一种图像处理方法、检测方法、相关设备及存储介质 |
CN111507408A (zh) * | 2020-04-17 | 2020-08-07 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9965719B2 (en) * | 2015-11-04 | 2018-05-08 | Nec Corporation | Subcategory-aware convolutional neural networks for object detection |
US10929977B2 (en) * | 2016-08-25 | 2021-02-23 | Intel Corporation | Coupled multi-task fully convolutional networks using multi-scale contextual information and hierarchical hyper-features for semantic image segmentation |
JP6546618B2 (ja) * | 2017-05-31 | 2019-07-17 | 株式会社Preferred Networks | 学習装置、学習方法、学習モデル、検出装置及び把持システム |
KR102235745B1 (ko) * | 2018-08-10 | 2021-04-02 | 네이버 주식회사 | 컨볼루션 순환 신경망을 훈련시키는 방법 및 훈련된 컨볼루션 순환 신경망을 사용하는 입력된 비디오의 의미적 세그먼트화 방법 |
TWI691930B (zh) * | 2018-09-19 | 2020-04-21 | 財團法人工業技術研究院 | 基於神經網路的分類方法及其分類裝置 |
CN109816671B (zh) * | 2019-01-31 | 2021-09-24 | 深兰科技(上海)有限公司 | 一种目标检测方法、装置及存储介质 |
CN110647834B (zh) * | 2019-09-18 | 2021-06-25 | 北京市商汤科技开发有限公司 | 人脸和人手关联检测方法及装置、电子设备和存储介质 |
-
2020
- 2020-04-17 CN CN202010306929.2A patent/CN111507408B/zh active Active
-
2021
- 2021-03-19 KR KR1020227000768A patent/KR20220011207A/ko active Search and Examination
- 2021-03-19 WO PCT/CN2021/081782 patent/WO2021208667A1/zh active Application Filing
- 2021-03-19 JP JP2021566025A patent/JP2022532322A/ja active Pending
- 2021-04-12 TW TW110113119A patent/TWI782480B/zh active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096670A (zh) * | 2016-06-17 | 2016-11-09 | 北京市商汤科技开发有限公司 | 级联卷积神经网络训练和图像检测方法、装置及系统 |
US20180060719A1 (en) * | 2016-08-29 | 2018-03-01 | International Business Machines Corporation | Scale-space label fusion using two-stage deep neural net |
CN110378976A (zh) * | 2019-07-18 | 2019-10-25 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN110852349A (zh) * | 2019-10-21 | 2020-02-28 | 上海联影智能医疗科技有限公司 | 一种图像处理方法、检测方法、相关设备及存储介质 |
CN111507408A (zh) * | 2020-04-17 | 2020-08-07 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115223018A (zh) * | 2022-06-08 | 2022-10-21 | 东北石油大学 | 伪装对象协同检测方法及装置、电子设备和存储介质 |
CN115223018B (zh) * | 2022-06-08 | 2023-07-04 | 东北石油大学 | 伪装对象协同检测方法及装置、电子设备和存储介质 |
CN115131641A (zh) * | 2022-06-30 | 2022-09-30 | 北京百度网讯科技有限公司 | 图像识别方法、装置、电子设备和存储介质 |
CN115018059A (zh) * | 2022-08-09 | 2022-09-06 | 北京灵汐科技有限公司 | 数据处理方法及装置、神经网络模型、设备、介质 |
Also Published As
Publication number | Publication date |
---|---|
TW202141423A (zh) | 2021-11-01 |
CN111507408A (zh) | 2020-08-07 |
KR20220011207A (ko) | 2022-01-27 |
TWI782480B (zh) | 2022-11-01 |
JP2022532322A (ja) | 2022-07-14 |
CN111507408B (zh) | 2022-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021208667A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2021155632A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
TWI740309B (zh) | 圖像處理方法及裝置、電子設備和電腦可讀儲存介質 | |
TWI749423B (zh) | 圖像處理方法及裝置、電子設備和電腦可讀儲存介質 | |
TWI724736B (zh) | 圖像處理方法及裝置、電子設備、儲存媒體和電腦程式 | |
US20210012143A1 (en) | Key Point Detection Method and Apparatus, and Storage Medium | |
WO2021164469A1 (zh) | 目标对象的检测方法、装置、设备和存储介质 | |
WO2021128578A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2020135529A1 (zh) | 位姿估计方法及装置、电子设备和存储介质 | |
CN108629354B (zh) | 目标检测方法及装置 | |
US11417078B2 (en) | Image processing method and apparatus, and storage medium | |
WO2020155711A1 (zh) | 图像生成方法及装置、电子设备和存储介质 | |
WO2021139120A1 (zh) | 网络训练方法及装置、图像生成方法及装置 | |
WO2021169132A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2021208666A1 (zh) | 字符识别方法及装置、电子设备和存储介质 | |
TWI778313B (zh) | 圖像處理方法、電子設備和儲存介質 | |
KR20200106027A (ko) | 네트워크 모듈 및 분배 방법 및 장치, 전자 기기 및 저장 매체 | |
CN111242303A (zh) | 网络训练方法及装置、图像处理方法及装置 | |
CN111259967A (zh) | 图像分类及神经网络训练方法、装置、设备及存储介质 | |
US20220270352A1 (en) | Methods, apparatuses, devices, storage media and program products for determining performance parameters | |
WO2022141969A1 (zh) | 图像分割方法及装置、电子设备、存储介质和程序 | |
CN113283343A (zh) | 人群定位方法及装置、电子设备和存储介质 | |
CN111311588B (zh) | 重定位方法及装置、电子设备和存储介质 | |
CN114359808A (zh) | 目标检测方法及装置、电子设备和存储介质 | |
CN113435390A (zh) | 人群定位方法及装置、电子设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021566025 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21788380 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20227000768 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.03.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21788380 Country of ref document: EP Kind code of ref document: A1 |