CN113129306A - Occlusion object segmentation solving method based on deep learning

Occlusion object segmentation solving method based on deep learning

Info

Publication number
CN113129306A
CN113129306A (application CN202110504652.9A)
Authority
CN
China
Prior art keywords
image
overall appearance
parameters
area
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110504652.9A
Other languages
Chinese (zh)
Other versions
CN113129306B (en)
Inventor
邹倩颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu College of University of Electronic Science and Technology of China
Original Assignee
Chengdu College of University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu College of University of Electronic Science and Technology of China
Priority to CN202110504652.9A
Publication of CN113129306A
Application granted
Publication of CN113129306B
Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image analysis and provides a deep-learning-based method for segmenting and reconstructing an occluded object, comprising the following steps. Step 1: when an object is occluded, acquire an image of its unoccluded part, initialize the image to extract area parameters and frame parameters, and go to Step 2. Step 2: feed the area parameters and frame parameters into an image deep learning model, which outputs corresponding overall-appearance frame parameters and overall-appearance area parameters; screen these parameters as required, and go to Step 3. Step 3: select the standard overall-appearance frame parameters and standard overall-appearance area parameters from the candidates and send them to an image construction model to obtain the complete appearance of the object.

Description

Occlusion object segmentation solving method based on deep learning
Technical Field
The invention relates to the technical field of image processing, and in particular to a deep-learning-based method for segmenting and reconstructing occluded objects.
Background
Given a scene, humans understand it readily: they not only recognize the objects present but also perceive the relationships between them, including occlusion relationships. Occlusion occurs frequently in two-dimensional scenes, and the occlusion relationship reflects the depth ordering of objects, i.e., near objects occlude distant ones. Humans can easily judge occlusion relationships and recognize occluded objects at the same time because the human visual system accumulates a large amount of prior knowledge through long-term observation of the surrounding world.
Scene understanding is a fundamental task in computer vision whose goal is to make computers understand scenes as humans do. Current research on scene understanding falls mainly into two categories: approaches based on neural network models and approaches based on probabilistic graphical models. With the wide application of deep learning in recent years, and especially after the great success of convolutional neural networks (CNNs) in the image domain, the various subtasks of scene understanding, such as scene recognition, object detection and scene segmentation, have made breakthrough progress. However, neural-network-based scene understanding has paid little attention to occluded objects: it considers only the objects themselves (recognition merely classifies the objects in a picture, and segmentation merely classifies pixels) and ignores the relationships between objects, so occlusion relationships cannot be judged. Moreover, a CNN generally requires a large amount of supervised data, and must see samples occluded at various angles in order to recognize an occluded object. In addition, the cognitive process of a neural network consists of the forward and backward passes of the CNN; there is no feedback mechanism comparable to the human brain's. The essential difference between the two is that a feedforward network is a bottom-up process, whereas reasoning and feedback based on knowledge and experience is a top-down process.
Probabilistic graphical models have certain advantages in logical reasoning and in modeling contextual relationships, and some studies perform depth-order reasoning with models such as Bayesian inference and Markov models. However, a probabilistic graphical model is purely mathematical, so its accuracy is lower than that of a neural network model; different models must be built for different scenes, its generality is poor, and it cannot be used to model more complex scenes.
Disclosure of Invention
The invention aims to provide a deep-learning-based method for segmenting and reconstructing an occluded object: the area of the unoccluded part of the object is acquired, standard overall-appearance frame parameters and standard overall-appearance area parameters are obtained through an image deep learning model, and these standard parameters are sent to an image construction model to obtain the complete appearance of the object.
the technical scheme adopted by the invention is as follows: a method for segmenting and solving an occluded object based on deep learning comprises the following steps:
step 1: when the object is shielded, acquiring an image of the object which is not shielded, performing initialization processing on the image to extract an area parameter and a frame parameter, and executing the step 2;
step 2: the area parameters and the frame parameters are used as input and sent to an image deep learning model, the model outputs corresponding frame overall appearance parameters and area overall appearance parameters, the parameters are continuously screened according to the requirements, and the step 3 is executed;
and step 3: and screening out standard frame overall appearance parameters and standard area overall appearance parameters from the plurality of frame overall appearance parameters and area overall appearance parameters, and sending the standard frame overall appearance parameters and the standard area overall appearance parameters to an image construction model to obtain the object overall appearance.
Preferably, in Step 1 the initialization divides the acquired image of the unoccluded part into a number of cells and counts their number and areas; the cells are rectangles, and the cell areas gradually decrease near the object edge and the occlusion boundary until the image is fully covered.
Preferably, the acquired image is a planar image of the object, and for a single object planar images are acquired in at most three directions.
Preferably, in Step 2 the image deep learning model is built through the following steps:
Step 41: build a neural network comprising fully-connected layers, convolutional layers, activation functions and pooling layers; establish forward propagation of the image parameters through the fully-connected layers, and go to Step 42;
Step 42: extract image features through the convolutional layers; by training the convolution kernels and bias terms, output a feature matrix whose number of channels equals the number of convolution kernels, and go to Step 43;
Step 43: since the computation up to this point is linear, introduce a nonlinear factor through the activation function; the matrix size after convolution is given by W_out = (W - F + 2P) / S + 1;
Step 44: change only the width W and height H of the feature matrix, not the number of depth channels, and make the probabilities of all nodes after processing sum to 1 to obtain the learning model formula.
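The step-43 expression can be read as the standard convolution output-size formula. A small numeric check, assuming the usual symbol meanings, W for the input size, F for the kernel size, P for the padding and S for the stride (the patent does not define them explicitly):

```python
# Convolution output-size formula from step 43: W_out = (W - F + 2P)/S + 1.
def conv_output_size(w, f, p=0, s=1):
    assert (w - f + 2 * p) % s == 0, "sizes must divide evenly"
    return (w - f + 2 * p) // s + 1

# A 5x5 input with a 3x3 kernel, padding 1, stride 1 keeps its size:
print(conv_output_size(5, 3, p=1, s=1))  # 5
# With stride 2 the spatial size shrinks:
print(conv_output_size(5, 3, p=1, s=2))  # 3
```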
Preferably, in Step 42 each convolution kernel has the same number of channels as the input feature layer; each channel of the kernel is convolved with the input layer of the corresponding channel, and the results are summed into one feature matrix, which serves as the output of the deep learning stage and is propagated forward as one channel of the next layer's input features.
Preferably, the standard overall-appearance frame and the standard overall-appearance area are the averages of the outputs of the image deep learning model over multiple training runs.
Preferably, the error of the image deep learning model is calculated with the Cross Entropy Loss algorithm.
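As an illustration of the cross-entropy error calculation named above, a minimal sketch assuming the common single-sample, one-hot form (the patent does not reproduce the formula):

```python
import math

def cross_entropy(probs, one_hot):
    """Cross-entropy between a predicted distribution and a one-hot label."""
    eps = 1e-12  # avoid log(0)
    return -sum(t * math.log(p + eps) for p, t in zip(probs, one_hot))

# Confident correct prediction -> small loss; confident wrong -> large loss.
good = cross_entropy([0.9, 0.05, 0.05], [1, 0, 0])
bad = cross_entropy([0.05, 0.9, 0.05], [1, 0, 0])
print(round(good, 4), round(bad, 4))
```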
Preferably, in Step 43 a Dropout operation (random neuron deactivation) is applied after the ReLU activation function in the first two fully-connected layers to reduce overfitting.
Compared with the prior art, the invention has the following beneficial effect:
1. The complete appearance of the object can be obtained quickly from the area of the visible part of the object image alone; the method is fast and accurate.
Drawings
Fig. 1 is a schematic diagram of a method for segmentation solution of an occluded object based on deep learning.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to Fig. 1. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort also fall within the scope of the invention.
Example 1:
A deep-learning-based occluded object segmentation method comprises the following steps:
Step 1: when an object is occluded, acquire an image of its unoccluded part, initialize the image to extract area parameters and frame parameters, and go to Step 2;
Step 2: feed the area parameters and frame parameters into an image deep learning model, which outputs corresponding overall-appearance frame parameters and overall-appearance area parameters; screen these parameters as required, and go to Step 3;
Step 3: select the standard overall-appearance frame parameters and standard overall-appearance area parameters from the candidate parameters, and send them to an image construction model to obtain the complete appearance of the object.
It should be noted that, in Step 1, the initialization divides the acquired image of the unoccluded part into a number of cells and counts their number and areas; the cells are rectangles, and the cell areas gradually decrease near the object edge and the occlusion boundary until the image is fully covered.
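The cell-covering initialization described above can be sketched as a simple recursive subdivision of the visible region. Everything below, the function name, the splitting rule and the minimum cell size, is an illustrative assumption; the patent does not specify the covering algorithm:

```python
# Hypothetical sketch of Step 1: cover the unoccluded region (a binary
# mask) with rectangular cells, splitting any cell that straddles the
# object edge or the occlusion boundary into smaller ones, so cell areas
# shrink toward the boundary until the region is fully covered.
def cover_with_cells(mask, x0, y0, w, h, min_size=1):
    """Return a list of (x, y, w, h) cells covering the True part of mask."""
    vals = {mask[y][x] for y in range(y0, y0 + h) for x in range(x0, x0 + w)}
    if vals == {False}:          # entirely outside the visible region
        return []
    if vals == {True} or (w <= min_size and h <= min_size):
        return [(x0, y0, w, h)]  # uniform cell, or cannot split further
    # mixed cell near a boundary: split along the longer side
    if w >= h and w > 1:
        lw = w // 2
        return (cover_with_cells(mask, x0, y0, lw, h, min_size) +
                cover_with_cells(mask, x0 + lw, y0, w - lw, h, min_size))
    lh = h // 2
    return (cover_with_cells(mask, x0, y0, w, lh, min_size) +
            cover_with_cells(mask, x0, y0 + lh, w, h - lh, min_size))

# Toy 4x4 mask: left half visible, right half occluded.
mask = [[x < 2 for x in range(4)] for _ in range(4)]
cells = cover_with_cells(mask, 0, 0, 4, 4)
areas = [w * h for (_, _, w, h) in cells]
print(len(cells), sum(areas))  # number of cells and total visible area
```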
It should be noted that the acquired image is a planar image of the object, and for a single object planar images are acquired in at most three directions.
It should be noted that, in Step 2, the image deep learning model is built through the following steps:
Step 41: build a neural network comprising fully-connected layers, convolutional layers, activation functions and pooling layers; establish forward propagation of the image parameters through the fully-connected layers, and go to Step 42;
Step 42: extract image features through the convolutional layers by training the convolution kernels and bias terms. Each convolution kernel has the same number of channels as the input feature layer (for example, a 5 x 5 RGB image has 3 input channels, so each kernel also has three channels, one per input channel). Each channel of the kernel is convolved with the input layer of the corresponding channel, and the results are summed into one feature matrix that serves as the output and is propagated forward as one channel of the next layer's input features; the number of output channels equals the number of convolution kernels. Go to Step 43;
Step 43: since the computation up to this point is linear, introduce a nonlinear factor through the activation function. With sigmoid, vanishing gradients occur easily when the network is deep; with ReLU, if a very large gradient flows through during backpropagation, the updated weights may shift below zero, the derivative then stays at 0, backpropagation can no longer update those weights, and the neuron becomes inactive. The matrix size after convolution is W_out = (W - F + 2P) / S + 1;
Step 44: change only the width W and height H of the feature matrix, not the number of depth channels, and make the probabilities of all nodes after processing sum to 1. Max-pooling downsampling sparsifies the feature map and reduces the amount of computation. The average-pooling downsampling layer has no trainable parameters and likewise changes only W and H of the feature matrix, not the depth (number of channels); pool size and stride are generally equal. A SoftMax layer follows.
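The max-pooling and average-pooling behaviour described in step 44, changing only W and H with no trainable parameters, can be sketched as follows; pool size equal to stride equal to 2 is an assumed common choice, not stated in the patent:

```python
# Pooling downsampling: shrinks W and H, never the channel count, and
# has no trainable parameters. `op` selects max- or average-pooling.
def pool2d(fm, size=2, op=max):
    oh, ow = len(fm) // size, len(fm[0]) // size
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            window = [fm[i * size + a][j * size + b]
                      for a in range(size) for b in range(size)]
            row.append(op(window))
        out.append(row)
    return out

avg = lambda w: sum(w) / len(w)
fm = [[1, 2, 5, 6],
      [3, 4, 7, 8],
      [0, 0, 1, 1],
      [0, 2, 1, 3]]
print(pool2d(fm, op=max))  # [[4, 8], [2, 3]]
print(pool2d(fm, op=avg))  # [[2.5, 6.5], [0.5, 1.5]]
```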
It should be noted that, in Step 42, each convolution kernel has the same number of channels as the input feature layer; each channel of the kernel is convolved with the input layer of the corresponding channel, and the results are summed into one feature matrix, which serves as the output of the deep learning stage and is propagated forward as one channel of the next layer's input features.
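A minimal sketch of the per-channel convolution and summation just described, with illustrative toy data; valid padding and stride 1 are assumptions, and the function name is hypothetical:

```python
# Each kernel channel is convolved with the matching input channel and
# the per-channel results are summed into one output feature map.
def conv2d_multichannel(inputs, kernels, bias=0.0):
    """inputs: list of C HxW channels; kernels: list of C FxF filters."""
    c = len(inputs)
    h, w = len(inputs[0]), len(inputs[0][0])
    f = len(kernels[0])
    oh, ow = h - f + 1, w - f + 1
    out = [[bias] * ow for _ in range(oh)]
    for ch in range(c):                      # convolve channel-by-channel
        for i in range(oh):
            for j in range(ow):
                out[i][j] += sum(inputs[ch][i + a][j + b] * kernels[ch][a][b]
                                 for a in range(f) for b in range(f))
    return out                               # summed: one output channel

# Two 3x3 input channels, two 2x2 kernel channels -> one 2x2 feature map.
x = [[[1, 0, 0], [0, 1, 0], [0, 0, 1]],
     [[0, 1, 1], [1, 0, 1], [1, 1, 0]]]
k = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
print(conv2d_multichannel(x, k))  # -> [[4.0, 1.0], [1.0, 4.0]]
```

A convolutional layer with N kernels repeats this N times, giving N output channels, which matches the statement that the output channel count equals the number of kernels.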
It should be noted that the standard overall-appearance frame and the standard overall-appearance area are the averages of the outputs of the image deep learning model over multiple training runs.
It is worth noting that the error of the image deep learning model is calculated with the Cross Entropy Loss algorithm. The Cross Entropy Loss algorithm works as follows: maximizing the likelihood of the samples is equivalent to minimizing a certain function, and that function is taken as the cross-entropy function. (The goal of maximum likelihood estimation is to infer, from known sample outcomes, the parameter values most likely to have produced them.) In two-class logistic regression, the loss reflects the degree of divergence between two distributions, the ground-truth distribution and the output distribution; the optimization goal is to bring the two distributions close together.
For the multi-class problem (the output belongs to exactly one class), CrossEntropy_Loss takes the form of function 1.
For the two-class problem (applied per class, so the output may fall into multiple classes), CrossEntropy_Loss takes the form of function 2.
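The patent refers to "function 1" and "function 2" without reproducing them. Under the usual reading, function 1 is the softmax cross-entropy used for single-label multi-class outputs, and function 2 is the per-class sigmoid (binary) cross-entropy used when the output may fall into multiple classes; both sketches below rest on that assumption:

```python
import math

def softmax_ce(logits, target_index):
    """Single-label multi-class loss (the usual reading of 'function 1')."""
    m = max(logits)                          # stabilise the exponentials
    exps = [math.exp(z - m) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[target_index])

def sigmoid_bce(logits, targets):
    """Per-class binary loss for multi-label outputs ('function 2')."""
    loss = 0.0
    for z, t in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-z))
        loss += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss / len(logits)

print(softmax_ce([2.0, 0.5, 0.1], 0))   # small: correct class dominates
print(sigmoid_bce([2.0, -1.0], [1, 0])) # both labels roughly satisfied
```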
It is worth noting that in Step 43, a Dropout operation (random neuron deactivation) is applied after the ReLU activation function in the first two fully-connected layers to reduce overfitting. The LRN (Local Response Normalization) layer creates a competition mechanism among local neuron activities: relatively large responses become larger still, while neurons with smaller feedback are suppressed, which enhances the generalization ability of the model.
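A hedged sketch of the Dropout operation mentioned above, using the common "inverted dropout" formulation; the patent does not state which variant is used:

```python
import random

# During training each activation is zeroed with probability p and the
# survivors are scaled by 1/(1-p), so no rescaling is needed at inference.
def dropout(activations, p=0.5, training=True, rng=random.Random(0)):
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [0.2, 1.5, 0.7, 0.9]
print(dropout(acts, p=0.5))          # roughly half zeroed, rest doubled
print(dropout(acts, training=False)) # identity at inference time
```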
In summary, by acquiring the area of the unoccluded part of the object, obtaining the standard overall-appearance frame parameters and standard overall-appearance area parameters through the image deep learning model, and sending them to the image construction model, the complete appearance of the object is obtained.

Claims (8)

1. A deep-learning-based occluded object segmentation method, characterized by comprising the following steps:
Step 1: when an object is occluded, acquire an image of its unoccluded part, initialize the image to extract area parameters and frame parameters, and go to Step 2;
Step 2: feed the area parameters and frame parameters into an image deep learning model, which outputs corresponding overall-appearance frame parameters and overall-appearance area parameters; screen these parameters as required, and go to Step 3;
Step 3: select the standard overall-appearance frame parameters and standard overall-appearance area parameters from the candidate parameters, and send them to an image construction model to obtain the complete appearance of the object.
2. The method according to claim 1, characterized in that in Step 1 the initialization divides the acquired image of the unoccluded part into a number of cells and counts their number and areas; the cells are rectangles, and the cell areas gradually decrease near the object edge and the occlusion boundary until the image is fully covered.
3. The method according to claim 2, characterized in that the acquired image is a planar image of the object, and for a single object planar images are acquired in at most three directions.
4. The method according to claim 3, characterized in that in Step 2 the image deep learning model is built through the following steps:
Step 41: build a neural network comprising fully-connected layers, convolutional layers, activation functions and pooling layers; establish forward propagation of the image parameters through the fully-connected layers, and go to Step 42;
Step 42: extract image features through the convolutional layers; by training the convolution kernels and bias terms, output a feature matrix whose number of channels equals the number of convolution kernels, and go to Step 43;
Step 43: since the computation up to this point is linear, introduce a nonlinear factor through the activation function; the matrix size after convolution is W_out = (W - F + 2P) / S + 1;
Step 44: change only the width W and height H of the feature matrix, not the number of depth channels, and make the probabilities of all nodes after processing sum to 1 to obtain the learning model formula.
5. The method according to claim 4, characterized in that in Step 42 each convolution kernel has the same number of channels as the input feature layer; each channel of the kernel is convolved with the input layer of the corresponding channel, and the summed result serves as the output of the deep learning stage and is propagated forward as one channel of the next layer's input features.
6. The method according to claim 5, characterized in that the standard overall-appearance frame and the standard overall-appearance area are the averages of the outputs of the image deep learning model over multiple training runs.
7. The method according to claim 6, characterized in that the error of the image deep learning model is calculated with the Cross Entropy Loss algorithm.
8. The method according to claim 7, characterized in that in Step 43 a Dropout operation (random neuron deactivation) is applied after the ReLU activation function in the first two fully-connected layers to reduce overfitting.
CN202110504652.9A (priority 2021-05-10, filed 2021-05-10) Occlusion object segmentation solving method based on deep learning, granted as CN113129306B, Expired - Fee Related

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110504652.9A CN113129306B (en) 2021-05-10 2021-05-10 Occlusion object segmentation solving method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110504652.9A CN113129306B (en) 2021-05-10 2021-05-10 Occlusion object segmentation solving method based on deep learning

Publications (2)

Publication Number Publication Date
CN113129306A (en) 2021-07-16
CN113129306B (en) 2022-12-02

Family

ID=76781274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110504652.9A Expired - Fee Related CN113129306B (en) 2021-05-10 2021-05-10 Occlusion object segmentation solving method based on deep learning

Country Status (1)

Country Link
CN (1) CN113129306B (en)

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060122480A1 (en) * 2004-11-22 2006-06-08 Jiebo Luo Segmenting occluded anatomical structures in medical images
CN101853395A (en) * 2010-05-27 2010-10-06 南昌航空大学 Method for shading three-dimensional target from single graph and image identification part
US20100284609A1 (en) * 2008-02-05 2010-11-11 CENTRE DE RECHERCHE INDUSTRIELLE DU QUéBEC Apparatus and method for measuring size distribution of granular matter
CN103839279A (en) * 2014-03-18 2014-06-04 湖州师范学院 Adhesion object segmentation method based on VIBE in object detection
CN106485215A (en) * 2016-09-29 2017-03-08 西交利物浦大学 Face occlusion detection method based on depth convolutional neural networks
CN107403200A (en) * 2017-08-10 2017-11-28 北京亚鸿世纪科技发展有限公司 Improve the multiple imperfect picture sorting technique of image segmentation algorithm combination deep learning
CN107563303A (en) * 2017-08-09 2018-01-09 中国科学院大学 A kind of robustness Ship Target Detection method based on deep learning
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A kind of layering dividing method for recovering image Ouluding boundary
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN108629782A (en) * 2018-04-28 2018-10-09 合肥工业大学 The road target depth estimation method propagated based on ground clue
CN108764186A (en) * 2018-06-01 2018-11-06 合肥工业大学 Personage based on rotation deep learning blocks profile testing method
CN109190458A (en) * 2018-07-20 2019-01-11 华南理工大学 A kind of person of low position's head inspecting method based on deep learning
US20190050667A1 (en) * 2017-03-10 2019-02-14 TuSimple System and method for occluding contour detection
CN109360206A (en) * 2018-09-08 2019-02-19 华中农业大学 Crop field spike of rice dividing method based on deep learning
CN109558810A (en) * 2018-11-12 2019-04-02 北京工业大学 Divided based on position and merges target person recognition methods
CN109858455A (en) * 2019-02-18 2019-06-07 南京航空航天大学 A kind of piecemeal detection scale adaptive tracking method for circular target
CN110414430A (en) * 2019-07-29 2019-11-05 郑州信大先进技术研究院 A kind of pedestrian recognition methods and device again based on the fusion of more ratios
CN110415181A (en) * 2019-06-12 2019-11-05 勤耕仁现代农业科技发展(淮安)有限责任公司 Flue-cured tobacco RGB image intelligent recognition and grade determination method under a kind of open environment
CN111310624A (en) * 2020-02-05 2020-06-19 腾讯科技(深圳)有限公司 Occlusion recognition method and device, computer equipment and storage medium
CN111325764A (en) * 2020-02-11 2020-06-23 广西师范大学 Fruit image contour recognition method
CN111428556A (en) * 2020-02-17 2020-07-17 浙江树人学院(浙江树人大学) Traffic sign recognition method based on capsule neural network
CN111489373A (en) * 2020-04-07 2020-08-04 北京工业大学 Occlusion object segmentation method based on deep learning
JP2020188432A (en) * 2019-05-17 2020-11-19 株式会社国際電気通信基礎技術研究所 Device, program and method for setting imaging direction, and device, program and method for invasion detection
CN112418043A (en) * 2020-11-16 2021-02-26 安徽农业大学 Corn weed occlusion determination method and device, robot, equipment and storage medium
CN112634313A (en) * 2021-01-08 2021-04-09 云从科技集团股份有限公司 Target occlusion assessment method, system, medium and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIAHONG WU 等: "AINNOSEG:PANORAMIC SEGMENTATION WITH HIGH PERFOMANCE", 《HTTPS://ARXIV.ORG/ABS/2007.10591》 *
TANG Xusheng et al.: "Fast object detection based on local edge features", Journal of Computer-Aided Design & Computer Graphics *
ZHANG Xuhua et al.: "Segmentation algorithm based on concave regions of vehicle contours", Electronic Design Engineering *
WANG Lichun et al.: "Road marking detection algorithm based on UAV aerial images", Computer Technology and Development *

Also Published As

Publication number Publication date
CN113129306B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN110020606B (en) Crowd density estimation method based on multi-scale convolutional neural network
WO2020244261A1 (en) Scene recognition system for high-resolution remote sensing image, and model generation method
US10936913B2 (en) Automatic filter pruning technique for convolutional neural networks
US20190228268A1 (en) Method and system for cell image segmentation using multi-stage convolutional neural networks
WO2020192736A1 (en) Object recognition method and device
WO2019120110A1 (en) Image reconstruction method and device
CN108154118A (en) A kind of target detection system and method based on adaptive combined filter with multistage detection
CN107918772B (en) Target tracking method based on compressed sensing theory and gcForest
WO2021218470A1 (en) Neural network optimization method and device
Yudistira et al. Correlation net: Spatiotemporal multimodal deep learning for action recognition
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
CN112861718A (en) Lightweight feature fusion crowd counting method and system
US10643092B2 (en) Segmenting irregular shapes in images using deep region growing with an image pyramid
CN112418032A (en) Human behavior recognition method and device, electronic equipment and storage medium
Li Hearing loss classification via AlexNet and extreme learning machine
CN111489373B (en) Occlusion object segmentation method based on deep learning
CN113129306B (en) Occlusion object segmentation solving method based on deep learning
US10776923B2 (en) Segmenting irregular shapes in images using deep region growing
WO2019243910A1 (en) Segmenting irregular shapes in images using deep region growing
Jeong et al. Congestion-aware bayesian loss for crowd counting
CN112926502B (en) Micro expression identification method and system based on coring double-group sparse learning
Jeziorek et al. Memory-efficient graph convolutional networks for object classification and detection with event cameras
Su et al. ARMA nets: Expanding receptive field for dense prediction
Irfan et al. COMPARISON OF SGD, RMSProp, AND ADAM OPTIMATION IN ANIMAL CLASSIFICATION USING CNNs
Peng et al. An effective segmentation algorithm of apple watercore disease region using fully convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221202