CN110689057B - Method for reducing neural network training sample size based on image segmentation - Google Patents


Info

Publication number
CN110689057B
CN110689057B (application CN201910855228.1A)
Authority
CN
China
Prior art keywords
area
region
image
data set
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910855228.1A
Other languages
Chinese (zh)
Other versions
CN110689057A (en)
Inventor
张智
光正慧
王欢
翁宗南
肖绍桐
高广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201910855228.1A priority Critical patent/CN110689057B/en
Publication of CN110689057A publication Critical patent/CN110689057A/en
Application granted granted Critical
Publication of CN110689057B publication Critical patent/CN110689057B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The invention provides a method for reducing the training sample size of a neural network based on image segmentation. A single picture is divided into different regions; the regions are numbered in the order in which they are generated and their areas are recorded; an area threshold is set and regions with area below the threshold are discarded; the pixels of the retained regions are converted from RGB to HSI color space, regions whose color differs strongly from adjacent regions are kept, and regions with small color difference are discarded; the shape features of each region are extracted, regions with prominent or smooth contours are kept, and the other regions are discarded; the whole picture is traversed, discarded regions are displayed as white and retained regions as in the original image, yielding a new image. When all pictures have been processed, the new data set is sent to network training; otherwise the loop repeats. The invention improves efficiency while maintaining or even improving accuracy.

Description

Method for reducing neural network training sample size based on image segmentation
Technical Field
The invention relates to a method for preprocessing a data set before it is used to train a neural network, and can be applied to robot target detection in the field of computer vision.
Background
For target detection on a robot, the conventional method concatenates the HOG descriptors of all blocks in a detection window into a final feature vector and then performs target recognition with an SVM classifier. However, this traditional algorithm is unsatisfactory in both prediction accuracy and speed; in a complex environment the features of the target object are less prominent, and detection with the traditional method performs poorly.
Since deep neural network algorithms first achieved strong results on the ImageNet data set, the field of object detection has gradually turned to deep learning. Depth models of various structures were then proposed, and the best accuracy on benchmark data sets was surpassed again and again. Compared with traditional methods, deep learning models offer far better prediction accuracy and speed on classification tasks, and the highest-performing detection algorithms to date are based on deep learning.
However, when a deep learning neural network is used, obtaining a model with good detection performance requires training on a large data set, and when a data set is produced, much time must be spent on collection to ensure the diversity and effectiveness of the samples. The more complex the scene, the greater the number of samples required, which costs a great deal of time both when collecting the data set and when training on the samples. For the same network structure, leaving other influencing factors aside, if the samples of the data set are processed with a traditional image segmentation method, dividing each image into regions and filtering out image regions with weak features, the influence of complex environmental factors on the features of the target object is reduced. In theory this lowers the demand for samples, reaches the accuracy otherwise achievable only by training on a large number of samples, and shortens the time spent on repetitive work.
Disclosure of Invention
The invention aims to provide a method for reducing the training sample size of a neural network based on image segmentation, which can improve the efficiency and keep the accuracy.
The purpose of the invention is realized by the following steps:
before the data set is fed into the neural network training, the following operations are carried out on the data set:
step one, segmenting a single picture into regions according to pixel points by means of the Mean Shift algorithm;
step two, labeling each segmented region in segmentation order, counting the region area, finding the region center pixel point, setting a threshold M, and directly discarding regions whose area is smaller than M;
step three, converting the color expression of the retained regions from the RGB color space to the HSI color space, computing the average color expression of each region, and comparing the color change of adjacent regions: where the difference is obvious the regions are kept, and where the difference is not obvious, if the area of the smaller region is less than 1/5 of the larger region, the smaller region is discarded;
step four, extracting the shape characteristics of the reserved area;
step five, looping over the region numbers, setting the RGB values of pixel points in discarded regions to 255, displaying the color values of pixel points in retained regions as in the original image, and obtaining a new image that replaces the unprocessed image as the data sample;
and step six, loading a next picture sample in the data set, repeating the step one to the step five until all pictures are processed, and sending the new data set into a neural network for training.
The present invention may further comprise:
1. the segmenting of a single picture according to pixel points specifically comprises: each pixel point is expressed by a five-dimensional information vector f = (x, y, r, g, b), where x represents the abscissa, y the ordinate, and r, g, b the color information of the pixel's three channels; the image is divided into different regions according to this five-dimensional vector.
2. the extracting of the shape features of the retained regions specifically comprises: stretching or compressing each region into a 100 × 100 square block, scanning out the contour of the region, recording the region width every second row of pixels and the region height every second column of pixels to obtain a 100-dimensional vector, computing the variance s of each vector, setting thresholds max and min, and keeping the regions whose variance is greater than max or smaller than min.
The invention provides a strategy for reducing the neural network training sample size based on image segmentation. The data set is preprocessed before neural network training, reducing the number of training samples required while maintaining accuracy. The method belongs to the field of computer vision and can be used for robot target detection. Training an ordinary CNN requires collecting a large data set; in general, for the same network model structure, the larger the training data set and the more diverse the scenes, the better the trained model, but every round of data collection costs a great deal of repetitive work and is very inefficient. The invention preprocesses the data set before training: the image is segmented into different regions with the Mean Shift algorithm, regions with stronger features are retained (including environment regions with strong features, which preserves sample diversity and prevents overfitting), and regions with weaker features are removed, which also ensures the effectiveness of the samples.
The invention has the following advantages:
1. the demand for data set samples is reduced, a large amount of repeated work is avoided, and the efficiency is improved;
2. the processed data set clearly strengthens the anti-interference capability of target detection in relatively complex scenes while still ensuring the diversity and effectiveness of the samples, avoiding over-fitting and under-fitting, and maintains or even improves accuracy while improving efficiency.
Drawings
FIG. 1: a schematic diagram of an image segmentation method;
FIG. 2: a schematic diagram of information of the divided regions;
FIG. 3: a flow chart of a method for reducing the amount of neural network training samples based on image segmentation.
Detailed Description
The invention discloses a method for reducing the training sample size of a neural network based on image segmentation, which comprises the following steps:
step one, dividing a single picture into regions according to pixel points by using the Mean Shift algorithm, where each pixel point is expressed by a five-dimensional information vector f = (x, y, r, g, b), x represents the abscissa, y the ordinate, and r, g, b the color information of the pixel's three channels, and the picture is divided into different regions according to this five-dimensional vector;
step two, labeling each divided region in segmentation order (the label is denoted N), counting the region area, finding the center pixel point of the region, setting a threshold M, and directly discarding regions whose area is smaller than M; each pixel is then represented by a six-dimensional information vector f = (x, y, r, g, b, N);
step three, converting the color expression of the retained regions from the RGB color space to the HSI color space (the correlation among the three HSI components is low, which better matches the human habit of describing color), computing the average color expression of each region, and comparing the color change of adjacent regions: where the difference is obvious the regions are kept, and where it is not obvious, if the area of the smaller region is less than 1/5 of the larger region, the smaller region is discarded;
step four, extracting the shape features of the retained regions: stretching or compressing each region into a 100 × 100 square block, scanning out the contour of the region, recording the region width every second row of pixels and the region height every second column of pixels to obtain a 100-dimensional vector, computing the variance s of each vector, setting thresholds max and min, and keeping the regions whose variance is greater than max or smaller than min;
step five, after processing the picture with the above steps, marking the discarded regions with a flag in the program, then looping over the region numbers, setting the RGB values of pixel points in discarded regions to 255, and displaying the color values of pixel points in retained regions as in the original image, obtaining a new picture that replaces the unprocessed picture as the data sample;
and step six, loading a next picture sample in the data set, and repeating the step one to the step five.
The invention is described in more detail below by way of example.
Step one: Mean Shift image segmentation converges similar pixel points into one region according to the colors of the pixel points and their spatial distribution. The Mean Shift vector is the mean of the offset vectors from the current point to the other sample points; the Mean Shift vector at the point x_0 is defined as:
M_h(x_0) = \frac{1}{K} \sum_{x_i \in S_h} (x_i - x_0)    (1)
where S_h is a high-dimensional sphere region, x_0 is the central sample point, x_i are the other sample points, (x_i - x_0) (i ≠ 0) represents the offset from the central sample point to each sample point, and K is the total number of samples.
Because the influence of other sample points on the central point is related to distance (the closer the distance, the larger the influence; the farther, the smaller), a kernel function and a kernel density function must be introduced into the vector calculation. Since the invention is applied to image processing, the sample points considered are pixel points and their number is large, so a Gaussian kernel function is adopted, with the formula:
K(x) = \frac{1}{\sqrt{2\pi}\, h} e^{-\frac{x^2}{2h^2}}    (2)
considering that a pixel point is represented by a five-dimensional vector (x, y, r, g, b), the kernel function in the present invention is represented as follows:
K_{h_s, h_r}(x) = \frac{C}{h_s^2 h_r^3}\, k\left( \left\| \frac{x^s}{h_s} \right\|^2 \right) k\left( \left\| \frac{x^r}{h_r} \right\|^2 \right)    (3)
where C is a normalization constant, x^s represents the image spatial-domain vector, x^r the color-gamut vector, h_r the color-gamut bandwidth, h_s the spatial-domain bandwidth, and K(x) = C\,k(\|x\|^2), with k(x) the profile function of the Gaussian kernel K(x).
After introducing the kernel function, the Mean Shift vector can be expressed as:
M_h(x) = \frac{\sum_{i=1}^{K} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) (x_i - x)}{\sum_{i=1}^{K} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}    (4)
where g(x) = -k'(x) and h is the kernel size, referred to as the bandwidth.
To find where the density of sample points is greatest, i.e. the center point of a region, the place where the gradient of the kernel density estimate is 0 is calculated. By differentiation, the following can be obtained:
\nabla \hat{f}(x) = \frac{2C}{K h^{d+2}} \left[ \sum_{i=1}^{K} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) \right] \left[ \frac{\sum_{i=1}^{K} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{K} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x \right]    (5)
where G(x) = C_g\, g(\|x\|^2) and C_g is a normalization constant. By simplifying equation (5), the following equation can be obtained:
M_h(x) = \frac{\sum_{i=1}^{K} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{K} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x    (6)
memory mh(x0) Is of the formula:
m_h(x_0) = \frac{\sum_{i=1}^{K} x_i\, g\left( \left\| \frac{x_0 - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{K} g\left( \left\| \frac{x_0 - x_i}{h} \right\|^2 \right)}    (7)
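As an illustration, the Mean Shift update described above can be sketched in NumPy with a Gaussian profile. This is a hedged sketch, not the patent's implementation: the bandwidth h, the stopping tolerance eps and the function names are illustrative assumptions.

```python
import numpy as np

def mean_shift_step(x0, points, h):
    """One Mean Shift update: the weighted mean m_h(x0) of the samples,
    with Gaussian-profile weights g(||(x0 - xi)/h||^2)."""
    d2 = np.sum((points - x0) ** 2, axis=1)   # squared distances ||x0 - xi||^2
    g = np.exp(-d2 / (2.0 * h ** 2))          # Gaussian profile weights
    return (g[:, None] * points).sum(axis=0) / g.sum()

def mean_shift(x0, points, h, eps=1e-3, max_iter=100):
    """Iterate the update until the shift magnitude falls below eps,
    mirroring the epsilon stopping rule of the segmentation procedure."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = mean_shift_step(x, points, h)
        if np.linalg.norm(x_new - x) < eps:
            x = x_new
            break
        x = x_new
    return x
```

In the segmentation loop, each pixel's five-dimensional vector (x, y, r, g, b) would be shifted this way, and pixels converging to the same mode are grouped into one region.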
the image segmentation process is as follows:
(1) traverse the image pixels and find an unmarked pixel point; if no unmarked pixel point is found, stop traversing and the image segmentation is complete;
(2) taking the found unprocessed pixel point as the center point, calculate m_h(x_i); according to M_{h,G}(x_i), find the next point x_{i1} to which the current sample points, and add the current point to the set S. Then, taking x_{i1} as the starting point, continue searching for the next pointed-to point until \|M_{h,G}(x_{ij})\| < ε, at which point iteration stops and the set is S = {x_{i1}, x_{i2}, x_{i3}, ..., x_{ij}}, where i is the region label (denoted N) and j the region size (denoted m); when iteration stops, i is increased by one and a new set is begun;
(3) returning to the step (1) to continue execution;
Step two: after the segmentation of step one, each pixel point has a region label and each labelled region an area value; a flag value sign is set for each region and initialized to 1. A threshold M is set in advance; the image pixel points are traversed, and when the area value of the region containing a pixel is smaller than M, the sign value of that region is set to 0;
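Step two reduces to counting pixels per label. A minimal sketch, assuming step one produced an integer label map the same size as the image (the function name and data layout are illustrative):

```python
import numpy as np

def flag_small_regions(labels, m_threshold):
    """Return a dict region-label -> sign: 0 if the region's pixel
    count (its area) is below the threshold M, otherwise 1."""
    ids, counts = np.unique(labels, return_counts=True)
    return {int(i): (0 if c < m_threshold else 1) for i, c in zip(ids, counts)}
```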
Step three: the color expression of the pixel points is converted from the RGB space to the HSI space with the corresponding conversion formulas:
\theta = \arccos\left[ \frac{\frac{1}{2}\left[ (R - G) + (R - B) \right]}{\sqrt{(R - G)^2 + (R - B)(G - B)}} \right]    (8)

H = \begin{cases} \theta, & B \le G \\ 2\pi - \theta, & B > G \end{cases}    (9)

S = 1 - \frac{3 \min(R, G, B)}{R + G + B}    (10)

I = \frac{R + G + B}{3}    (11)
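A minimal Python sketch of the RGB-to-HSI conversion used in step three, written from the standard geometric form of the conversion (H is returned in radians, S in [0, 1], and I on the 0-255 scale; the exact scaling chosen in the patent may differ):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components 0-255) to (H, S, I)."""
    i = (r + g + b) / 3.0
    total = r + g + b
    s = 0.0 if total == 0 else 1.0 - 3.0 * min(r, g, b) / total
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # clamp guards against rounding slightly outside [-1, 1]
    theta = math.acos(max(-1.0, min(1.0, num / den))) if den != 0 else 0.0
    h = theta if b <= g else 2.0 * math.pi - theta
    return h, s, i
```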
After the H, I, S values of all pixel points in each region are obtained, the average of the three components over each region is calculated:

\bar{h} = \frac{1}{m} \sum_{k=1}^{m} H_k, \quad \bar{i} = \frac{1}{m} \sum_{k=1}^{m} I_k, \quad \bar{s} = \frac{1}{m} \sum_{k=1}^{m} S_k

where m is the number of pixel points in the region.
Each region is traversed by region number, the change values Δh, Δi, Δs of the averages of the three components between adjacent regions are calculated, and three thresholds m_h, m_i, m_s are set. When the difference values of the three components of adjacent regions are all smaller than the corresponding thresholds, and the area of one region is smaller than 1/5 of the other, the sign value of the smaller region is set to 0;
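The adjacent-region color test can then be sketched as below. The threshold triple (m_h, m_i, m_s) is an illustrative guess, since the patent does not fix numeric values:

```python
def should_drop_smaller(mean_a, mean_b, area_a, area_b,
                        thresholds=(0.1, 10.0, 0.1)):
    """mean_a, mean_b are (H, I, S) averages of two adjacent regions.
    Returns True when all three component differences fall below the
    thresholds (m_h, m_i, m_s) AND the smaller area is under 1/5 of
    the larger, i.e. the smaller region gets sign = 0."""
    deltas = [abs(a - b) for a, b in zip(mean_a, mean_b)]
    similar = all(d < t for d, t in zip(deltas, thresholds))
    small, large = min(area_a, area_b), max(area_a, area_b)
    return similar and small < large / 5.0
```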
Step four: the pixel points of a single region are traversed to find the maximum abscissa and ordinate x_max, y_max and the minimum values x_min, y_min among its pixel points. Based on these four coordinate values the region is cut out as an independent picture and scaled to 100 × 100. A 100-dimensional vector S_sp is established: every second row of pixel points, the number w_i of pixels in that row belonging to the region is recorded; every second column, the number h_i of pixels belonging to the region is recorded. This yields the contour feature array of each region, S_sp = {w_1, w_2, ..., w_50, h_1, h_2, ..., h_50}. The variance s² of each array, called the contour variance, is calculated as follows:
s^2 = \frac{1}{100} \left[ \sum_{i=1}^{50} (w_i - \bar{w})^2 + \sum_{i=1}^{50} (h_i - \bar{h})^2 \right]    (12)
where \bar{w} and \bar{h} are the means of the w_i and h_i respectively. After the calculation, two thresholds max and min are set; each region is traversed in a loop, and the sign value of regions with min < s² < max is set to 0;
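The contour variance of step four can be sketched as follows, assuming the region has already been stretched or compressed onto a 100 × 100 boolean mask:

```python
import numpy as np

def contour_variance(mask):
    """Sample 50 row widths (every second row) and 50 column heights
    (every second column) from a 100x100 region mask, then compute
    s^2 = (1/100) [sum (w_i - w_bar)^2 + sum (h_i - h_bar)^2]."""
    mask = np.asarray(mask, dtype=bool)
    w = mask[::2, :].sum(axis=1).astype(float)   # 50 row widths
    h = mask[:, ::2].sum(axis=0).astype(float)   # 50 column heights
    return (((w - w.mean()) ** 2).sum() + ((h - h.mean()) ** 2).sum()) / 100.0
```

A filled square scores 0 (a perfectly smooth contour, kept when s² < min), while moderately irregular shapes fall into the discarded middle band between min and max.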
Step five: the image pixel points are traversed; if the sign value of the region containing a point is 0, the point's RGB value is displayed as (255, 255, 255); if the sign value is 1, the point displays the RGB value of the original image. The new image replaces the original as a data set sample;
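Step five is then a masking operation. A sketch assuming the label map from step one and the sign flags accumulated in steps two to four (names and layout are illustrative):

```python
import numpy as np

def whiteout_discarded(image, labels, sign):
    """Return the new sample: pixels of regions with sign == 0 become
    white (255, 255, 255); retained regions keep their original colors.
    image is HxWx3 uint8, labels is an HxW integer label map."""
    kept_ids = [n for n, s in sign.items() if s == 1]
    keep = np.isin(labels, kept_ids)          # boolean mask of kept pixels
    out = np.full_like(image, 255)            # start from an all-white image
    out[keep] = image[keep]                   # restore kept pixels
    return out
```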
Step six: judge whether all pictures have been processed; if not, repeat steps one to five; if all pictures have been processed, all the obtained pictures are sent as the data set to the network for training.

Claims (2)

1. A method for reducing the neural network training sample size based on image segmentation, characterized in that, before the data set is sent to neural network training, the following operations are carried out on the data set:
step one, segmenting a single picture into regions according to pixel points by means of the Mean Shift algorithm;
step two, labeling each segmented region in segmentation order, counting the region area, finding the region center pixel point, setting a threshold M, and directly discarding regions whose area is smaller than M;
step three, converting the color expression of the retained regions from the RGB color space to the HSI color space, computing the average color expression of each region, and comparing the color change of adjacent regions: where the difference is obvious the regions are kept, and where the difference is not obvious, if the area of the smaller region is less than 1/5 of the larger region, the smaller region is discarded;
After the H, I, S values of all pixel points in each region are obtained, the average of the three components over each region is calculated:

\bar{h} = \frac{1}{m} \sum_{k=1}^{m} H_k, \quad \bar{i} = \frac{1}{m} \sum_{k=1}^{m} I_k, \quad \bar{s} = \frac{1}{m} \sum_{k=1}^{m} S_k

where m is the number of pixel points in the region.
Each region is traversed by region number, the change values Δh, Δi, Δs of the averages of the three components between adjacent regions are calculated, and three thresholds m_h, m_i, m_s are set. When the difference values of the three components of adjacent regions are all smaller than the corresponding thresholds, and the area of one region is smaller than 1/5 of the other, the sign value of the smaller region is set to 0;
step four, extracting the shape characteristics of the reserved area;
stretching or compressing each region into a 100 × 100 square block, scanning out the contour of the region, recording the region width every second row of pixels and the region height every second column of pixels to obtain a 100-dimensional vector, computing the variance s of each vector, setting thresholds max and min, and keeping the regions whose variance is greater than max or smaller than min;
step five, looping over the region numbers, setting the RGB values of pixel points in discarded regions to 255, and displaying the color values of pixel points in retained regions as in the original image, obtaining a new image that replaces the unprocessed image as the data sample; the image pixel points are traversed, and if the sign value of the region containing a point is 0, the point's RGB value is displayed as (255, 255, 255); if the sign value is 1, the point displays the RGB value of the original image, and the new image replaces the original as the data set;
and step six, loading a next picture sample in the data set, repeating the step one to the step five until all pictures are processed, and sending the new data set into a neural network for training.
2. The method for reducing the neural network training sample size based on image segmentation according to claim 1, characterized in that the segmenting of a single picture according to pixel points specifically comprises: each pixel point is expressed by a five-dimensional information vector f = (x, y, r, g, b), where x represents the abscissa, y the ordinate, and r, g, b the color information of the pixel's three channels; the image is divided into different regions according to this five-dimensional vector.
CN201910855228.1A 2019-09-11 2019-09-11 Method for reducing neural network training sample size based on image segmentation Active CN110689057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910855228.1A CN110689057B (en) 2019-09-11 2019-09-11 Method for reducing neural network training sample size based on image segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910855228.1A CN110689057B (en) 2019-09-11 2019-09-11 Method for reducing neural network training sample size based on image segmentation

Publications (2)

Publication Number Publication Date
CN110689057A CN110689057A (en) 2020-01-14
CN110689057B (en) 2022-07-15

Family

ID=69107926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910855228.1A Active CN110689057B (en) 2019-09-11 2019-09-11 Method for reducing neural network training sample size based on image segmentation

Country Status (1)

Country Link
CN (1) CN110689057B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679170A (en) * 2012-09-17 2014-03-26 复旦大学 Method for detecting salient regions based on local features
CN104021553A (en) * 2014-05-30 2014-09-03 哈尔滨工程大学 Sonar image object detection method based on pixel point layering
CN104063707A (en) * 2014-07-14 2014-09-24 金陵科技学院 Color image clustering segmentation method based on multi-scale perception characteristic of human vision
CN104599270A (en) * 2015-01-18 2015-05-06 北京工业大学 Breast neoplasms ultrasonic image segmentation method based on improved level set algorithm
CN105373794A (en) * 2015-12-14 2016-03-02 河北工业大学 Vehicle license plate recognition method
CN105678711A (en) * 2016-01-29 2016-06-15 中国科学院高能物理研究所 Attenuation correction method based on image segmentation
CN107016409A (en) * 2017-03-20 2017-08-04 华中科技大学 A kind of image classification method and system based on salient region of image
CN107330883A (en) * 2017-07-04 2017-11-07 南京信息工程大学 A kind of medical image lesion region positioning and sorting technique
CN107403200A (en) * 2017-08-10 2017-11-28 北京亚鸿世纪科技发展有限公司 Improve the multiple imperfect picture sorting technique of image segmentation algorithm combination deep learning
CN109102512A (en) * 2018-08-06 2018-12-28 西安电子科技大学 A kind of MRI brain tumor image partition method based on DBN neural network
CN109117937A (en) * 2018-08-16 2019-01-01 杭州电子科技大学信息工程学院 A kind of Leukocyte Image processing method and system subtracted each other based on connection area

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9741125B2 (en) * 2015-10-28 2017-08-22 Intel Corporation Method and system of background-foreground segmentation for image processing

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679170A (en) * 2012-09-17 2014-03-26 复旦大学 Method for detecting salient regions based on local features
CN104021553A (en) * 2014-05-30 2014-09-03 哈尔滨工程大学 Sonar image object detection method based on pixel point layering
CN104063707A (en) * 2014-07-14 2014-09-24 金陵科技学院 Color image clustering segmentation method based on multi-scale perception characteristic of human vision
CN104599270A (en) * 2015-01-18 2015-05-06 北京工业大学 Breast neoplasms ultrasonic image segmentation method based on improved level set algorithm
CN105373794A (en) * 2015-12-14 2016-03-02 河北工业大学 Vehicle license plate recognition method
CN105678711A (en) * 2016-01-29 2016-06-15 中国科学院高能物理研究所 Attenuation correction method based on image segmentation
CN107016409A (en) * 2017-03-20 2017-08-04 华中科技大学 A kind of image classification method and system based on salient region of image
CN107330883A (en) * 2017-07-04 2017-11-07 南京信息工程大学 A kind of medical image lesion region positioning and sorting technique
CN107403200A (en) * 2017-08-10 2017-11-28 北京亚鸿世纪科技发展有限公司 Improve the multiple imperfect picture sorting technique of image segmentation algorithm combination deep learning
CN109102512A (en) * 2018-08-06 2018-12-28 西安电子科技大学 A kind of MRI brain tumor image partition method based on DBN neural network
CN109117937A (en) * 2018-08-16 2019-01-01 杭州电子科技大学信息工程学院 A kind of Leukocyte Image processing method and system subtracted each other based on connection area

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Yuan Wang et al., "LBP-SVD Based Copy Move Forgery Detection Algorithm", 2017 IEEE International Symposium on Multimedia (ISM), 2018. *
Diao Zhihua et al., "Image segmentation method for cotton pest mites based on color and shape features", Computer Applications and Software, 2013, Vol. 30, No. 12. *
Zhang Zhi et al., "Research on static human detection in indoor environments combined with image segmentation", Journal of Chinese Computer Systems, 2019, Vol. 40, No. 5. *
Chen Shuyue et al., "Image segmentation algorithm for adhesive grain-storage pests based on concave point detection", Computer Engineering, 2017, Vol. 44, No. 6. *
Jin Peifei et al., "Pedestrian detection based on support vector machine extraction of regions of interest", Computer Engineering and Design, 2017, Vol. 38, No. 4. *

Also Published As

Publication number Publication date
CN110689057A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
Sankaranarayanan et al. Learning from synthetic data: Addressing domain shift for semantic segmentation
CN108537239B (en) Method for detecting image saliency target
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN108280397B (en) Human body image hair detection method based on deep convolutional neural network
CN108629783B (en) Image segmentation method, system and medium based on image feature density peak search
CN110163239B (en) Weak supervision image semantic segmentation method based on super-pixel and conditional random field
CN109410168B (en) Modeling method of convolutional neural network for determining sub-tile classes in an image
CN106778687B (en) Fixation point detection method based on local evaluation and global optimization
CN110866896B (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN111583279A (en) Super-pixel image segmentation method based on PCBA
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
CN114445651A (en) Training set construction method and device of semantic segmentation model and electronic equipment
CN109741358B (en) Superpixel segmentation method based on adaptive hypergraph learning
CN112446417B (en) Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
CN111274964A (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
Alvarez et al. Large-scale semantic co-labeling of image sets
CN110689057B (en) Method for reducing neural network training sample size based on image segmentation
CN110705394B (en) Scenic spot crowd behavior analysis method based on convolutional neural network
CN112132145A (en) Image classification method and system based on model extended convolutional neural network
Bian et al. Efficient hierarchical temporal segmentation method for facial expression sequences
CN113014923B (en) Behavior identification method based on compressed domain representation motion vector
Xia et al. Lazy texture selection based on active learning
CN114882303A (en) Livestock counting method based on frame filtering and target detection
CN107491761B (en) Target tracking method based on deep learning characteristics and point-to-set distance metric learning
CN109685119B (en) Random maximum pooling depth convolutional neural network noise pattern classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant