CN111339823A - Threshing and sunning ground detection method based on machine vision and back projection algorithm - Google Patents

Threshing and sunning ground detection method based on machine vision and back projection algorithm

Info

Publication number
CN111339823A
CN111339823A (application CN201911398320.6A)
Authority
CN
China
Prior art keywords
threshing
background
gaussian
image
gaussian distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911398320.6A
Other languages
Chinese (zh)
Inventor
郭唐仪
杨洁
练智超
丁俊杰
刘悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Aites Technology Co ltd
Original Assignee
Nanjing Aites Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Aites Technology Co ltd filed Critical Nanjing Aites Technology Co ltd
Priority to CN201911398320.6A
Publication of CN111339823A
Status: Withdrawn


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/35 - Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38 - Outdoor scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour

Abstract

The invention discloses a threshing and sunning ground detection method based on machine vision and a back projection algorithm. The method performs background modeling on acquired frame images, separating moving objects from the background to obtain a background image; segments the background image with a PSPNet semantic segmentation model and extracts the road area in the image; back-projects the road with a back projection algorithm to extract suspicious regions whose color is similar to that of grain; classifies the suspicious regions with a support vector machine to obtain candidate regions that match the characteristics of a threshing and sunning ground; and, for every candidate region, judges whether it contains a sub-area labeled as threshing and sunning ground by the semantic segmentation model, and if so, judges that region to be a threshing and sunning ground area. The method effectively reduces the interference of complex backgrounds on the detection task and improves the accuracy and reliability of threshing and sunning ground detection.

Description

Threshing and sunning ground detection method based on machine vision and back projection algorithm
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a threshing and sunning ground detection method based on machine vision and a back projection algorithm.
Background
The threshing and sunning ground event refers to illegal encroachment, such as sunning or piling grain, within the right-of-way of highways and roads. A threshing and sunning ground that occupies the road not only obstructs the normal passage of vehicles but also creates a serious traffic-safety hazard and may even cause accidents. At present, patrols against road-occupying threshing and sunning grounds rely mainly on manual inspection by road-station law enforcement personnel, which consumes manpower and material resources and is inefficient.
With the rapid development of smart cities, the security industry market keeps growing and is being transformed toward large-scale, automated, and intelligent operation; meanwhile, with the rapid development of new-generation information technologies such as artificial intelligence, cloud computing, and the Internet of Things, integrating machine vision into video monitoring systems is an inevitable trend. However, road scene images under a monitoring viewpoint have complex backgrounds, small target scales, and degraded texture features; applying conventional methods to threshing and sunning ground detection on such images without improvement yields low detection accuracy and reliability.
Disclosure of Invention
The invention provides a threshing and sunning ground detection method based on machine vision and a back projection algorithm.
The technical solution for realizing the invention is as follows: a threshing and sunning ground detection method based on machine vision and a back projection algorithm comprises the following specific steps:
step 1, carrying out background modeling on the acquired frame image, and separating a moving object from a background to obtain a background image;
step 2, segmenting the background image by utilizing a PSPNet semantic segmentation model, and extracting a road area in the image;
step 3, back-projecting the road using a back projection algorithm and extracting suspicious regions whose color is similar to that of grain;
step 4, classifying the suspicious threshing and sunning ground regions with a support vector machine to obtain candidate regions that match threshing and sunning ground characteristics;
and step 5, judging, for all candidate regions, whether a sub-region was labeled as threshing and sunning ground by the semantic segmentation model of step 2; if so, judging that region to be a threshing and sunning ground area.
Preferably, the method for performing background modeling on the frame image of the traffic monitoring video stream, separating the moving object from the background, and obtaining the background image specifically comprises the following steps:
step 1.1, establishing a Gaussian mixture model for each pixel in the first frame image, each mixture model consisting of K Gaussian distributions sorted in descending order of ω_{i,t}/σ_{i,t}, where σ_{i,t} is the standard deviation of the i-th Gaussian distribution at time t and ω_{i,t} is the weight of the i-th Gaussian distribution at time t;
step 1.2, if some Gaussian distribution satisfies |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}, where μ_{i,t−1} is the mean of the corresponding Gaussian distribution of the previous frame's model at that pixel, its parameters are updated as follows:

ω_{i,t} = (1 − α)ω_{i,t−1} + α

μ_{i,t} = (1 − ρ)μ_{i,t−1} + ρX_t

σ²_{i,t} = (1 − ρ)σ²_{i,t−1} + ρ(X_t − μ_{i,t})ᵀ(X_t − μ_{i,t})

where ω_{i,t} denotes the weight of the distribution at time t; α denotes the learning rate, α = 0.005; μ_{i,t} denotes the mean at time t; ρ = α·η(X_t | μ_{i,t}, σ_{i,t}) denotes the parameter update rate; and σ²_{i,t} denotes the variance at time t;
Gaussian distributions that do not satisfy the condition only update the weight: ω_{i,t} = (1 − α)ω_{i,t−1};
if none of the pixel's Gaussian distributions satisfies |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}, a new Gaussian distribution is introduced to replace the lowest-priority one, with mean X_t, standard deviation σ_init, and weight ω_init;
normalizing the weights of the Gaussian distributions so that they sum to one;
calculating the priority λ_{i,t} = ω_{i,t}/σ_{i,t} of each Gaussian distribution of the pixel in the current image, sorting in descending order, and selecting the first B Gaussian distributions as the background model of the pixel;
step 1.3, checking the pixel value X_t of each pixel at time t against the probability density functions η of the first B Gaussian distributions; if X_t matches one of them, the pixel is set as a background point, otherwise as foreground, yielding a foreground mask;
and performing erosion and dilation on the obtained foreground mask, inverting it to obtain a background mask, and performing a bitwise AND of the background mask with the original image to obtain the current background image.
Preferably, the probability that the pixel takes the value X_t at time t is:

P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})

where ω_{i,t} is the weight of the i-th Gaussian distribution at time t, μ_{i,t} is its mean, Σ_{i,t} is its covariance matrix, and η(X_t, μ_{i,t}, Σ_{i,t}) is the Gaussian probability density function.
Preferably, the Gaussian probability density function is specifically:

η(X_t, μ, Σ) = (2π)^{−n/2} |Σ|^{−1/2} · exp(−½ (X_t − μ)ᵀ Σ⁻¹ (X_t − μ))

where n is the dimension of X_t.
Preferably, the PSPNet semantic segmentation model comprises a base layer, a pyramid pooling module, and convolution layers connected in sequence; the pyramid pooling module comprises a pooling layer and four dilated convolutions with different dilation rates; the base layer is a ResNet network pre-trained with a dilated convolution strategy.
Preferably, the specific method of segmenting the background image with the PSPNet semantic segmentation model and extracting the road area in the image is as follows:
taking the background image obtained in step 1 as the input of the semantic segmentation model, and extracting features in the base layer with a ResNet network pre-trained with a dilated convolution strategy;
feeding the extracted features to the pyramid pooling module, which extracts feature maps at four different scales through four dilated convolutions with different dilation rates;
up-sampling the four feature maps so that their scale matches that of the ResNet output feature map;
concatenating the ResNet output feature map with the four up-sampled feature maps as the output of the pyramid pooling module;
and connecting the output of the pyramid pooling module to N convolution layers to extract further features and output a semantic segmentation feature map, from which the road area is extracted.
Compared with the prior art, the invention has the following remarkable advantages:
1. the method detects threshing and sunning grounds with feature-based detection and a back projection algorithm, reducing labor costs, improving inspection efficiency, and offering a degree of innovation;
2. the method extracts an initial background picture from the complex road scene with a background-difference method and a back projection algorithm, and detects the threshing and sunning ground by combining the color characteristics of grain, which reduces the interference of the complex background on the detection task to a certain extent and improves the robustness of the algorithm.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a background modeling flowchart of the Gaussian mixture model according to the present invention.
FIG. 3 is a diagram of a semantic segmentation model network according to the present invention.
FIG. 4 is a diagram illustrating the detection effect of the present invention.
Detailed Description
As shown in fig. 1, a threshing and sunning ground detection method based on machine vision and back projection algorithm includes the following steps:
step 1, as shown in fig. 2, GMM is adopted to perform background modeling on a frame image of a traffic monitoring video stream, and a moving object and a background are separated to obtain a background image;
In a further embodiment, the background modeling method may be, but is not limited to, frame differencing, GMM, SACON, ViBe, or another background modeling method; in this embodiment, GMM background modeling is adopted to obtain the background image, specifically:
Step 1.1, reading in the first frame image and establishing a Gaussian mixture model for each pixel in the image. A mixture of K (K = 3) Gaussian distributions represents the probability distribution of the same pixel over time; the probability that the pixel takes the value X_t at time t is:

P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})

where ω_{i,t} is the weight of the i-th Gaussian distribution at time t, μ_{i,t} is its mean, Σ_{i,t} is its covariance matrix, and η(X_t, μ_{i,t}, Σ_{i,t}) is the Gaussian probability density function:

η(X_t, μ, Σ) = (2π)^{−n/2} |Σ|^{−1/2} · exp(−½ (X_t − μ)ᵀ Σ⁻¹ (X_t − μ))

where n is the dimension of X_t. The K Gaussian distributions of each pixel are always sorted in descending order of ω_{i,t}/σ_{i,t}, where σ_{i,t} represents the standard deviation of the i-th Gaussian distribution at time t.
Step 1.2, when a new image frame arrives, comparing the value of each pixel with the K Gaussian distributions of that pixel in the previous frame's model. If some Gaussian distribution satisfies |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}, where μ_{i,t−1} is the mean of the corresponding Gaussian distribution of the previous frame's model at that pixel, its parameters are updated as follows:

ω_{i,t} = (1 − α)ω_{i,t−1} + α

μ_{i,t} = (1 − ρ)μ_{i,t−1} + ρX_t

σ²_{i,t} = (1 − ρ)σ²_{i,t−1} + ρ(X_t − μ_{i,t})ᵀ(X_t − μ_{i,t})

where ω_{i,t} denotes the weight of the distribution at time t; α denotes the learning rate, α = 0.005; μ_{i,t} denotes the mean at time t; ρ = α·η(X_t | μ_{i,t}, σ_{i,t}) denotes the parameter update rate; and σ²_{i,t} denotes the variance at time t.
The mean and variance of Gaussian distributions that do not satisfy the condition are left unchanged; only the weight is modified: ω_{i,t} = (1 − α)ω_{i,t−1}.
If none of the pixel's Gaussian distributions satisfies the condition, a new Gaussian distribution is introduced to replace the lowest-priority one, with mean X_t, standard deviation σ_init, and weight ω_init (ω_init = 1/K).
After the update, the weights of the Gaussian distributions are normalized so that they sum to one;
the priority λ_{i,t} = ω_{i,t}/σ_{i,t} of each Gaussian distribution of the pixel in the current image is then calculated and sorted in descending order, and the first B Gaussian distributions are selected as the background model of the pixel, where the parameter T represents the proportion of the background (T = 0.7):

B = argmin_b ( Σ_{i=1}^{b} ω_{i,t} > T )
Step 1.3, checking the pixel value X_t of each pixel at time t against the probability density functions η of the first B Gaussian distributions; if X_t matches one of them, the pixel is set as a background point, otherwise as foreground, i.e. a moving object, thereby obtaining a foreground mask;
erosion and dilation are then performed on the obtained foreground mask, which is inverted to obtain a background mask, and a bitwise AND of the background mask with the original image yields the current background image.
Step 2: and segmenting the background image by using a PSPNet semantic segmentation model, and extracting a road area in the image.
As shown in fig. 3, the PSPNet semantic segmentation model includes a base layer, a pyramid pooling module, and a convolution layer, which are connected in sequence.
The pyramid pooling module comprises a pooling layer and four dilated convolutions with different dilation rates.
The base layer is a ResNet network pre-trained with a dilated convolution strategy.
In a further embodiment, the background image obtained in step 1 is used as the input of the semantic segmentation model, and the base layer extracts features with a ResNet network pre-trained with a dilated convolution strategy; the extracted features are fed to the pyramid pooling module, which extracts feature maps at four different scales (of sizes 1 × 1, 2 × 2, 3 × 3, and 6 × 6) through four dilated convolutions with different dilation rates; the four feature maps are up-sampled to the scale of the ResNet output feature map, and the ResNet output feature map is then concatenated with the four up-sampled feature maps as the output of the pyramid pooling module; finally, the output of the pyramid pooling module is connected to N convolution layers to extract further features and output a semantic segmentation feature map, from which the road area is extracted.
And step 3, back-projecting the road using a back projection algorithm and extracting suspicious regions whose color is similar to that of grain. The specific method is: compute the histograms of the feature image (a threshing and sunning ground sample) and of the road area extracted in step 2; for each pixel of the road area, look up the bin corresponding to its hue value in the feature image histogram and record that bin's value for the pixel, thereby obtaining the suspicious threshing and sunning ground regions.
Step 4, classifying the suspicious threshing and sunning ground regions with a support vector machine to obtain candidate regions that match threshing and sunning ground characteristics, specifically:
cropping the threshing and sunning grounds in all traffic monitoring images and uniformly scaling them to 48 × 48-pixel pictures as the positive sample set P; dividing areas of the traffic monitoring images that contain no threshing and sunning ground into several 48 × 48-pixel pictures as the negative sample set N; and extracting the LBP texture features of all positive and negative samples and using them to train a support vector machine, whose kernel is a linear kernel and whose penalty factor C is set to 1.319.
The suspicious regions are input into the trained support vector machine for classification, yielding candidate regions that match threshing and sunning ground characteristics.
And step 5, judging, for all candidate regions, whether a sub-region was labeled as threshing and sunning ground in the semantic segmentation feature map of step 2; if so, that sub-region is judged to be a threshing and sunning ground area.
As shown in fig. 4, the result was obtained under good weather and sight-distance conditions; since threshing and sunning ground events occur at specific times and in specific weather, and the grain color is distinctive, the detection precision is high, with detection accuracy reaching 92%.

Claims (5)

1. A threshing and sunning ground detection method based on machine vision and a back projection algorithm is characterized by comprising the following specific steps:
step 1, carrying out background modeling on the acquired frame image, and separating a moving object from a background to obtain a background image;
step 2, segmenting the background image by utilizing a PSPNet semantic segmentation model, and extracting a road area in the image;
step 3, back-projecting the road using a back projection algorithm and extracting suspicious regions whose color is similar to that of grain;
step 4, classifying the suspicious threshing and sunning ground regions with a support vector machine to obtain candidate regions that match threshing and sunning ground characteristics;
and step 5, judging, for all candidate regions, whether a sub-region was labeled as threshing and sunning ground by the semantic segmentation model of step 2; if so, judging that region to be a threshing and sunning ground area.
2. The threshing and sunning ground detection method based on machine vision and a back projection algorithm of claim 1, characterized in that background modeling is performed on the frame images of the traffic monitoring video stream and moving objects are separated from the background; the specific method for obtaining the background image is as follows:
step 1.1, establishing a Gaussian mixture model for each pixel in the first frame image, each mixture model consisting of K Gaussian distributions sorted in descending order of ω_{i,t}/σ_{i,t}, where σ_{i,t} is the standard deviation of the i-th Gaussian distribution at time t and ω_{i,t} is the weight of the i-th Gaussian distribution at time t;
step 1.2, if the pixel has a Gaussian distribution satisfying |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}, where μ_{i,t−1} is the mean of the corresponding Gaussian distribution of the previous frame's model at that pixel, its parameters are updated as follows:

ω_{i,t} = (1 − α)ω_{i,t−1} + α

μ_{i,t} = (1 − ρ)μ_{i,t−1} + ρX_t

σ²_{i,t} = (1 − ρ)σ²_{i,t−1} + ρ(X_t − μ_{i,t})ᵀ(X_t − μ_{i,t})

where ω_{i,t} denotes the weight of the distribution at time t; α denotes the learning rate, α = 0.005; μ_{i,t} denotes the mean at time t; ρ = α·η(X_t | μ_{i,t}, σ_{i,t}) denotes the parameter update rate; and σ²_{i,t} denotes the variance at time t;
Gaussian distributions that do not satisfy the condition only update the weight: ω_{i,t} = (1 − α)ω_{i,t−1};
if none of the pixel's Gaussian distributions satisfies |X_t − μ_{i,t−1}| ≤ 2.5σ_{i,t−1}, a new Gaussian distribution is introduced to replace the lowest-priority one, with mean X_t, standard deviation σ_init, and weight ω_init;
normalizing the weights of the Gaussian distributions so that they sum to one;
calculating the priority λ_{i,t} = ω_{i,t}/σ_{i,t} of each Gaussian distribution of the pixel in the current image, sorting in descending order, and selecting the first B Gaussian distributions as the background model of the pixel;
step 1.3, checking the pixel value X_t of each pixel at time t against the probability density functions η of the first B Gaussian distributions; if X_t matches one of them, the pixel is set as a background point, otherwise as foreground, yielding a foreground mask;
and performing erosion and dilation on the obtained foreground mask, inverting it to obtain a background mask, and performing a bitwise AND of the background mask with the original image to obtain the current background image.
3. The threshing and sunning ground detection method based on machine vision and a back projection algorithm according to claim 2, characterized in that the Gaussian probability density function is specifically:

η(X_t, μ, Σ) = (2π)^{−n/2} |Σ|^{−1/2} · exp(−½ (X_t − μ)ᵀ Σ⁻¹ (X_t − μ))

where n is the dimension of X_t.
4. The threshing and sunning ground detection method based on machine vision and a back projection algorithm of claim 1, characterized in that the PSPNet semantic segmentation model comprises a base layer, a pyramid pooling module, and convolution layers connected in sequence; the pyramid pooling module comprises a pooling layer and four dilated convolutions with different dilation rates; the base layer is a ResNet network pre-trained with a dilated convolution strategy.
5. The threshing and sunning ground detection method based on machine vision and a back projection algorithm as claimed in claim 1, characterized in that the background image is segmented with a PSPNet semantic segmentation model; the specific method for extracting the road area in the image is as follows:
taking the background image obtained in step 1 as the input of the semantic segmentation model, and extracting features in the base layer with a ResNet network pre-trained with a dilated convolution strategy;
feeding the extracted features to the pyramid pooling module, which extracts feature maps at four different scales through four dilated convolutions with different dilation rates;
up-sampling the four feature maps so that their scale matches that of the ResNet output feature map;
concatenating the ResNet output feature map with the four up-sampled feature maps as the output of the pyramid pooling module;
and connecting the output of the pyramid pooling module to N convolution layers to extract further features and output a semantic segmentation feature map, from which the road area is extracted.
CN201911398320.6A 2019-12-30 2019-12-30 Threshing and sunning ground detection method based on machine vision and back projection algorithm Withdrawn CN111339823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398320.6A CN111339823A (en) 2019-12-30 2019-12-30 Threshing and sunning ground detection method based on machine vision and back projection algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911398320.6A CN111339823A (en) 2019-12-30 2019-12-30 Threshing and sunning ground detection method based on machine vision and back projection algorithm

Publications (1)

Publication Number Publication Date
CN111339823A (en) 2020-06-26

Family

ID=71181400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911398320.6A Withdrawn CN111339823A (en) 2019-12-30 2019-12-30 Threshing and sunning ground detection method based on machine vision and back projection algorithm

Country Status (1)

Country Link
CN (1) CN111339823A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539494A (en) * 2020-07-08 2020-08-14 浙江浙能天然气运行有限公司 Hydraulic protection damage detection method based on U-Net and SVM
CN113269794A (en) * 2021-05-27 2021-08-17 中山大学孙逸仙纪念医院 Image area segmentation method and device, terminal equipment and storage medium


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130148853A1 (en) * 2011-12-12 2013-06-13 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method
CN110532876A (en) * 2019-07-26 2019-12-03 纵目科技(上海)股份有限公司 Night mode camera lens pays detection method, system, terminal and the storage medium of object

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Yong (李勇): "Foreground object extraction algorithm based on an improved Gaussian mixture model" (in Chinese)


Similar Documents

Publication Publication Date Title
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
Siriborvornratanakul An automatic road distress visual inspection system using an onboard in-car camera
CN106683119B (en) Moving vehicle detection method based on aerial video image
Chen et al. Vehicle detection in high-resolution aerial images based on fast sparse representation classification and multiorder feature
CN109325502B (en) Shared bicycle parking detection method and system based on video progressive region extraction
CN111709416A (en) License plate positioning method, device and system and storage medium
CN111160205B (en) Method for uniformly detecting multiple embedded types of targets in traffic scene end-to-end
CN111340855A (en) Road moving target detection method based on track prediction
CN108416316B (en) Detection method and system for black smoke vehicle
CN109635789B (en) High-resolution SAR image classification method based on intensity ratio and spatial structure feature extraction
CN105868734A (en) Power transmission line large-scale construction vehicle recognition method based on BOW image representation model
CN110717886A (en) Pavement pool detection method based on machine vision in complex environment
CN111582339A (en) Vehicle detection and identification method based on deep learning
CN111582070B (en) Foreground extraction method for detecting video sprinkles on expressway
CN114596500A (en) Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV3plus
CN111339823A (en) Threshing and sunning ground detection method based on machine vision and back projection algorithm
CN110889360A (en) Crowd counting method and system based on switching convolutional network
CN115376108A (en) Obstacle detection method and device in complex weather
CN111524121A (en) Road and bridge fault automatic detection method based on machine vision technology
Ghahremannezhad et al. Automatic road detection in traffic videos
CN113158954B (en) Automatic detection method for zebra crossing region based on AI technology in traffic offsite
Harianto et al. Data augmentation and faster rcnn improve vehicle detection and recognition
CN111160282B (en) Traffic light detection method based on binary Yolov3 network
CN112464731A (en) Traffic sign detection and identification method based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200626