CN107704864A - Well-marked target detection method based on image object Semantic detection - Google Patents
Well-marked target detection method based on image object Semantic detection Download PDFInfo
- Publication number
- CN107704864A CN107704864A CN201610546190.6A CN201610546190A CN107704864A CN 107704864 A CN107704864 A CN 107704864A CN 201610546190 A CN201610546190 A CN 201610546190A CN 107704864 A CN107704864 A CN 107704864A
- Authority
- CN
- China
- Prior art keywords
- image
- window
- detection
- characteristic
- under
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/24155—Bayesian classification
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a kind of well-marked target detection method based on image object Semantic detection, comprise the following steps:S1:Multiple images and its well-marked target annotation results figure composition sample database are found out at random;S2:The random intensive sampling x in each image range1、x2、y1And y2, generation detection window w (x1,y1,x2,y2);S3:Objectivity marginal density feature BF, the objectivity convex closure feature CF of image and the objectivity brightness contrast feature LF of image of image are calculated under detection window w;S4:Using the probability density of each characteristic value in S3 under Bayesian frame statistic mixed-state window, and calculate the conditional probability of each feature;S5:Three kinds of characteristics of image, which are merged, using model-naive Bayesian establishes well-marked target identification model;S6:The well-marked target that image is carried out using aforesaid way is detected:Image to be detected I ' is inputted, calculates characteristic value BF, CF and LF under each detection window respectively, Fusion Features are carried out using the model-naive Bayesian in S5, the testing result of well-marked target is marked using non-maxima suppression selection best window.
Description
Technical Field
The invention relates to the technical field of image detection, in particular to a salient object detection method based on image object semantic detection.
Background
The salient object detection of the image refers to the extraction of a target object interested by a visual organ from the image, and the identification of the salient object has important research significance for the aspects of retrieval, classification and detection of the image, target tracking and the like. At present, most visual saliency target detection is mainly based on image global features, and the feature saliency difference is extracted to be used as a saliency map of an image, so that the use of image object semantics is lacked; in the other part of algorithms, methods for calculating layer semantic features such as SIFT, BOW and the like in the image are adopted, so that the calculated amount in the detection process is increased; meanwhile, the saliency map, which is a detection result of a salient object, cannot highlight the most noticeable object and cannot specify the specific position of the salient object.
Disclosure of Invention
According to the problems in the prior art, the invention discloses a method for detecting a salient object based on image object semantic detection, which starts from the edge significance, the brightness significance and the convex hull object significance of an image, provides a method for defining and calculating image object features under a detection window, and utilizes a Bayes frame to realize the detection of the definite position of the salient object, and the specific scheme is as follows:
s1: randomly finding out a plurality of images and a remarkable target marking result graph to form a sample database;
s2: randomly densely sampling x within each image range 1 、x 2 、y 1 And y 2 Generating a detection window w (x) 1 ,y 1 ,x 2 ,y 2 );
S3: calculating an object edge density characteristic BF of the image, an object convex hull characteristic CF of the image and an object brightness contrast characteristic LF of the image under a detection window w;
s4: and (3) counting the probability density of each characteristic value in the S3 under the detection window by adopting a Bayesian framework, and calculating the conditional probability of each characteristic.
S5: a naive Bayes model is used for fusing three image characteristics to establish a significant target recognition model;
s6: the method is adopted for detecting the salient objects of the images: and inputting an image I' to be detected, respectively calculating characteristic values BF, CF and LF under each detection window, performing characteristic fusion by using a naive Bayes model in S5, and selecting an optimal window by adopting non-maximum value inhibition to mark a detection result of a significant target.
The calculation of the objectification edge density characteristic BF of the image adopts the following mode:
converting a known image into a corresponding gray scale image, obtaining a binary edge image by using canny edge detection, filling an edge contour gap to obtain an image Ic, and considering the image as a gap when the pixel interval of the image is more than 1; under the detection window W (W, H), the continuous edge density of the image is defined as:
wherein E r Representing a continuous edge, S, within the annular window r Is the area of a rectangular ring, R w And R h The width and the height of the set annular window are shown, and the calculation formula is as follows:
R w =W/4 (2)
R h =H/4 。 (3)
the object convex hull characteristic CF of the image is calculated by adopting the following method:
obtaining an image I by a clustering method m To find the neighborhood image I of the image n Then, a curvature scale space angular point extraction algorithm of self-adaptive threshold and angle is utilized to solve the neighborhood image I n C1; neighborhood image I n The following calculation is adopted:
I n =g k *I m -I m (4)
wherein g is k Averaging convolution operators for the image;
setting image edge threshold, removing image edge corner point c 2 The set of image salient object points is finally defined as: c = { C 1 -c 2 Solving a convex hull C of the image by utilizing a Graham algorithm and calculating a convex hull area S c . Defined under the window w, the convex hull object feature CF is:
the object brightness contrast characteristic LF of the image is as follows:
knowing the detection window W (W, H) and its peripheral window ring W', making the area of the rectangular ring equal to the area of the window W, the width and height of the window ring are set as:
using the integral image to respectively count the brightness histograms of the images in w and w ', and performing normalization processing to obtain H (w) and H (w'):
defining the contrast characteristics of the image brightness center under the window as follows:
where N is the number of bins in the histogram.
S4: when the Bayesian framework is adopted for feature training, the following method is adopted:
for the sampleGenerating a detection window w for each image in the library, and counting whether the foreground image and the background image are in the window to obtain N o And N b In which N is o Number of foreground images representing statistics, N b Representing the number of statistical background images, and calculating the prior probabilities of the foreground image and the background image as follows:
p(b)=1-p(o)(10)
calculating according to the three target characteristics in the above steps, counting the probability density of each characteristic value under the detection window, and then calculating the conditional probability of each characteristic as:
where F = { EF, CF, LF } represents the objective feature, and H (F (w)) represents the number of pixels in the probability statistical distribution for each feature value.
In S5, three image characteristics are fused by using a naive Bayes model, and the establishment of a significant target recognition model is as follows:
due to the adoption of the technical scheme, the invention sets the characteristics that the salient object of the image has continuous edges, the consistency of internal elements and the consistency of brightness from the viewpoint of the image saliency to define the object semantic features of the image to accord with the following three conditions: the 1 salient object has edge density semantic features, the 2 salient object convex hull contains interest points which concentrate most of visual positioning areas, the 2 salient object has obvious object semantic features, and the 3 salient object has semantic features with brightness consistency. The method starts from basic visual features, realizes the definition and calculation of three significant target object semantic features under a detection window, realizes the learning of image object features under a Bayes framework, establishes a detection model capable of quickly and accurately detecting the specific position of a significant target, and can be more generally applied to the detection of the significant target of common object types.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of a technical scheme of the identification method of the present invention;
FIG. 2 is a flowchart illustrating extraction of convex hull objects in an embodiment;
FIG. 3 (a), FIG. 3 (b), FIG. 3 (c) and FIG. 3 (d) are schematic diagrams illustrating a target source image to be determined in an embodiment;
FIG. 4 (a) FIG. 4 (b) is a schematic illustration of a detection window generated;
FIG. 5 (a) FIG. 5 (b) is a binary continuous edge image in an embodiment;
FIG. 6 (a) FIG. 6 (b) is a feature diagram of a convex hull object in an embodiment;
FIG. 7 (a) FIG. 7 (b) is a graph showing the result of extracting significant objects in the example;
fig. 8 (a) fig. 8 (b) is a significant object result detected in the example.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
as shown in fig. 1, the method for detecting a salient object based on image object semantic detection specifically includes the following steps:
A. firstly, the sample data source:
from the image database Pascal VOC and Caltech101, 200 images with one target are randomly found, which include 20 categories, including articles, planes, etc.
B. Generating a detection window:
before the training process starts, for each image, a detection window is randomly generated: the height and width of the image are respectively h and w, the size of the salient object is not less than 10 multiplied by 10 pixels, and x is randomly sampled in the image range 1 、x 2 、y 1 And y 2 So that it satisfies the condition (x) 2 -x 1 )>10,(y 2 -y 1 ) > 10, 1000 sliding windows (x) are generated 1 ,y 1 ,x 2 ,y 2 ). Fig. 3 shows the number of windows generated, the number of windows randomly generated from right to left being 5, 50, 100 and 1000, respectively.
C. Extracting the edge density characteristics of the object under the detection window w:
firstly, the picture is changed into a gray scale image, then a binary edge image is obtained by canny edge detection, and an edge contour gap (the gap is considered to be a gap if the interval is more than 1 pixel) is filled to obtain an image Ic. Under the detection window W (W, H), the continuous edge density EF of the image is calculated according to the formulas (1), (2) and (3).
D. Extracting the features of the object convex hull under the detection window w according to the method shown in fig. 2:
firstly, median filtering is carried out to remove partial noise of the image, and then a Meanshift clustering method is adopted to obtain an image I m . The specific operation is that, in the luv color mode, a kernel function is defined:the mean shift vector calculation is performed using the following formula:
where g (x) = -K' (x), h is the bandwidth of the kernel function, the set value is 10.
And (4) solving an image neighborhood image In according to a formula (4), and solving a CSS corner set c1 of the In by utilizing a self-adaptive threshold and angle curvature scale space corner extraction algorithm. Setting an image edge threshold, removing an image edge corner point c2, and defining a set of image salient object points as: c = { C 1 -c 2 }. Then, the Graham algorithm is utilized to obtain the convex hull c and calculate the convex hull area S c . Then, the characteristic of the object convex hull is calculated according to the formula (5). In the calculation process, the integral image is adopted to calculate the value of each area so as to accelerate the running speed.
E. And then extracting the contrast characteristics of the image target brightness under the rectangular ring:
firstly, converting an image from an rgb mode to a lab mode, knowing a detection window W (W, H), setting the width and the height of a peripheral window ring according to formulas (6) and (7), then respectively counting the brightness histograms of the images in W and W' by using an integral image, carrying out normalization processing, and then calculating the contrast characteristic of the brightness center of the image under the detection window by using a formula (8).
F. And (3) performing feature training by adopting a Bayesian framework:
for each image of the sample library, a detection window w is generated, the image within the window w is compared to the positive (target) sample image of the marker (box) in the sample image: the image area within window w is known as s w And the image area in the artificial labeling box in the sample image is s b CalculatingWhen the result is p&And gt, 0.5, which means that the image is a foreground image and the label value is set to be 1, otherwise, the image is a background image and the label value is set to be-1. Whether the foreground image and the background image are in the window or not is counted to obtain N o And N b In which N is o Number of foreground images representing statistics, N b Representing the number of statistical background images. Calculating the prior probability of the foreground image and the background image according to the formulas (9) and (10), counting the probability density of each characteristic value under the detection window, and then calculating the conditional probability p (EF | o), p (EF | b), p (CF | o), p (CF | b), p (LF | o) and p (LF | b) of each object characteristic according to the formula (11).
G. Establishing an object target detection model
And (4) fusing three image characteristics according to a formula (13) to establish a significant target recognition model BM (EF, CF, LF).
H. Example image salient object target detection:
inputting an image I to be detected, and generating 1000 detection windows according to the step B as shown in FIG. 4; then, the characteristic values BF, CF and LF under each detection window are calculated according to the step C, D, E. FIG. 5 shows a binary edge image of an image I, FIG. 6 shows a cluster image Im of the image I, and the detection result of the object convex hull region of the image I is shown in FIG. 7; then, BM (EF, CF, LF) is used for fusing three objective characteristics; the non-maximum suppression is used to select the best window, and the detected significant target result is shown in fig. 8.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art should be considered to be within the technical scope of the present invention, and the technical solutions and the inventive concepts thereof according to the present invention should be equivalent or changed within the scope of the present invention.
Claims (6)
1. A salient object detection method based on image object semantic detection is characterized by comprising the following steps:
s1: randomly finding out a plurality of images and a remarkable target marking result graph to form a sample database;
s2: randomly densely sampling x within each image range 1 、x 2 、y 1 And y 2 Generating a detection window w (x) 1 ,y 1 ,x 2 ,y 2 );
S3: calculating an object edge density characteristic BF of the image, an object convex hull characteristic CF of the image and an object brightness contrast characteristic LF of the image under a detection window w;
s4: counting the probability density of each characteristic value in S3 under a detection window by adopting a Bayesian frame, and calculating the conditional probability of each characteristic;
s5: a naive Bayes model is used for fusing three image characteristics to establish a significant target recognition model;
s6: the method is adopted for detecting the salient objects of the images: and inputting an image I' to be detected, respectively calculating characteristic values BF, CF and LF under each detection window, performing characteristic fusion by using a naive Bayes model in S5, and selecting an optimal window by adopting non-maximum value inhibition to mark a detection result of a significant target.
2. The salient object detection method based on image object semantic detection according to claim 1, further characterized by: the calculation of the objectification edge density characteristic BF of the image adopts the following mode:
converting a known image into a corresponding gray scale image, obtaining a binary edge image by using canny edge detection, filling an edge contour gap to obtain an image Ic, and considering the image as a gap when the pixel interval of the image is more than 1; under the detection window W (W, H), the continuous edge density of the image is defined as:
wherein E r Representing a continuous edge, S, within the annular window r Is the area of a rectangular ring, R w And R h The width and the height of the set annular window are shown, and the calculation formula is as follows:
R w =W/4 (2)
R h =H/4。 (3)
3. the salient object detection method based on image object semantic detection according to claim 1, further characterized by: the object convex hull characteristic CF of the image is calculated by adopting the following method:
obtaining an image I by a clustering method m Calculating a neighborhood image I of the image n Then, a curvature scale space angular point extraction algorithm of self-adaptive threshold and angle is utilized to solve neighborhood image I n C1; neighborhood image I n The following calculation is adopted:
I n =g k *I m -I m (4)
wherein g is k Averaging convolution operators for the image;
setting image edge threshold, removing image edge corner point c 2 The set of image salient object points is finally defined as: c = { C 1 -c 2 Solving a convex hull C of the image by utilizing a Graham algorithm and calculating a convex hull area S c . Defined under the window w, the convex hull object feature CF is:
4. the salient object detection method based on image object semantic detection according to claim 1, further characterized by: the object brightness contrast characteristic LF of the image is as follows:
knowing the detection window W (W, H) and its peripheral window ring W', making the area of the rectangular ring equal to the area of the window W, the width and height of the window ring are set to:
using the integral image to respectively count the brightness histograms of the images in w and w ', and performing normalization processing to obtain H (w) and H (w'):
defining the contrast characteristics of the image brightness center under the window as follows:
where N is the number of bins in the histogram.
5. The salient object detection method based on image object semantic detection according to claim 1, further characterized by: s4: when the Bayesian framework is adopted for feature training, the following method is adopted:
generating a detection window w for each image in the sample library, and counting whether the foreground image and the background image are in the window to obtain N o And N b In which N is o Number of foreground images representing statistics, N b Representing the number of statistical background images, and calculating the prior probabilities of the foreground image and the background image as follows:
p(b)=1-p(o) (10)
calculating according to the three target characteristics in the above steps, counting the probability density of each characteristic value under the detection window, and then calculating the conditional probability of each characteristic as:
where F = { EF, CF, LF } represents an objective feature, and H (F (w)) represents the number of pixels in the probability statistical distribution of each feature value.
6. The salient object detection method based on image object semantic detection according to claim 1 is further characterized in that: in S5, three image characteristics are fused by using a naive Bayes model, and the establishment of a significant target recognition model is as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610546190.6A CN107704864B (en) | 2016-07-11 | 2016-07-11 | Salient object detection method based on image object semantic detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610546190.6A CN107704864B (en) | 2016-07-11 | 2016-07-11 | Salient object detection method based on image object semantic detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107704864A true CN107704864A (en) | 2018-02-16 |
CN107704864B CN107704864B (en) | 2020-10-27 |
Family
ID=61168695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610546190.6A Active CN107704864B (en) | 2016-07-11 | 2016-07-11 | Salient object detection method based on image object semantic detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107704864B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108960042A (en) * | 2018-05-17 | 2018-12-07 | 新疆医科大学第附属医院 | The echinococcus protoscolex survival rate detection method of vision significance and SIFT feature |
CN110598776A (en) * | 2019-09-03 | 2019-12-20 | 成都信息工程大学 | Image classification method based on intra-class visual mode sharing |
CN111639672A (en) * | 2020-04-23 | 2020-09-08 | 中国科学院空天信息创新研究院 | Deep learning city functional area classification method based on majority voting |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049751A (en) * | 2013-01-24 | 2013-04-17 | 苏州大学 | Improved weighting region matching high-altitude video pedestrian recognizing method |
US20130124514A1 (en) * | 2011-03-10 | 2013-05-16 | International Business Machines Corporaiton | Hierarchical ranking of facial attributes |
CN104050460A (en) * | 2014-06-30 | 2014-09-17 | 南京理工大学 | Pedestrian detection method with multi-feature fusion |
CN104103082A (en) * | 2014-06-06 | 2014-10-15 | 华南理工大学 | Image saliency detection method based on region description and priori knowledge |
US10353948B2 (en) * | 2013-09-04 | 2019-07-16 | Shazura, Inc. | Content based image retrieval |
-
2016
- 2016-07-11 CN CN201610546190.6A patent/CN107704864B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130124514A1 (en) * | 2011-03-10 | 2013-05-16 | International Business Machines Corporaiton | Hierarchical ranking of facial attributes |
CN103049751A (en) * | 2013-01-24 | 2013-04-17 | 苏州大学 | Improved weighting region matching high-altitude video pedestrian recognizing method |
US10353948B2 (en) * | 2013-09-04 | 2019-07-16 | Shazura, Inc. | Content based image retrieval |
CN104103082A (en) * | 2014-06-06 | 2014-10-15 | 华南理工大学 | Image saliency detection method based on region description and priori knowledge |
CN104050460A (en) * | 2014-06-30 | 2014-09-17 | 南京理工大学 | Pedestrian detection method with multi-feature fusion |
Non-Patent Citations (7)
Title |
---|
CHUAN YANG ET AL.: ""Graph-Regularized Saliency Detection With Convex-Hull-Based Center Prior"", 《IEEE SIGNAL PROCESSING LETTERS》 * |
FANJIE MENG ET AL.: ""Image fusion with saliency map and interest points"", 《NEUROCOMPUTING》 * |
TTTI ET AL.: ""A model of saliency-dased visual attention for rapid scene analysis"", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
徐威 等: ""利用层次先验估计的显著性目标检测"", 《自动化学报》 * |
景慧昀: ""视觉显著性检测关键技术研究"", 《万方数据知识服务平台》 * |
李君浩 等: ""基于视觉显著性图与似物性的对象检测"", 《计算机应用》 * |
沈峘 等: ""融合多种特征的路面车辆检测方法"", 《光电子 激光》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108960042A (en) * | 2018-05-17 | 2018-12-07 | 新疆医科大学第附属医院 | The echinococcus protoscolex survival rate detection method of vision significance and SIFT feature |
CN108960042B (en) * | 2018-05-17 | 2021-06-08 | 新疆医科大学第一附属医院 | Echinococcus proctostermias survival rate detection method based on visual saliency and SIFT characteristics |
CN110598776A (en) * | 2019-09-03 | 2019-12-20 | 成都信息工程大学 | Image classification method based on intra-class visual mode sharing |
CN111639672A (en) * | 2020-04-23 | 2020-09-08 | 中国科学院空天信息创新研究院 | Deep learning city functional area classification method based on majority voting |
CN111639672B (en) * | 2020-04-23 | 2023-12-19 | 中国科学院空天信息创新研究院 | Deep learning city function classification method based on majority voting |
Also Published As
Publication number | Publication date |
---|---|
CN107704864B (en) | 2020-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115861135B (en) | Image enhancement and recognition method applied to panoramic detection of box body | |
CN107545239B (en) | Fake plate detection method based on license plate recognition and vehicle characteristic matching | |
EP3455782B1 (en) | System and method for detecting plant diseases | |
JP6330385B2 (en) | Image processing apparatus, image processing method, and program | |
CN105184763B (en) | Image processing method and device | |
EP3101594A1 (en) | Saliency information acquisition device and saliency information acquisition method | |
CN106683119B (en) | Moving vehicle detection method based on aerial video image | |
CN108629286B (en) | Remote sensing airport target detection method based on subjective perception significance model | |
CN111145209A (en) | Medical image segmentation method, device, equipment and storage medium | |
CN107038416B (en) | Pedestrian detection method based on binary image improved HOG characteristics | |
CN109241973B (en) | Full-automatic soft segmentation method for characters under texture background | |
Bedruz et al. | Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach | |
CN110717896A (en) | Plate strip steel surface defect detection method based on saliency label information propagation model | |
CN107154044B (en) | Chinese food image segmentation method | |
WO2019197021A1 (en) | Device and method for instance-level segmentation of an image | |
WO2017135120A1 (en) | Computationally efficient frame rate conversion system | |
CN110751619A (en) | Insulator defect detection method | |
Zhu et al. | Automatic object detection and segmentation from underwater images via saliency-based region merging | |
CN113780110A (en) | Method and device for detecting weak and small targets in image sequence in real time | |
CN107704864B (en) | Salient object detection method based on image object semantic detection | |
CN111695373A (en) | Zebra crossing positioning method, system, medium and device | |
Liu et al. | Splicing forgery exposure in digital image by detecting noise discrepancies | |
CN116563591A (en) | Optical smoke detection method based on feature extraction under sea-sky background | |
JP2019021243A (en) | Object extractor and super pixel labeling method | |
Widyantara et al. | Gamma correction-based image enhancement and canny edge detection for shoreline extraction from coastal imagery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |