CN107704864B - Salient object detection method based on image object semantic detection - Google Patents

Salient object detection method based on image object semantic detection Download PDF

Info

Publication number
CN107704864B
CN107704864B (application CN201610546190.6A)
Authority
CN
China
Prior art keywords
image
characteristic
detection
window
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610546190.6A
Other languages
Chinese (zh)
Other versions
CN107704864A (en)
Inventor
于纯妍
张维石
宋梅萍
王春阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN201610546190.6A priority Critical patent/CN107704864B/en
Publication of CN107704864A publication Critical patent/CN107704864A/en
Application granted granted Critical
Publication of CN107704864B publication Critical patent/CN107704864B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a salient object detection method based on image object semantic detection, which comprises the following steps. S1: randomly select a plurality of images together with their salient-target annotation result maps to form a sample database; S2: randomly and densely sample x1, x2, y1 and y2 within each image to generate detection windows w(x1, y1, x2, y2); S3: calculate the object edge density characteristic BF, the object convex hull characteristic CF and the object brightness contrast characteristic LF of the image under each detection window w; S4: using a Bayesian framework, count the probability density of each characteristic value from S3 under the detection windows and calculate the conditional probability of each characteristic; S5: fuse the three image characteristics with a naive Bayes model to establish a salient target recognition model; S6: detect salient objects as follows: input an image I' to be detected, calculate the characteristic values BF, CF and LF under each detection window, perform characteristic fusion with the naive Bayes model of S5, and select the best window by non-maximum suppression to mark the detection result of the salient target.

Description

Salient object detection method based on image object semantic detection
Technical Field
The invention relates to the technical field of image detection, in particular to a method for detecting a salient object based on image object semantic detection.
Background
Salient object detection extracts from an image the target object that attracts the human visual system; identifying salient objects is of great research significance for image retrieval, classification and detection, target tracking, and related tasks. At present, most visual salient-target detection is based on global image features, extracting feature saliency differences as a saliency map of the image, so the object semantics of the image are not exploited. In addition, some algorithms compute middle-layer semantic features of the image, such as SIFT and BOW, which increases the amount of computation in the detection process. Meanwhile, a saliency map, as the result of salient-target detection, can neither highlight the most noticeable target nor specify the exact position of the salient target.
Disclosure of Invention
In view of the problems in the prior art, the invention discloses a salient object detection method based on image object semantic detection. Starting from the edge saliency, brightness saliency and convex-hull object saliency of an image, it defines and calculates image object features under a detection window and uses a Bayesian framework to detect the exact position of the salient object. The specific scheme is as follows:
S1: randomly select a plurality of images together with their salient-target annotation result maps to form a sample database;
S2: randomly and densely sample x1, x2, y1 and y2 within each image to generate detection windows w(x1, y1, x2, y2);
S3: calculate the object edge density characteristic BF of the image, the object convex hull characteristic CF of the image and the object brightness contrast characteristic LF of the image under the detection window w;
S4: using a Bayesian framework, count the probability density of each characteristic value from S3 under the detection windows and calculate the conditional probability of each characteristic.
S5: fuse the three image characteristics with a naive Bayes model to establish a salient target recognition model;
S6: detect salient objects as follows: input an image I' to be detected, calculate the characteristic values BF, CF and LF under each detection window, perform characteristic fusion with the naive Bayes model of S5, and select the best window by non-maximum suppression to mark the detection result of the salient target.
The object edge density characteristic BF of the image is calculated as follows:
the known image is converted into a corresponding gray-scale map, a binary edge image is obtained with Canny edge detection, and the edge contour gaps are filled to obtain the image Ic (a gap is assumed wherever the pixel interval exceeds 1). Under the detection window w(W, H), the continuous edge density of the image is defined as:
EF = Er/Sr (1)
where Er denotes the continuous edges within the annular window and Sr is the area of the rectangular ring; Rw and Rh are the set width and height of the annular window, calculated as:
Rw=W/4 (2)
Rh=H/4 (3)
the object convex hull characteristic CF of the image is calculated by adopting the following method:
the image Im is obtained by a clustering method and its neighborhood image In is calculated; then the CSS corner set c1 of the neighborhood image In is obtained with a curvature scale space corner extraction algorithm using adaptive threshold and angle. The neighborhood image In is calculated as:
In = gk * Im − Im (4)
where gk is the averaging convolution operator of the image;
an image edge threshold is set and the image edge corner points c2 are removed, so the set of image salient object points is finally defined as c = {c1 − c2}; the convex hull c is then obtained with the Graham algorithm and the convex hull area Sc is calculated. Under the window w, the convex hull object feature CF is defined as:
CF = Sc/(W×H) (5)
the object brightness contrast characteristic LF of the image is as follows:
given the detection window w(W, H) and its peripheral window ring w′, the area of the rectangular ring is made equal to the area of the window w; the width and height of the window ring are set to:
R′w = ((√2−1)/2)·W (6)
R′h = ((√2−1)/2)·H (7)
the brightness histograms of the images in w and w′ are counted separately with the integral image and normalized to obtain H(w) and H(w′);
defining the contrast characteristics of the image brightness center under the window as follows:
LF = Σ_{i=1}^{N} (Hi(w) − Hi(w′))² / (Hi(w) + Hi(w′)) (8)
where N is the number of bins in the histogram.
S4: when the Bayesian framework is adopted for feature training, the following method is adopted:
a detection window w is generated for each image in the sample library, and the numbers of foreground and background images within the windows are counted to obtain No and Nb, where No is the number of counted foreground images and Nb the number of counted background images; the prior probabilities of the foreground and background images are calculated as:
p(o) = No/(No + Nb) (9)
p(b)=1-p(o) (10)
calculating according to the three target characteristics in the above steps, the probability density of each characteristic value under the detection window is counted, and the conditional probability of each characteristic is then calculated as:
p(f|o) = Ho(f(w))/No (11)
p(f|b) = Hb(f(w))/Nb (12)
where F = {EF, CF, LF} denotes the object characteristics, and H(f(w)) denotes the count of each characteristic value in the probability statistical distribution.
In S5, three image features are fused by using a naive Bayes model, and the establishment of a significant target recognition model is as follows:
BM(EF, CF, LF) = p(o)·p(EF|o)·p(CF|o)·p(LF|o) / [p(o)·p(EF|o)·p(CF|o)·p(LF|o) + p(b)·p(EF|b)·p(CF|b)·p(LF|b)] (13)
due to the adoption of the technical scheme, the invention sets the characteristics that the salient object of the image has continuous edges, consistency of internal elements and consistency of brightness from the perspective of the image saliency to define the object semantic features of the image to accord with the following three conditions: the 1 salient object has edge density semantic features, the 2 salient object convex hull contains interest points which concentrate most of visual positioning areas, the 2 salient object has obvious object semantic features, and the 3 salient object has semantic features with brightness consistency. The method starts from basic visual features, realizes the definition and calculation of three significant target object semantic features under a detection window, realizes the learning of image object features under a Bayes framework, establishes a detection model capable of quickly and accurately detecting the specific position of a significant target, and can be more generally applied to the detection of the significant target of common object types.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a technical scheme of the identification method of the present invention;
FIG. 2 is a flowchart of convex hull object extraction in an embodiment;
FIG. 3 is a schematic diagram of a detection window generated in the embodiment;
FIGS. 4(a) and 4(b) are source images of the targets to be detected;
FIGS. 5(a) and 5(b) are binary continuous edge images in the embodiment;
FIGS. 6(a) and 6(b) are convex hull object feature diagrams in the embodiment;
FIGS. 7(a) and 7(b) show the salient object extraction results in the embodiment;
FIGS. 8(a) and 8(b) show the salient object results detected in the embodiment.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments:
as shown in fig. 1, the method for detecting a salient object based on image object semantic detection specifically includes the following steps:
A. Sample data source:
From the image databases Pascal VOC and Caltech101, 200 images each containing a single target are randomly selected; they cover 20 categories, including articles, planes, etc.
B. Generating a detection window:
Before the training process starts, detection windows are randomly generated for each image: let the height and width of the image be h and w, and let the size of a salient object be no less than 10 × 10 pixels; x1, x2, y1 and y2 are randomly sampled within the image range so that (x2 − x1) > 10 and (y2 − y1) > 10, and 1000 sliding windows (x1, y1, x2, y2) are generated. Fig. 3(a), Fig. 3(b), Fig. 3(c) and Fig. 3(d) show the generated windows; the numbers of windows generated are 5, 50, 100 and 1000, respectively, from right to left.
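The window sampling of step B can be sketched as follows. This is a minimal Python sketch; the function name and the rejection-sampling loop are illustrative assumptions, not the patent's implementation:

```python
import random

def generate_windows(img_w, img_h, n=1000, min_size=10, seed=None):
    """Randomly sample n detection windows (x1, y1, x2, y2) inside an
    img_w-by-img_h image, keeping only windows wider and taller than
    min_size pixels, as required in step B."""
    rng = random.Random(seed)
    windows = []
    while len(windows) < n:
        # draw two distinct coordinates per axis and order them
        x1, x2 = sorted(rng.sample(range(img_w), 2))
        y1, y2 = sorted(rng.sample(range(img_h), 2))
        if (x2 - x1) > min_size and (y2 - y1) > min_size:
            windows.append((x1, y1, x2, y2))
    return windows
```

Rejection sampling keeps the window distribution dense over the image while discarding degenerate boxes below the 10 × 10 pixel minimum.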
C. Extracting the edge density characteristics of the object under the detection window w:
Firstly, the image is converted into a gray-scale map; then a binary edge image is obtained by Canny edge detection, and the edge contour gaps are filled (a gap is assumed when the interval exceeds 1 pixel) to obtain the image Ic. Under the detection window w(W, H), the continuous edge density EF of the image is calculated according to formulas (1), (2) and (3).
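A sketch of the edge-density computation of step C, assuming the binary edge map has already been produced (the Canny detector itself is not re-implemented here); reading the annular window as a ring of width W/4 and height H/4 hugging the window border is an assumption:

```python
import numpy as np

def edge_density(edges, win):
    """Continuous edge density EF of formulas (1)-(3) for one detection
    window. `edges` is a precomputed binary edge map; the ring hugs the
    window border with width W/4 and height H/4 (assumed geometry)."""
    x1, y1, x2, y2 = win
    W, H = x2 - x1, y2 - y1
    rw, rh = max(1, W // 4), max(1, H // 4)
    patch = edges[y1:y2, x1:x2]
    # edge pixels inside the ring = whole window minus the inner rectangle
    er = patch.sum() - patch[rh:H - rh, rw:W - rw].sum()
    sr = W * H - (W - 2 * rw) * (H - 2 * rh)  # ring area Sr
    return float(er) / sr
```

In practice the two sums would be taken from an integral image so that every window is evaluated in constant time.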
D. Extracting the features of the object convex hull under the detection window w according to the method shown in fig. 2:
Firstly, median filtering is performed to remove part of the image noise, and then the Meanshift clustering method is used to obtain the image Im. Specifically, in the luv color mode, a kernel function is defined:
(formula (14), the kernel function K(x), is rendered only as an image in the original document)
the mean shift vector calculation is performed using the following formula:
m(x) = [ Σ_{i=1}^{n} x_i·g(‖(x − x_i)/h‖²) ] / [ Σ_{i=1}^{n} g(‖(x − x_i)/h‖²) ] − x (15)
where g(x) = −K′(x), and h is the bandwidth of the kernel function, set to 10.
The neighborhood image In is obtained according to formula (4), and the CSS corner set c1 of In is obtained with the curvature scale space corner extraction algorithm using adaptive threshold and angle. An image edge threshold is set, and the image edge corner points c2 are removed; the set of image salient object points is defined as c = {c1 − c2}. Then the convex hull c is obtained with the Graham algorithm and the convex hull area Sc is calculated, after which the convex hull object characteristic is calculated according to formula (5). In the calculation process, the integral image is used to compute the value of each area to speed up the operation.
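The hull construction and the feature of step D can be sketched as follows. The monotone-chain routine is a standard variant of the Graham scan named in the text; normalising the hull area Sc by the window area is an assumption, since formula (5) appears only as an image in the patent:

```python
def convex_hull(points):
    """Monotone-chain variant of the Graham scan: hull of the salient
    corner points c, as (x, y) tuples in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Shoelace formula for the convex-hull area Sc."""
    n = len(hull)
    s = sum(hull[i][0] * hull[(i + 1) % n][1] -
            hull[(i + 1) % n][0] * hull[i][1] for i in range(n))
    return abs(s) / 2.0

def convex_hull_feature(points, win):
    """CF under window w: Sc of the corner points falling inside the
    window, normalised by the window area (assumed normalisation)."""
    x1, y1, x2, y2 = win
    inside = [p for p in points if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
    if len(inside) < 3:
        return 0.0
    return hull_area(convex_hull(inside)) / float((x2 - x1) * (y2 - y1))
```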
E. And then extracting the contrast characteristics of the image target brightness under the rectangular ring:
Firstly, the image is converted from the rgb mode to the lab mode. Given the detection window w(W, H), the width and height of the peripheral window ring are set according to formulas (6) and (7); then the brightness histograms of the images in w and w′ are counted separately with the integral image and normalized, and the contrast characteristic of the image brightness center under the detection window is calculated with formula (8).
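A sketch of the centre–surround brightness contrast of step E. Two details are assumptions, because formulas (6)-(8) appear only as images in the patent: the ring thickness of (√2−1)/2 per side (which makes the ring area equal the window area) and the chi-square distance between the two normalised histograms:

```python
import numpy as np

def luminance_contrast(L, win, bins=16):
    """Brightness contrast LF for one window: compare the normalised
    luminance histogram inside w with the one in a surrounding ring of
    equal area (assumed ring geometry and chi-square distance)."""
    x1, y1, x2, y2 = win
    W, H = x2 - x1, y2 - y1
    t = (2 ** 0.5 - 1) / 2          # scaling w by sqrt(2) doubles its area
    rw, rh = int(round(t * W)), int(round(t * H))
    h_img, w_img = L.shape
    ox1, oy1 = max(0, x1 - rw), max(0, y1 - rh)
    ox2, oy2 = min(w_img, x2 + rw), min(h_img, y2 + rh)
    hist_in, _ = np.histogram(L[y1:y2, x1:x2], bins=bins, range=(0, 256))
    hist_all, _ = np.histogram(L[oy1:oy2, ox1:ox2], bins=bins, range=(0, 256))
    hist_ring = hist_all - hist_in  # the window lies inside the outer patch
    p = hist_in / max(1, hist_in.sum())
    q = hist_ring / max(1, hist_ring.sum())
    denom = p + q
    denom[denom == 0] = 1.0
    return float(((p - q) ** 2 / denom).sum())
```

A uniform patch scores 0; a bright window on a dark surround gives the maximal chi-square value of 2 for disjoint histograms.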
F. And (3) performing feature training by adopting a Bayesian framework:
A detection window w is generated for each image of the sample library, and the image within the window w is compared with the marked positive (target) sample box in the sample image: the image area within the window w is denoted sw and the image area within the manually labeled box of the sample image is denoted sb; the following is calculated:
p = area(sw ∩ sb) / area(sw ∪ sb) (16)
When p > 0.5, the window is a foreground image and its label value is set to 1; otherwise it is a background image and the label value is set to −1. The numbers of foreground and background images within the windows are counted to obtain No and Nb, where No is the number of counted foreground images and Nb the number of counted background images. The prior probabilities of the foreground and background images are calculated according to formulas (9) and (10), the probability density of each characteristic value under the detection window is counted, and then the conditional probabilities p(EF|o), p(EF|b), p(CF|o), p(CF|b), p(LF|o) and p(LF|b) of each object characteristic are calculated according to formula (11).
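Step F, for one feature, can be sketched like this. The labels follow the overlap rule above; the normalised histogram with Laplace smoothing is an assumption about the density estimate, since formulas (9)-(12) appear only as images in the patent:

```python
import numpy as np

def train_feature_model(samples, bins=8):
    """`samples` is a list of (feature_value, label) pairs with label +1
    for a foreground window (overlap p > 0.5) and -1 for a background
    window; feature values are assumed normalised to [0, 1]. Returns the
    priors p(o), p(b) and histogram-based likelihoods p(f|o), p(f|b)."""
    fg = [f for f, lab in samples if lab == 1]
    bg = [f for f, lab in samples if lab == -1]
    p_o = len(fg) / float(len(fg) + len(bg))   # formula (9)
    p_b = 1.0 - p_o                            # formula (10)
    edges = np.linspace(0.0, 1.0, bins + 1)
    h_o, _ = np.histogram(fg, bins=edges)
    h_b, _ = np.histogram(bg, bins=edges)
    # Laplace smoothing keeps empty bins from yielding zero likelihoods
    lik_o = (h_o + 1) / float(h_o.sum() + bins)
    lik_b = (h_b + 1) / float(h_b.sum() + bins)
    bucket = lambda f: min(bins - 1, int(f * bins))
    return p_o, p_b, (lambda f: lik_o[bucket(f)]), (lambda f: lik_b[bucket(f)])
```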
G. Establishing the object target detection model:
The three image characteristics are fused according to formula (13) to establish the salient target recognition model BM(EF, CF, LF).
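The fusion of step G can be sketched as a standard naive-Bayes posterior; treating formula (13), shown only as an image in the patent, as exactly this expression is an assumption:

```python
def bayes_fusion(p_o, p_b, cond_o, cond_b, features):
    """Naive-Bayes fusion of the objectness features. `cond_o`/`cond_b`
    map a feature name to its likelihood function p(f|o) / p(f|b);
    `features` maps the same names to the measured values for a window."""
    num = p_o
    alt = p_b
    for name, value in features.items():
        num *= cond_o[name](value)   # p(o) * prod_f p(f|o)
        alt *= cond_b[name](value)   # p(b) * prod_f p(f|b)
    return num / (num + alt)         # posterior p(o | EF, CF, LF)
```

The score is the posterior probability that the window contains the salient object, so windows can be ranked directly by it.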
H. Example image salient object target detection:
An image I to be detected is input, and 1000 detection windows are generated according to step B, as shown in Fig. 4(a) and Fig. 4(b); the characteristic values BF, CF and LF are then calculated for each detection window according to steps C, D and E. Figs. 5(a) and 5(b) show the binary edge images of the image I, Figs. 6(a) and 6(b) show the cluster image Im of the image I, and the detection result of the target convex hull region of the image I is shown in Figs. 7(a) and 7(b). Then BM(EF, CF, LF) is used to fuse the three object characteristics, and non-maximum suppression is used to select the best window; the detected salient-target results are shown in Fig. 8(a) and Fig. 8(b).
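The window selection at the end of step H can be sketched as greedy non-maximum suppression over the fused scores; the overlap threshold of 0.5 is an assumption, since the patent does not state the value it uses:

```python
def iou(a, b):
    """Overlap ratio (intersection over union) of two (x1, y1, x2, y2) windows."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(windows, scores, thresh=0.5):
    """Keep windows in descending score order, discarding any window that
    overlaps an already-kept one by more than `thresh`."""
    order = sorted(range(len(windows)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(windows[i], windows[j]) <= thresh for j in keep):
            keep.append(i)
    return [windows[i] for i in keep]
```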
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent replacement or change that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention, according to the technical solutions and the inventive concept thereof, shall fall within the protection scope of the present invention.

Claims (3)

1. A salient object detection method based on image object semantic detection is characterized by comprising the following steps:
S1: randomly select a plurality of images together with their salient-target annotation result maps to form a sample database;
S2: randomly and densely sample x1, x2, y1 and y2 within each image to generate detection windows w(x1, y1, x2, y2);
S3: calculate the object edge density characteristic BF of the image, the object convex hull characteristic CF of the image and the object brightness contrast characteristic LF of the image under the detection window w;
S4: using a Bayesian framework, count the probability density of each characteristic value from S3 under the detection windows and calculate the conditional probability of each characteristic;
S5: fuse the three image characteristics with a naive Bayes model to establish a salient target recognition model;
S6: detect salient objects as follows: input an image I' to be detected, calculate the characteristic values BF, CF and LF under each detection window, perform characteristic fusion with the naive Bayes model of S5, and select the best window by non-maximum suppression to mark the detection result of the salient target;
the calculation of the objectification edge density characteristic BF of the image adopts the following mode:
converting a known image into a corresponding gray scale image, obtaining a binary edge image by using canny edge detection, filling an edge contour gap to obtain an image Ic, and considering the image as a gap when the pixel interval of the image is more than 1; under the detection window W (W, H), the continuous edge density of the image is defined as:
EF = Er/Sr (1)
where Er denotes the continuous edges within the annular window and Sr is the area of the rectangular ring; Rw and Rh are the set width and height of the annular window, calculated as:
Rw=W/4 (2)
Rh=H/4 (3)
the object convex hull characteristic CF of the image is calculated by adopting the following method:
the image Im is obtained by a clustering method and its neighborhood image In is calculated; then the CSS corner set c1 of the neighborhood image In is obtained with a curvature scale space corner extraction algorithm using adaptive threshold and angle; the neighborhood image In is calculated as:
In = gk * Im − Im (4)
where gk is the averaging convolution operator of the image;
an image edge threshold is set and the image edge corner points c2 are removed, so the set of image salient object points is finally defined as c = {c1 − c2}; the convex hull c is then obtained with the Graham algorithm and the convex hull area Sc is calculated; under the window w, the convex hull object characteristic CF is defined as:
CF = Sc/(W×H) (5)
the object brightness contrast characteristic LF of the image is as follows:
given the detection window w(W, H) and its peripheral window ring w′, the area of the rectangular ring is made equal to the area of the window w; the width and height of the window ring are set to:
R′w = ((√2−1)/2)·W (6)
R′h = ((√2−1)/2)·H (7)
the brightness histograms of the images in w and w′ are counted separately with the integral image and normalized to obtain H(w) and H(w′);
defining the contrast characteristics of the image brightness center under the window as follows:
LF = Σ_{i=1}^{N} (Hi(w) − Hi(w′))² / (Hi(w) + Hi(w′)) (8)
where N is the number of bins in the histogram.
2. The salient object detection method based on image object semantic detection according to claim 1, further characterized by: s4: when the Bayesian framework is adopted for feature training, the following method is adopted:
a detection window w is generated for each image in the sample library, and the numbers of foreground and background images within the windows are counted to obtain No and Nb, where No is the number of counted foreground images and Nb the number of counted background images; the prior probabilities of the foreground and background images are calculated as:
p(o) = No/(No + Nb) (9)
p(b)=1-p(o) (10)
calculating according to the three target characteristics in the above steps, the probability density of each characteristic value under the detection window is counted, and the conditional probability of each characteristic is then calculated as:
p(f|o) = Ho(f(w))/No (11)
p(f|b) = Hb(f(w))/Nb (12)
where F = {EF, CF, LF} denotes the object characteristics, and H(f(w)) denotes the count of each characteristic value in the probability statistical distribution.
3. The salient object detection method based on image object semantic detection according to claim 1, further characterized by: in S5, three image features are fused by using a naive Bayes model, and the establishment of a significant target recognition model is as follows:
BM(EF, CF, LF) = p(o)·p(EF|o)·p(CF|o)·p(LF|o) / [p(o)·p(EF|o)·p(CF|o)·p(LF|o) + p(b)·p(EF|b)·p(CF|b)·p(LF|b)] (13)
CN201610546190.6A 2016-07-11 2016-07-11 Salient object detection method based on image object semantic detection Active CN107704864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610546190.6A CN107704864B (en) 2016-07-11 2016-07-11 Salient object detection method based on image object semantic detection


Publications (2)

Publication Number Publication Date
CN107704864A CN107704864A (en) 2018-02-16
CN107704864B (en) 2020-10-27

Family

ID=61168695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610546190.6A Active CN107704864B (en) 2016-07-11 2016-07-11 Salient object detection method based on image object semantic detection

Country Status (1)

Country Link
CN (1) CN107704864B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960042B (en) * 2018-05-17 2021-06-08 新疆医科大学第一附属医院 Echinococcus proctostermias survival rate detection method based on visual saliency and SIFT characteristics
CN110598776A (en) * 2019-09-03 2019-12-20 成都信息工程大学 Image classification method based on intra-class visual mode sharing
CN111639672B (en) * 2020-04-23 2023-12-19 中国科学院空天信息创新研究院 Deep learning city function classification method based on majority voting

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049751A (en) * 2013-01-24 2013-04-17 苏州大学 Improved weighting region matching high-altitude video pedestrian recognizing method
CN104050460A (en) * 2014-06-30 2014-09-17 南京理工大学 Pedestrian detection method with multi-feature fusion
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge
US10353948B2 (en) * 2013-09-04 2019-07-16 Shazura, Inc. Content based image retrieval

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8380711B2 (en) * 2011-03-10 2013-02-19 International Business Machines Corporation Hierarchical ranking of facial attributes

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049751A (en) * 2013-01-24 2013-04-17 苏州大学 Improved weighting region matching high-altitude video pedestrian recognizing method
US10353948B2 (en) * 2013-09-04 2019-07-16 Shazura, Inc. Content based image retrieval
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge
CN104050460A (en) * 2014-06-30 2014-09-17 南京理工大学 Pedestrian detection method with multi-feature fusion

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"A model of saliency-dased visual attention for rapid scene analysis";Ttti et al.;《IEEE Transactions on pattern analysis and machine intelligence》;19981231;全文 *
"Graph-Regularized Saliency Detection With Convex-Hull-Based Center Prior";Chuan Yang et al.;《IEEE SIGNAL PROCESSING LETTERS》;20130731;第20卷(第7期);全文 *
"Image fusion with saliency map and interest points";Fanjie Meng et al.;《Neurocomputing》;20151118;全文 *
"利用层次先验估计的显著性目标检测";徐威 等;《自动化学报》;20150430;第41卷(第4期);全文 *
"基于视觉显著性图与似物性的对象检测";李君浩 等;《计算机应用》;20151210;第35卷(第12期);全文 *
"融合多种特征的路面车辆检测方法";沈峘 等;《光电子 激光》;20100131;第21卷(第1期);全文 *
"视觉显著性检测关键技术研究";景慧昀;《万方数据知识服务平台》;20150817;全文 *

Also Published As

Publication number Publication date
CN107704864A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
US11037291B2 (en) System and method for detecting plant diseases
CN115861135B (en) Image enhancement and recognition method applied to panoramic detection of box body
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN107346409B (en) pedestrian re-identification method and device
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
CN105184763B (en) Image processing method and device
JP6330385B2 (en) Image processing apparatus, image processing method, and program
EP3101594A1 (en) Saliency information acquisition device and saliency information acquisition method
CN108629286B (en) Remote sensing airport target detection method based on subjective perception significance model
CN111145209A (en) Medical image segmentation method, device, equipment and storage medium
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN110717896A (en) Plate strip steel surface defect detection method based on saliency label information propagation model
CN107154044B (en) Chinese food image segmentation method
WO2019197021A1 (en) Device and method for instance-level segmentation of an image
Zhu et al. Automatic object detection and segmentation from underwater images via saliency-based region merging
JP2018124689A (en) Moving body detection device, moving body detection system and moving body detection method
CN107704864B (en) Salient object detection method based on image object semantic detection
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN106446832B (en) Video-based pedestrian real-time detection method
WO2024016632A1 (en) Bright spot location method, bright spot location apparatus, electronic device and storage medium
JP2013080389A (en) Vanishing point estimation method, vanishing point estimation device, and computer program
Sanmiguel et al. Pixel-based colour contrast for abandoned and stolen object discrimination in video surveillance
CN116563591A (en) Optical smoke detection method based on feature extraction under sea-sky background
CN116563659A (en) Optical smoke detection method combining priori knowledge and feature classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant