CN102509072A - Method for detecting salient object in image based on inter-area difference - Google Patents

Method for detecting salient object in image based on inter-area difference Download PDF

Info

Publication number
CN102509072A
CN102509072A CN2011103120919A CN201110312091A CN102509072A CN 102509072 A CN102509072 A CN 102509072A CN 2011103120919 A CN2011103120919 A CN 2011103120919A CN 201110312091 A CN201110312091 A CN 201110312091A CN 102509072 A CN102509072 A CN 102509072A
Authority
CN
China
Prior art keywords
indicate
saliency maps
rectangular area
pixel
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103120919A
Other languages
Chinese (zh)
Other versions
CN102509072B (en
Inventor
史冉
刘志
杜欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN 201110312091 priority Critical patent/CN102509072B/en
Publication of CN102509072A publication Critical patent/CN102509072A/en
Application granted granted Critical
Publication of CN102509072B publication Critical patent/CN102509072B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting a salient object in an image based on inter-area difference. The method specifically comprises the following steps of: (1) inputting an original image and calculating a saliency map of the original image; (2) calculating and modifying the saliency map; and (3) updating the saliency map through iteration and finding a target rectangle in the greatest difference from the external area, wherein the image content of the internal area of the target rectangle is the detected salient object. The method can accurately detect the salient object in the image without setting any parameter.

Description

The detection method of significant object in image based on region difference
Technical field
The present invention relates to computer vision, technical field of image processing, a kind of detection method of significant object in the image based on region difference is concretely related to.
Background technique
The research achievement of psychology and perception science is it has been shown that when people observes piece image, to the concern in each region of image and unequal, to generate Saliency maps corresponding with attention rate.In most cases, it is focused on a certain region of image when people observes piece image, which is referred to as significant object.In other words, significant object obtains higher attention rate compared to other regions of image.If can come out significant object detection, applications many to significant Object identifying, image adaptive, compression of images, image retrieval etc. can provide biggish help.Significant method for checking object is exactly applied in this background and is given birth to, it is intended to quickly and accurately detected the significant object in image using Saliency maps picture corresponding with image attention degree.The result of detection shows as marking one piece of rectangular area in the picture, the rectangular area it is as much as possible it is as few as possible comprising significant object include background.Significant method for checking object has had preliminary research at present, as Liu et al. people in June, 2007 American Institute of Electrical and Electronics Engineers computer vision and pattern identification meeting on " the significant object detection based on study " one text for delivering, significant object detection described in this article is to search for target rectangle on Saliency maps picture using exhaust algorithm, which has framed at least 95% high pixel of conspicuousness.This detection method needs given threshold, and the detection speed of significant object is slow, and detection effect depends on the effect of Saliency maps picture." the conspicuousness model based on concentric curve and color " text that Valenti et al. is delivered in American Institute of Electrical and Electronics Engineers computer vision meeting in 2009, significant object detection described in this article is to search for target rectangle on Saliency maps picture using efficient child window searching algorithm, this algorithm accelerates the speed of search target rectangle, but cannot accurately detect significant object, specific step is as follows for efficient child window searching algorithm:
(1) p is set as an empty ordered queue, a point set is formed with four apex coordinates of image, using the point set as the point set of the head of the queue of ordered queue p;
(2) point set of the head of the queue of ordered queue p is divided into two subsets from that maximum side of its interval;
(3) upper bound of each subset is calculated by borderline quality function;
(4) two subsets for obtaining second step are inserted into ordered queue p according to the calculated upper bound of third step;
(5) step (2)-(4) are repeated, when the subset taken out from the head of the queue of ordered queue p only includes a rectangle, which is global maximum, the target rectangle exactly searched.
Luo et al. has delivered " object detection based on maximum conspicuousness density " text in Asia computer vision meeting in 2010, " object detection " described in this article is to search for target rectangle on Saliency maps picture using maximum significant density algorithm, this algorithm improves the accuracy of significant object detection, but this method needs to design different parameters for different conspicuousness models, can not achieve adaptivity.Liu et al. people has delivered " detecting without ginseng conspicuousness based on Density Estimator " in the American Institute of Electrical and Electronics Engineers image procossing international conference of in September, 2010, this article establishes the conspicuousness model of printenv with nonparametric probability algorithm, for obtaining the Saliency maps of image, specific step is as follows for the algorithm:
It (1) is several regions image pre-segmentation with mean shift algorithm;
(2) each pixel in image and the color similarity of each region are calculated with nonparametric probability algorithm;
(3) color distance between each region is calculated using the color similarity of each pixel and each region in image, forms the color Saliency maps of image;
(4) space length between each region is calculated using the color similarity of each pixel and each region in image, forms the spatial saliency figure of image;
(5) by the color Saliency maps of image and spatial saliency figure, the final Saliency maps of image are formed.
Described in summary, existing significant method for checking object needs that corresponding parameter is arranged for various conspicuousness models, is just able to achieve the accurate detection of significant object, this affects the extensive use of significant object detection.
Summary of the invention
It is an object of the invention to a kind of detection method of significant object in the image based on region difference is proposed for defect present in prior art, this method can accurately detect significant object, without corresponding parameter is arranged for various conspicuousness models.
In order to achieve the above object, the technical scheme adopted by the invention is that it is as follows:
The detection method of significant object in a kind of image based on region difference, the specific steps of which are as follows:
(1), original image is inputted, the Saliency maps of original image are calculated;
(2), modification Saliency maps are calculated;
(3), by iteration update Saliency maps, find with the maximum target rectangle of perimeter difference, the picture material of the target rectangle interior zone is the significant object detected.
Input original image described in above-mentioned steps (1), calculates the Saliency maps of original image, the specific steps of which are as follows:
(1-1), original image is divided into several regions with mean shift algorithm;
(1-2), each pixel in image and the color similarity of each region, calculating formula are calculated are as follows:
(1)
Wherein,
Figure 763187DEST_PATH_IMAGE002
Indicate the
Figure 731143DEST_PATH_IMAGE002
A region,
Figure 827275DEST_PATH_IMAGE003
Indicate the region being divided into
Figure 601196DEST_PATH_IMAGE004
In pixel number,
Figure 731963DEST_PATH_IMAGE005
Indicate region
Figure 933137DEST_PATH_IMAGE004
In pixel,
Figure 313303DEST_PATH_IMAGE006
Indicate pixelThe color characteristic at place,
Figure 813871DEST_PATH_IMAGE007
Indicate pixel
Figure 185947DEST_PATH_IMAGE008
The color characteristic at place,Indicate the
Figure 106815DEST_PATH_IMAGE002
The kernel function in a region,
Figure 946595DEST_PATH_IMAGE010
It indicates
Figure 427255DEST_PATH_IMAGE008
The pixel at placeWith
Figure 642040DEST_PATH_IMAGE002
The color similarity in a region;
(1-3), the color Saliency maps for calculating original image, calculating formula are as follows:
(2)
Wherein,
Figure 95521DEST_PATH_IMAGE002
Indicate the
Figure 553047DEST_PATH_IMAGE002
A region, n indicate region sum,
Figure 367420DEST_PATH_IMAGE010
It indicates
Figure 189882DEST_PATH_IMAGE008
The pixel at place
Figure 519232DEST_PATH_IMAGE011
With
Figure 718132DEST_PATH_IMAGE002
The color similarity in a region,
Figure 449328DEST_PATH_IMAGE014
Indicate the
Figure 442692DEST_PATH_IMAGE002
The color conspicuousness in a region,
Figure 197021DEST_PATH_IMAGE015
Indicate the color Saliency maps of original image;
(1-4), the spatial saliency figure for calculating original image, calculating formula are as follows:
Figure 261929DEST_PATH_IMAGE016
(3)
Wherein,
Figure 785315DEST_PATH_IMAGE002
Indicate the
Figure 11896DEST_PATH_IMAGE002
A region, n indicate region sum,
Figure 253522DEST_PATH_IMAGE010
It indicates
Figure 794225DEST_PATH_IMAGE008
The pixel at place
Figure 234433DEST_PATH_IMAGE011
With
Figure 835179DEST_PATH_IMAGE002
The color similarity in a region,
Figure 375487DEST_PATH_IMAGE017
Indicate the
Figure 719880DEST_PATH_IMAGE002
The spatial saliency in a region,
Figure 952279DEST_PATH_IMAGE018
Indicate the spatial saliency figure of original image;
(1-5), the Saliency maps for calculating original image, calculating formula are as follows:
Figure 786242DEST_PATH_IMAGE019
(4)
Wherein,
Figure 736881DEST_PATH_IMAGE015
Indicate the color Saliency maps of original image,
Figure 681703DEST_PATH_IMAGE018
Indicate the spatial saliency figure of original image,
Figure 34187DEST_PATH_IMAGE020
Indicate the Saliency maps of original image.
The numerical value of each pixel position is the significance value of the pixel in Saliency maps, and the value range of the significance value is 0~255, and significance value is bigger, and the expression pixel is more significant, and the smaller expression pixel of significance value is not significant.
Saliency maps are modified in calculating described in above-mentioned steps (2), the specific steps of which are as follows:
(2-1), the center of gravity for setting Saliency maps
Figure 976735DEST_PATH_IMAGE021
(2-2), each pixel on Saliency maps is calculated
Figure 476987DEST_PATH_IMAGE011
To the center of gravityEuclidean distance
Figure 370173DEST_PATH_IMAGE022
, calculating formula are as follows:
Figure 545940DEST_PATH_IMAGE023
(5)
Wherein,
Figure 471170DEST_PATH_IMAGE024
Figure 757795DEST_PATH_IMAGE025
Indicate pixel coordinate
Figure 84871DEST_PATH_IMAGE026
,
Figure 103643DEST_PATH_IMAGE027
Figure 578487DEST_PATH_IMAGE028
Indicate barycentric coodinates
Figure 606486DEST_PATH_IMAGE029
,
Figure 850385DEST_PATH_IMAGE022
Indicate pixelTo the Euclidean distance of the center of gravity of Saliency maps;
(2-3), modified Saliency maps, calculating formula are calculated are as follows:
Figure 939881DEST_PATH_IMAGE030
(6)
Wherein, W and H respectively indicates the width and height of image,
Figure 571238DEST_PATH_IMAGE022
Indicate pixel
Figure 872906DEST_PATH_IMAGE011
To the Euclidean distance of the center of gravity of Saliency maps,
Figure 561377DEST_PATH_IMAGE031
Indicate original Saliency maps,
Figure 682916DEST_PATH_IMAGE032
Indicate modified Saliency maps.
Described in above-mentioned steps (3) by iteration update Saliency maps, find with the maximum target rectangle of Saliency maps perimeter difference, the picture material of the target rectangle interior zone is the significant object detected, the specific steps of which are as follows:
(3-1), the initial value that iteration is set, the specific steps of which are as follows:
(3-1-1), it sets
Figure 52718DEST_PATH_IMAGE033
Indicate the number of iterations, wherein
Figure 271210DEST_PATH_IMAGE033
It is 0,1,2,3 ... ...;
(3-1-2), it setsIt indicates theThe Saliency maps updated in secondary iteration, Saliency maps when original state
Figure 647330DEST_PATH_IMAGE035
, wherein
Figure 658012DEST_PATH_IMAGE036
Indicate modified Saliency maps obtained in step (2);
(3-1-3), it sets
Figure 688285DEST_PATH_IMAGE037
It indicates the
Figure 784416DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 558337DEST_PATH_IMAGE039
It indicates rectangular area when original state, is entire Saliency maps;
(3-1-4), it sets
Figure 423525DEST_PATH_IMAGE040
Indicate the
Figure 562383DEST_PATH_IMAGE033
The difference value of the rectangular area and perimeter that obtain in secondary iteration, perimeter refer to the region removed behind rectangular area in Saliency maps, the difference value of rectangular area and perimeter when original state
Figure 208128DEST_PATH_IMAGE041
(3-1-5), it setsIndicate theSaliency maps in secondary iteration
Figure 815192DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point, Saliency maps when original state
Figure 885917DEST_PATH_IMAGE043
The mean value of the significance value of upper all pixels point is
Figure 4570DEST_PATH_IMAGE044
(3-2), Saliency maps are updated by iteration, obtains rectangular area, the specific steps of which are as follows:
(3-2-1), it is located at
Figure 844350DEST_PATH_IMAGE038
In secondary iteration, with
Figure 325010DEST_PATH_IMAGE033
The Saliency maps updated in -1 iteration
Figure 945347DEST_PATH_IMAGE045
The significance value of upper each pixel subtracts Saliency maps
Figure 536865DEST_PATH_IMAGE045
The mean value of the significance value of upper all pixels point
Figure 293469DEST_PATH_IMAGE046
, the Saliency maps that are updated
Figure 210609DEST_PATH_IMAGE034
; 
(3-2-2), using efficient child window searching algorithm, in the Saliency maps of update
Figure 990346DEST_PATH_IMAGE034
One rectangular area of middle acquisition
Figure 447872DEST_PATH_IMAGE037
, the sum of all pixels point is greater than the Saliency maps updated in the rectangular area
Figure 262245DEST_PATH_IMAGE034
The sum of pixel in upper any other rectangle;
(3-2-3), the rectangular area obtained in step (3-2-2) is calculatedWith the difference value of perimeter, calculating formula are as follows:
Figure 414057DEST_PATH_IMAGE047
(7)
Wherein,
Figure 612957DEST_PATH_IMAGE037
It indicates the
Figure 344153DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,Indicate modified Saliency maps obtained in step (2),
Figure 154163DEST_PATH_IMAGE048
For with
Figure 891175DEST_PATH_IMAGE037
It is corresponding
Figure 476877DEST_PATH_IMAGE032
On rectangular area,
Figure 906721DEST_PATH_IMAGE040
Indicate the
Figure 213594DEST_PATH_IMAGE033
The difference value of rectangular area and perimeter in secondary iteration,
Figure 488717DEST_PATH_IMAGE049
Indicate an intermediate variable;
Figure 194505DEST_PATH_IMAGE050
(8)
Wherein,
Figure 529671DEST_PATH_IMAGE037
It indicates the
Figure 320910DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 399724DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 960019DEST_PATH_IMAGE048
For with
Figure 731665DEST_PATH_IMAGE037
It is corresponding
Figure 682304DEST_PATH_IMAGE032
On rectangular area,
Figure 627126DEST_PATH_IMAGE051
Indicate rectangular area
Figure 979610DEST_PATH_IMAGE048
Internal pixel number,Indicate rectangular area
Figure 422410DEST_PATH_IMAGE048
The mean value of the significance value of internal all pixels point,
Figure 843027DEST_PATH_IMAGE053
Indicate rectangular area
Figure 377913DEST_PATH_IMAGE048
External pixel number,
Figure 491363DEST_PATH_IMAGE054
Indicate rectangular area
Figure 478911DEST_PATH_IMAGE048
The mean value of the significance value of external all pixels point;
(3-2-4), Saliency maps are updated
Figure 703218DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point, calculating formula are as follows:
Figure 102594DEST_PATH_IMAGE055
(9)
Wherein,
Figure 515121DEST_PATH_IMAGE056
Indicate Saliency maps before updating
Figure 605437DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point,
Figure 787019DEST_PATH_IMAGE052
Indicate rectangular area
Figure 976692DEST_PATH_IMAGE048
The mean value of the significance value of internal all pixels point,
Figure 938832DEST_PATH_IMAGE048
For with
Figure 504942DEST_PATH_IMAGE037
It is corresponding
Figure 868928DEST_PATH_IMAGE032
On rectangular area,
Figure 229502DEST_PATH_IMAGE037
It indicates theThe rectangular area obtained in secondary iteration,
Figure 48739DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 939335DEST_PATH_IMAGE042
Indicate Saliency maps after updating
Figure 798706DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point;
(3-2-5), in Saliency maps
Figure 407542DEST_PATH_IMAGE034
On by
Figure 643352DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration
Figure 654033DEST_PATH_IMAGE037
The significance value of all pixels point in addition is set to
Figure 684306DEST_PATH_IMAGE057
(3-3) if,
Figure 845684DEST_PATH_IMAGE038
The difference value of the rectangular area and perimeter that are obtained in secondary iteration
Figure 619605DEST_PATH_IMAGE058
, then
Figure 484793DEST_PATH_IMAGE059
For the target rectangle of acquisition
Figure 623650DEST_PATH_IMAGE060
;Otherwise continue step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, the picture material of the target rectangle interior zone is the significant object detected.
Compared with prior art, the detection of significant object in image can relatively accurately be realized to the detection method of significant object by having the advantages that this method does not need any parameter is arranged in image based on region difference of the invention.
Detailed description of the invention
Fig. 1 is the flow chart of the detection method of significant object in the image of the invention based on region difference;
Fig. 2 is the original image of input;
Fig. 3 is the Saliency maps picture of original image; 
Fig. 4 is the target rectangle figure obtained in the notable figure of modification;
Fig. 5 is to obtain the significant object diagram detected on the original image.
Specific embodiment
The embodiment of the present invention is described in further detail with reference to the accompanying drawings of the specification.
The emulation experiment that the present invention carries out is 2.53GHz, programs and realize on the interior PC test platform for saving as 1.96GB in CPU.
As shown in Figure 1, in the image of the invention based on region difference significant object detection method, described using following steps:
(1), input original image calculates the Saliency maps of original image such as Fig. 2 (a), the specific steps of which are as follows:
(1-1), original image is divided into several regions with mean shift algorithm;
(1-2), each pixel in image and the color similarity of each region, calculating formula are calculated are as follows:
Figure 269395DEST_PATH_IMAGE001
(1)
Wherein,
Figure 519111DEST_PATH_IMAGE002
Indicate the
Figure 566702DEST_PATH_IMAGE002
A region,
Figure 876460DEST_PATH_IMAGE003
Indicate the region being divided into
Figure 9501DEST_PATH_IMAGE004
In pixel number,
Figure 62908DEST_PATH_IMAGE005
Indicate region
Figure 902688DEST_PATH_IMAGE004
In pixel,
Figure 445665DEST_PATH_IMAGE006
Indicate pixel
Figure 738106DEST_PATH_IMAGE005
The color characteristic at place,
Figure 657520DEST_PATH_IMAGE007
Indicate pixel
Figure 351807DEST_PATH_IMAGE008
The color characteristic at place,
Figure 3368DEST_PATH_IMAGE009
Indicate theThe kernel function in a region,It indicates
Figure 120250DEST_PATH_IMAGE008
The pixel at place
Figure 208292DEST_PATH_IMAGE011
With
Figure 475325DEST_PATH_IMAGE002
The color similarity in a region;
(1-3), the color Saliency maps for calculating original image, calculating formula are as follows:
Figure 736542DEST_PATH_IMAGE012
Figure 139842DEST_PATH_IMAGE013
(2)
Wherein,
Figure 398785DEST_PATH_IMAGE002
Indicate the
Figure 215431DEST_PATH_IMAGE002
A region, n indicate region sum,
Figure 952443DEST_PATH_IMAGE010
It indicates
Figure 538145DEST_PATH_IMAGE008
The pixel at place
Figure 967989DEST_PATH_IMAGE011
With
Figure 944036DEST_PATH_IMAGE002
The color similarity in a region,
Figure 547055DEST_PATH_IMAGE014
Indicate the
Figure 190526DEST_PATH_IMAGE002
The color conspicuousness in a region,Indicate the color Saliency maps of original image;
(1-4), the spatial saliency figure for calculating original image, calculating formula are as follows:
Figure 316931DEST_PATH_IMAGE016
(3)
Wherein,
Figure 395746DEST_PATH_IMAGE002
Indicate the
Figure 956040DEST_PATH_IMAGE002
A region, n indicate region sum,
Figure 462108DEST_PATH_IMAGE010
It indicates
Figure 740642DEST_PATH_IMAGE008
The pixel at place
Figure 623147DEST_PATH_IMAGE011
With
Figure 975631DEST_PATH_IMAGE002
The color similarity in a region,Indicate the
Figure 421361DEST_PATH_IMAGE002
The spatial saliency in a region,Indicate the spatial saliency figure of original image;
(1-5), the Saliency maps for calculating original image, calculating formula are as follows:
Figure 376864DEST_PATH_IMAGE019
(4)
Wherein,
Figure 224735DEST_PATH_IMAGE015
Indicate the color Saliency maps of original image,Indicate the spatial saliency figure of original image,The Saliency maps for indicating original image, such as Fig. 2 (b).
The numerical value of each pixel position is the significance value of the pixel in Saliency maps, and the value range of the significance value is 0~255, significance value is bigger, and the expression pixel is more significant, and the smaller expression pixel of significance value is not significant;
(2), modification Saliency maps are calculated, the specific steps of which are as follows:
(2-1), the center of gravity for setting Saliency maps
(2-2), each pixel on Saliency maps is calculated
Figure 110334DEST_PATH_IMAGE011
To the center of gravity
Figure 522861DEST_PATH_IMAGE021
Euclidean distance
Figure 347597DEST_PATH_IMAGE022
, calculating formula are as follows:
Figure 529180DEST_PATH_IMAGE061
(5)
Wherein,
Figure 46749DEST_PATH_IMAGE024
Figure 946572DEST_PATH_IMAGE025
Indicate pixel coordinate
Figure 512682DEST_PATH_IMAGE026
,
Figure 611088DEST_PATH_IMAGE027
Indicate barycentric coodinates,
Figure 790900DEST_PATH_IMAGE022
Indicate pixel
Figure 947075DEST_PATH_IMAGE011
To the Euclidean distance of the center of gravity of Saliency maps;
(2-3), modified Saliency maps, calculating formula are calculated are as follows:
Figure 633595DEST_PATH_IMAGE030
(6)
Wherein, W and H respectively indicates the width and height of image,
Figure 242431DEST_PATH_IMAGE022
Indicate pixel
Figure 478240DEST_PATH_IMAGE011
To the Euclidean distance of the center of gravity of Saliency maps,
Figure 488921DEST_PATH_IMAGE031
Indicate original Saliency maps, such as Fig. 2 (b),
Figure 456877DEST_PATH_IMAGE032
Modified Saliency maps are indicated, such as Fig. 3 (a);
(3), by iteration update Saliency maps, find with the maximum target rectangle of Saliency maps perimeter difference, the picture material of the target rectangle interior zone is the significant object detected, the specific steps of which are as follows:
(3-1), the initial value that iteration is set, the specific steps of which are as follows:
(3-1-1), it sets
Figure 615326DEST_PATH_IMAGE033
Indicate the number of iterations, wherein
Figure 61351DEST_PATH_IMAGE033
It is 0,1,2,3 ... ...;
(3-1-2), it sets
Figure 254435DEST_PATH_IMAGE034
It indicates the
Figure 393292DEST_PATH_IMAGE062
The Saliency maps updated in secondary iteration, Saliency maps when original state
Figure 976720DEST_PATH_IMAGE035
, wherein
Figure 288753DEST_PATH_IMAGE036
Indicate modified Saliency maps obtained in step (2);
(3-1-3), it sets
Figure 274027DEST_PATH_IMAGE037
It indicates the
Figure 583785DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 716826DEST_PATH_IMAGE039
It indicates rectangular area when original state, is entire Saliency maps;The resolution ratio of original image is 378 × 400 in this experiment, the rectangular area of original state
Figure 770233DEST_PATH_IMAGE039
It is 378 × 400;
(3-1-4), it sets
Figure 672330DEST_PATH_IMAGE040
Indicate the
Figure 152990DEST_PATH_IMAGE033
The difference value of the rectangular area and perimeter that obtain in secondary iteration, perimeter refer to the region removed behind rectangular area in Saliency maps, the difference value of rectangular area and perimeter when original state
Figure 445431DEST_PATH_IMAGE041
(3-1-5), it sets
Figure 364845DEST_PATH_IMAGE042
Indicate the
Figure 59132DEST_PATH_IMAGE033
Saliency maps in secondary iteration
Figure 775940DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point, Saliency maps when original state
Figure 821256DEST_PATH_IMAGE043
The mean value of the significance value of upper all pixels point is
Figure 278782DEST_PATH_IMAGE063
, wherein
Figure 827575DEST_PATH_IMAGE064
Indicate Saliency maps
Figure 915617DEST_PATH_IMAGE043
Middle pixel coordinate
Figure 244967DEST_PATH_IMAGE065
The significance value at place,Indicate Saliency maps
Figure 909484DEST_PATH_IMAGE043
In pixel number,
Figure 168427DEST_PATH_IMAGE043
Indicate Saliency maps when original state; 
(3-2), Saliency maps are updated by iteration, obtains rectangular area, the specific steps of which are as follows:
(3-2-1), by taking the 1st iteration as an example, with the Saliency maps of original state
Figure 844127DEST_PATH_IMAGE067
The significance value of upper each pixel subtracts
Figure 909035DEST_PATH_IMAGE043
The mean value of the significance value of upper all pixels point
Figure 432421DEST_PATH_IMAGE068
, the Saliency maps that are updated
Figure 786566DEST_PATH_IMAGE069
(3-2-2), using efficient child window searching algorithm, in the Saliency maps of update
Figure 90509DEST_PATH_IMAGE069
One rectangular area of middle acquisition
Figure 631211DEST_PATH_IMAGE070
, the sum of all pixels point is greater than the Saliency maps updated in the rectangular area
Figure 71420DEST_PATH_IMAGE069
The sum of pixel in upper any other rectangle;
(3-2-3), the rectangular area obtained in step (3-2-2) is calculatedWith the difference value of perimeter, calculating formula are as follows:
Figure 135508DEST_PATH_IMAGE047
(7)
Wherein,It indicates the
Figure 40196DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 608580DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 824798DEST_PATH_IMAGE048
For with
Figure 707304DEST_PATH_IMAGE037
It is corresponding
Figure 122104DEST_PATH_IMAGE032
On rectangular area, such as the purple rectangle in Fig. 3 (a),
Figure 799073DEST_PATH_IMAGE040
Indicate the
Figure 302254DEST_PATH_IMAGE033
The difference value of rectangular area and perimeter in secondary iteration,
Figure 988451DEST_PATH_IMAGE049
Indicate an intermediate variable;
Figure 257758DEST_PATH_IMAGE050
(8)
Wherein,
Figure 371208DEST_PATH_IMAGE037
It indicates the
Figure 296438DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 583063DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 910139DEST_PATH_IMAGE048
For with
Figure 256807DEST_PATH_IMAGE037
It is correspondingOn rectangular area,
Figure 431753DEST_PATH_IMAGE051
Indicate rectangular areaInternal pixel number,
Figure 130905DEST_PATH_IMAGE052
Indicate rectangular area
Figure 30728DEST_PATH_IMAGE048
The mean value of the significance value of internal all pixels point,
Figure 659155DEST_PATH_IMAGE053
Indicate rectangular area
Figure 695244DEST_PATH_IMAGE048
External pixel number,Indicate rectangular area
Figure 505255DEST_PATH_IMAGE048
The mean value of the significance value of external all pixels point;
For example, calculating the rectangular area obtained in the 1st iteration
Figure 875056DEST_PATH_IMAGE070
With the difference value of perimeter, then obtained according to formula (7), wherein
Figure 890602DEST_PATH_IMAGE070
Indicate the rectangular area obtained in the 1st iteration,
Figure 499438DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 460879DEST_PATH_IMAGE072
For with
Figure 471561DEST_PATH_IMAGE070
It is corresponding
Figure 501834DEST_PATH_IMAGE032
On rectangular area,
Figure 597966DEST_PATH_IMAGE073
Indicate the intermediate variable obtained by formula (8); 
(3-2-4), Saliency maps are updatedThe mean value of the significance value of upper all pixels point, calculating formula are as follows:
Figure 641511DEST_PATH_IMAGE055
(9)
Wherein,
Figure 959360DEST_PATH_IMAGE056
Indicate Saliency maps before updating
Figure 536972DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point,
Figure 522245DEST_PATH_IMAGE052
Indicate rectangular area
Figure 894321DEST_PATH_IMAGE048
The mean value of the significance value of internal all pixels point,
Figure 699466DEST_PATH_IMAGE048
For with
Figure 752872DEST_PATH_IMAGE037
It is corresponding
Figure 654969DEST_PATH_IMAGE032
On rectangular area,
Figure 135629DEST_PATH_IMAGE037
It indicates the
Figure 755966DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 347485DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),Indicate Saliency maps after updating
Figure 21228DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point;
For example, updating Saliency maps according to formula (9) in the 1st iterationThe significance value of upper all pixels point is worth
Figure 196175DEST_PATH_IMAGE074
(3-2-5), in Saliency maps
Figure 75794DEST_PATH_IMAGE034
On byThe rectangular area obtained in secondary iteration
Figure 227606DEST_PATH_IMAGE037
The significance value of all pixels point in addition is set to
Figure 426507DEST_PATH_IMAGE057
(3-3) if,The difference value of the rectangular area and perimeter that are obtained in secondary iteration, then
Figure 905395DEST_PATH_IMAGE059
For the target rectangle of acquisition
Figure 970303DEST_PATH_IMAGE060
;Otherwise continuing step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, the picture material of the target rectangle interior zone is the significant object detected, for example, in the 1st iteration,
Figure 493689DEST_PATH_IMAGE041
,
Figure 657954DEST_PATH_IMAGE075
,
Figure 961896DEST_PATH_IMAGE076
Then continue step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, if the yellow rectangle in Fig. 3 (b) is objective correct target rectangle, purple rectangle is the target rectangle detected on the original image, and the picture material of the target rectangle interior zone is the significant object detected.
It can be seen that method of the invention under conditions of not needing that any parameter is arranged from above-mentioned the simulation experiment result, can accurately come out significant object detection.

Claims (4)

1. the detection method of significant object in a kind of image based on region difference, the specific steps of which are as follows:
(1), original image is inputted, the Saliency maps of original image are calculated;
(2), modification Saliency maps are calculated;
(3), by iteration update Saliency maps, find with the maximum target rectangle of perimeter difference, the picture material of the target rectangle interior zone is the significant object detected.
2. the detection method of significant object in the image according to claim 1 based on region difference, which is characterized in that input original image described in above-mentioned steps (1) calculates the Saliency maps of original image, the specific steps of which are as follows:
(1-1), original image is divided into several regions with mean shift algorithm;
(1-2), each pixel in image and the color similarity of each region, calculating formula are calculated are as follows:
Figure 772004DEST_PATH_IMAGE001
(1)
Wherein,
Figure 197825DEST_PATH_IMAGE002
Indicate the
Figure 584944DEST_PATH_IMAGE002
A region,
Figure 17062DEST_PATH_IMAGE003
Indicate the region being divided into
Figure 907658DEST_PATH_IMAGE004
In pixel number,
Figure 767029DEST_PATH_IMAGE005
Indicate regionIn pixel,Indicate pixel
Figure 622356DEST_PATH_IMAGE005
The color characteristic at place,
Figure 590312DEST_PATH_IMAGE007
Indicate pixelThe color characteristic at place,
Figure 460365DEST_PATH_IMAGE009
Indicate the
Figure 450186DEST_PATH_IMAGE002
The kernel function in a region,
Figure 589043DEST_PATH_IMAGE010
It indicates
Figure 234788DEST_PATH_IMAGE008
The pixel at place
Figure 484504DEST_PATH_IMAGE011
WithThe color similarity in a region;
(1-3), the color Saliency maps for calculating original image, calculating formula are as follows:
Figure 844783DEST_PATH_IMAGE012
Figure 915507DEST_PATH_IMAGE013
(2)
Wherein,
Figure 31231DEST_PATH_IMAGE002
Indicate the
Figure 871011DEST_PATH_IMAGE002
A region, n indicate region sum,
Figure 413988DEST_PATH_IMAGE010
It indicates
Figure 706429DEST_PATH_IMAGE008
The pixel at placeWith
Figure 320130DEST_PATH_IMAGE002
The color similarity in a region,
Figure 34008DEST_PATH_IMAGE014
Indicate theThe color conspicuousness in a region,
Figure 536850DEST_PATH_IMAGE015
Indicate the color Saliency maps of original image;
(1-4), the spatial saliency figure for calculating original image, calculating formula are as follows:
Figure 85643DEST_PATH_IMAGE016
(3)
Wherein,
Figure 236002DEST_PATH_IMAGE002
Indicate the
Figure 503035DEST_PATH_IMAGE002
A region, n indicate region sum,
Figure 701935DEST_PATH_IMAGE010
It indicatesThe pixel at place
Figure 426495DEST_PATH_IMAGE011
With
Figure 257789DEST_PATH_IMAGE002
The color similarity in a region,Indicate the
Figure 580503DEST_PATH_IMAGE002
The spatial saliency in a region,
Figure 134982DEST_PATH_IMAGE018
Indicate the spatial saliency figure of original image;
(1-5), the Saliency maps for calculating original image, calculating formula are as follows:
Figure 111028DEST_PATH_IMAGE019
(4)
Wherein,
Figure 714047DEST_PATH_IMAGE015
Indicate the color Saliency maps of original image,
Figure 357518DEST_PATH_IMAGE018
Indicate the spatial saliency figure of original image,
Figure 692685DEST_PATH_IMAGE020
Indicate the Saliency maps of original image, the numerical value of each pixel position is the significance value of the pixel in Saliency maps, and the value range of the significance value is 0~255, significance value is bigger, and the expression pixel is more significant, and the smaller expression pixel of significance value is not significant.
3. the detection method of significant object in the image according to claim 2 based on region difference, which is characterized in that Saliency maps are modified in calculating described in above-mentioned steps (2), the specific steps of which are as follows:
(2-1), the center of gravity for setting Saliency maps
Figure 483923DEST_PATH_IMAGE021
(2-2), each pixel on Saliency maps is calculatedTo the center of gravityEuclidean distance
Figure 629100DEST_PATH_IMAGE022
, calculating formula are as follows:
(5)
Wherein,
Figure 790140DEST_PATH_IMAGE024
Figure 142624DEST_PATH_IMAGE025
Indicate pixel coordinate,
Figure 588353DEST_PATH_IMAGE027
Figure 8970DEST_PATH_IMAGE028
Indicate barycentric coodinates
Figure 543857DEST_PATH_IMAGE029
,Indicate pixel
Figure 379274DEST_PATH_IMAGE011
To the Euclidean distance of the center of gravity of Saliency maps;
(2-3), modified Saliency maps, calculating formula are calculated are as follows:
Figure 603582DEST_PATH_IMAGE030
(6)
Wherein, W and H respectively indicates the width and height of image,
Figure 930659DEST_PATH_IMAGE022
Indicate pixel
Figure 277326DEST_PATH_IMAGE011
To the Euclidean distance of the center of gravity of Saliency maps,
Figure 689853DEST_PATH_IMAGE031
Indicate original Saliency maps,
Figure 514590DEST_PATH_IMAGE032
Indicate modified Saliency maps.
4. the detection method of significant object in the image according to claim 3 based on region difference, it is characterized in that, Saliency maps are updated by iteration described in above-mentioned steps (3), it finds and the maximum target rectangle of Saliency maps perimeter difference, the picture material of the target rectangle interior zone is the significant object detected, the specific steps of which are as follows:
(3-1), the initial value that iteration is set, the specific steps of which are as follows:
(3-1-1), it sets
Figure 696172DEST_PATH_IMAGE033
Indicate the number of iterations, wherein
Figure 151424DEST_PATH_IMAGE033
It is 0,1,2,3 ... ...;
(3-1-2), it sets
Figure 113564DEST_PATH_IMAGE034
It indicates theThe Saliency maps updated in secondary iteration, Saliency maps when original state
Figure 778081DEST_PATH_IMAGE035
, wherein
Figure 404234DEST_PATH_IMAGE036
Indicate modified Saliency maps obtained in step (2);
(3-1-3), it setsIt indicates the
Figure 957892DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,It indicates rectangular area when original state, is entire Saliency maps;
(3-1-4), it sets
Figure 976368DEST_PATH_IMAGE040
Indicate theThe difference value of the rectangular area and perimeter that obtain in secondary iteration, perimeter refer to the region removed behind rectangular area in Saliency maps, the difference value of rectangular area and perimeter when original state
(3-1-5), it sets
Figure 831695DEST_PATH_IMAGE042
Indicate the
Figure 799651DEST_PATH_IMAGE033
Saliency maps in secondary iterationThe mean value of the significance value of upper all pixels point, Saliency maps when original state
Figure 404124DEST_PATH_IMAGE043
The mean value of the significance value of upper all pixels point is
Figure 534892DEST_PATH_IMAGE044
(3-2), Saliency maps are updated by iteration, obtains rectangular area, the specific steps of which are as follows:
(3-2-1), it is located at
Figure 736066DEST_PATH_IMAGE038
In secondary iteration, with
Figure 319494DEST_PATH_IMAGE033
The Saliency maps updated in -1 iterationThe significance value of upper each pixel subtracts Saliency maps
Figure 616800DEST_PATH_IMAGE045
The mean value of the significance value of upper all pixels point
Figure 926559DEST_PATH_IMAGE046
, the Saliency maps that are updated
Figure 59600DEST_PATH_IMAGE034
; 
(3-2-2), using efficient child window searching algorithm, in the Saliency maps of update
Figure 113006DEST_PATH_IMAGE034
One rectangular area of middle acquisition
Figure 952786DEST_PATH_IMAGE037
, the sum of all pixels point is greater than the Saliency maps updated in the rectangular area
Figure 495763DEST_PATH_IMAGE034
The sum of pixel in upper any other rectangle;
(3-2-3), the rectangular area obtained in step (3-2-2) is calculated
Figure 788204DEST_PATH_IMAGE037
With the difference value of perimeter, calculating formula are as follows:
Figure 707619DEST_PATH_IMAGE047
(7)
Wherein,
Figure 401905DEST_PATH_IMAGE037
It indicates the
Figure 53466DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 164029DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 559239DEST_PATH_IMAGE048
For with
Figure 170349DEST_PATH_IMAGE037
It is corresponding
Figure 258390DEST_PATH_IMAGE032
On rectangular area,
Figure 525424DEST_PATH_IMAGE040
Indicate the
Figure 521061DEST_PATH_IMAGE033
The difference value of rectangular area and perimeter in secondary iteration,
Figure 189940DEST_PATH_IMAGE049
Indicate an intermediate variable;
Figure 511200DEST_PATH_IMAGE050
(8)
Wherein,
Figure 999950DEST_PATH_IMAGE037
It indicates the
Figure 2541DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 588243DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 752509DEST_PATH_IMAGE048
For with
Figure 56451DEST_PATH_IMAGE037
It is corresponding
Figure 597154DEST_PATH_IMAGE032
On rectangular area,
Figure 975045DEST_PATH_IMAGE051
Indicate rectangular area
Figure 638108DEST_PATH_IMAGE048
Internal pixel number,
Figure 101450DEST_PATH_IMAGE052
Indicate rectangular area
Figure 508161DEST_PATH_IMAGE048
The mean value of the significance value of internal all pixels point,
Figure 6138DEST_PATH_IMAGE053
Indicate rectangular areaExternal pixel number,
Figure 781951DEST_PATH_IMAGE054
Indicate rectangular areaThe mean value of the significance value of external all pixels point;
(3-2-4), Saliency maps are updated
Figure 79258DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point
Figure 756227DEST_PATH_IMAGE042
, calculating formula are as follows:
Figure 194161DEST_PATH_IMAGE055
(9)
Wherein,Indicate Saliency maps before updating
Figure 149665DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point,
Figure 325431DEST_PATH_IMAGE052
Indicate rectangular area
Figure 250662DEST_PATH_IMAGE048
The mean value of the significance value of internal all pixels point,
Figure 474970DEST_PATH_IMAGE048
For with
Figure 864363DEST_PATH_IMAGE037
It is corresponding
Figure 211031DEST_PATH_IMAGE032
On rectangular area,It indicates the
Figure 448294DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration,
Figure 692193DEST_PATH_IMAGE032
Indicate modified Saliency maps obtained in step (2),
Figure 147446DEST_PATH_IMAGE042
Indicate Saliency maps after updating
Figure 112515DEST_PATH_IMAGE034
The mean value of the significance value of upper all pixels point;
(3-2-5), in Saliency mapsOn by
Figure 714715DEST_PATH_IMAGE038
The rectangular area obtained in secondary iteration
Figure 403185DEST_PATH_IMAGE037
The significance value of all pixels point in addition is set to
Figure 587042DEST_PATH_IMAGE057
(3-3) if,The difference value of the rectangular area and perimeter that are obtained in secondary iteration
Figure 113018DEST_PATH_IMAGE058
, then
Figure 972390DEST_PATH_IMAGE059
For the target rectangle of acquisition;Otherwise continue step (3-2) and Saliency maps are updated by iteration, obtain target rectangle, the picture material of the target rectangle interior zone is the significant object detected.
CN 201110312091 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference Expired - Fee Related CN102509072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110312091 CN102509072B (en) 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110312091 CN102509072B (en) 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference

Publications (2)

Publication Number Publication Date
CN102509072A true CN102509072A (en) 2012-06-20
CN102509072B CN102509072B (en) 2013-08-28

Family

ID=46221153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110312091 Expired - Fee Related CN102509072B (en) 2011-10-17 2011-10-17 Method for detecting salient object in image based on inter-area difference

Country Status (1)

Country Link
CN (1) CN102509072B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938139A (en) * 2012-11-09 2013-02-20 清华大学 Automatic synthesis method for fault finding game images
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN110689007A (en) * 2019-09-16 2020-01-14 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
CN111461139A (en) * 2020-03-27 2020-07-28 武汉工程大学 Multi-target visual saliency layered detection method in complex scene
CN113114943A (en) * 2016-12-22 2021-07-13 三星电子株式会社 Apparatus and method for processing image
US11670068B2 (en) 2016-12-22 2023-06-06 Samsung Electronics Co., Ltd. Apparatus and method for processing image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510299A (en) * 2009-03-04 2009-08-19 上海大学 Image self-adapting method based on vision significance

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510299A (en) * 2009-03-04 2009-08-19 上海大学 Image self-adapting method based on vision significance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郎丛妍等: "一种基于模糊信息粒化的视频时空显著单元提取方法", 《电子学报》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218832A (en) * 2012-10-15 2013-07-24 上海大学 Visual saliency algorithm based on overall color contrast ratio and space distribution in image
CN103218832B (en) * 2012-10-15 2016-01-13 上海大学 Based on the vision significance algorithm of global color contrast and spatial distribution in image
CN102938139A (en) * 2012-11-09 2013-02-20 清华大学 Automatic synthesis method for fault finding game images
CN102938139B (en) * 2012-11-09 2015-03-04 清华大学 Automatic synthesis method for fault finding game images
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106407978B (en) * 2016-09-24 2020-10-30 上海大学 Method for detecting salient object in unconstrained video by combining similarity degree
CN113114943A (en) * 2016-12-22 2021-07-13 三星电子株式会社 Apparatus and method for processing image
US11670068B2 (en) 2016-12-22 2023-06-06 Samsung Electronics Co., Ltd. Apparatus and method for processing image
CN113114943B (en) * 2016-12-22 2023-08-04 三星电子株式会社 Apparatus and method for processing image
CN110689007A (en) * 2019-09-16 2020-01-14 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
CN110689007B (en) * 2019-09-16 2022-04-15 Oppo广东移动通信有限公司 Subject recognition method and device, electronic equipment and computer-readable storage medium
CN111461139A (en) * 2020-03-27 2020-07-28 武汉工程大学 Multi-target visual saliency layered detection method in complex scene

Also Published As

Publication number Publication date
CN102509072B (en) 2013-08-28

Similar Documents

Publication Publication Date Title
CN102509072A (en) Method for detecting salient object in image based on inter-area difference
CN110232311B (en) Method and device for segmenting hand image and computer equipment
US11062124B2 (en) Face pose detection method, device and storage medium
CN108052624A (en) Processing Method of Point-clouds, device and computer readable storage medium
Ye et al. A new method based on hough transform for quick line and circle detection
CN107063261B (en) Multi-feature information landmark detection method for precise landing of unmanned aerial vehicle
CN104615986B (en) The method that pedestrian detection is carried out to the video image of scene changes using multi-detector
CN103605978A (en) Urban illegal building identification system and method based on three-dimensional live-action data
CN106384355B (en) A kind of automatic calibration method in projection interactive system
CN101490711A (en) Image processing device and image processing method
CN102663401B (en) Image characteristic extracting and describing method
CN105574533B (en) A kind of image characteristic extracting method and device
CN105457908B (en) The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD
US10235576B2 (en) Analysis method of lane stripe images, image analysis device, and non-transitory computer readable medium thereof
CN108052869B (en) Lane line recognition method, lane line recognition device and computer-readable storage medium
CN110490839B (en) Method and device for detecting damaged area in expressway and computer equipment
CN107507226A (en) A kind of method and device of images match
CN109509222B (en) Method and device for detecting linear object
CN108615014B (en) Eye state detection method, device, equipment and medium
CN102411705A (en) Method and interface of recognizing user's dynamic organ gesture and elec tric-using apparatus using the interface
CN103778436A (en) Pedestrian gesture inspecting method based on image processing
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
EP2884459A1 (en) Image processing device, image processing method, and image processing program
CN115359295A (en) Decoupling knowledge distillation hardware target detection method and system
US20150103080A1 (en) Computing device and method for simulating point clouds

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130828

Termination date: 20201017

CF01 Termination of patent right due to non-payment of annual fee