CN107895162B - Image saliency target detection algorithm based on object prior - Google Patents

Image saliency target detection algorithm based on object prior

Info

Publication number
CN107895162B
CN107895162B (application CN201710967092.4A)
Authority
CN
China
Prior art keywords
target candidate
saliency
image
target
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710967092.4A
Other languages
Chinese (zh)
Other versions
CN107895162A (en)
Inventor
周圆
毛爱玲
霍树伟
张天昊
李绰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710967092.4A priority Critical patent/CN107895162B/en
Publication of CN107895162A publication Critical patent/CN107895162A/en
Application granted granted Critical
Publication of CN107895162B publication Critical patent/CN107895162B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image saliency target detection algorithm based on an object prior, which comprises the following steps: step (1), segmenting the image into N superpixels and computing an initial saliency value for each region from spatially weighted region contrast, thereby obtaining an initial saliency map; step (2), generating a plurality of target candidate blocks from the single input original image by a proposal algorithm and screening out a series of high-quality target candidate blocks; step (3), computing, for each target candidate block, a score measuring how well it covers the salient target, by comparing its overlap with the initial saliency map obtained in step (1); step (4), taking the scores of the target candidate blocks as weights and performing a weighted fusion of the screened target candidate blocks to obtain a target-level saliency map S_obj; and step (5), solving a minimized energy equation to obtain the final saliency value S. The invention maintains both high precision and high recall on different data sets, and the salient object can be accurately located.

Description

Image saliency target detection algorithm based on object prior
Technical Field
The invention relates to the technical field of digital image processing, in particular to a method for detecting a salient object of an image.
Background
With the development of information technology and the growing popularity of intelligent terminal products, hundreds of millions of pieces of multimedia data are generated and spread every day, which poses great challenges to image and video processing. Faced with the massive information of the big data era, how to effectively improve the efficiency with which computers analyze and process images has become a focus of researchers in the field of computer vision.
Neuropsychological research has found that, when processing a complex scene, the human visual system first screens out the region of greatest interest and processes it preferentially, thereby achieving rapid analysis and understanding of the scene. Inspired by this mechanism, researchers have sought to let computers simulate the human visual attention mechanism: detect the salient region containing the main information of an image and filter out redundant background information, so as to reduce the time complexity of analyzing and understanding image content. This gave rise to research on image visual saliency detection.
Saliency detection aims to extract the region of an image that attracts the most visual attention. It can serve as an image preprocessing step that reduces the computational complexity of subsequent algorithms, and it has been applied in many fields of computer vision. Most existing saliency detection algorithms exploit various kinds of prior information in the image. Wei et al.[1] propose a background prior model based on the prior knowledge that the image border is usually background: the image border is extracted as the background, and saliency is defined by the geodesic distance between the region to be detected and the background. Yang et al.[2] coarsely locate the salient target with a convex hull of Harris interest points and use this position information to describe saliency. Perazzi et al.[3] propose a color-distribution prior for computing saliency, based on the assumption that the colors of the target are compactly distributed. These existing methods achieve good detection results, but they usually compute a saliency value for each small region and neglect the integrity of the salient target, so the detected salient region suffers from internal discontinuities.
Reference to the literature
[1] Yichen Wei, Fang Wen, Wangjiang Zhu, Jian Sun. Geodesic Saliency Using Background Priors [C]. European Conference on Computer Vision, 2012.
[2] Chuan Yang, Lihe Zhang, Huchuan Lu. Graph-Regularized Saliency Detection with Convex-Hull-Based Center Prior [J]. IEEE Signal Processing Letters, 2013, 20(7): 637-640.
[3] Federico Perazzi, Philipp Krahenbuhl, Yael Pritch, Alexander Hornung. Saliency Filters: Contrast Based Filtering for Salient Region Detection [C]. IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[4] R. Achanta, K. Smith, A. Lucchi, P. Fua, S. Susstrunk. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282.
[5] Ian Endres, Derek Hoiem. Category Independent Object Proposals [C]. European Conference on Computer Vision, 2010: 575-588.
Disclosure of Invention
To improve upon the prior art, the invention provides an image saliency target detection algorithm based on an object prior. Considering that the salient target in an image is generally an object, an object prior method is introduced into the algorithm and used to detect the approximate position and shape of the objects in the image.
The invention relates to an image saliency target detection algorithm based on object prior, which comprises the following steps:
step 1, the SLIC superpixel segmentation algorithm is adopted to segment the image into N superpixels {R_i}, and the initial saliency value S_i^initial of each superpixel R_i is then calculated from the spatially weighted region contrast (the calculation formula is given as an image in the original publication),
wherein c_i and c_j denote the values of superpixels R_i and R_j in the CIE-Lab color space, p_i and p_j denote the normalized spatial positions of R_i and R_j, and σ_p is a constant controlling the weight of the global contrast;
an initial saliency map is thereby obtained;
step 2, a plurality of target candidate blocks are generated from the single input original image by a proposal algorithm, and a series of high-quality target candidate blocks O^select = {O_k^select}, denoting the set of retained target candidate blocks, is screened out;
step 3, each target candidate block is compared with the initial saliency map S^initial obtained in step 1, and the score F_k with which each target candidate block covers the salient target is calculated (the calculation formula is given as an image in the original publication),
wherein O_k^select denotes the kth screened target candidate block and S^initial denotes the entire initial saliency map;
step 4, with the score F_k of each target candidate block used as its weight, the screened target candidate blocks are weighted and fused to obtain the target-level saliency map S_obj (the calculation formula is given as an image in the original publication),
wherein Num(O^select) denotes the total number of screened target candidate blocks;
and step 5, with the superpixels obtained in step 1 as the basic units of computation, a saliency energy equation is defined:
E = λ_s Σ_i (S_i - S_i^obj)^2 + λ_r Σ_{i<j} w_ij (S_i - S_j)^2, the second sum taken over pairs of adjacent superpixels,
w_ij = exp(-||c_i - c_j||^2 / σ_c^2),
wherein λ_s and λ_r are constant parameters controlling the weights of the two terms of the equation, w_ij denotes the similarity of two adjacent superpixels, and σ_c is a constant controlling the weight of the color difference;
the saliency map is optimized with this smoothness constraint, and the final saliency value S is obtained by solving the minimized energy equation:
S = argmin { E = λ_s (S - S_obj)^T (S - S_obj) + λ_r S^T (D_w - W) S },
W = (w_ij)_{N×N},
D_w = diag(d_11, ..., d_NN), with d_ii = Σ_j w_ij,
wherein W is the adjacency matrix describing the similarity between superpixels, D_w is the degree matrix whose diagonal entries are the sums of the similarities between each superpixel and its neighbors, λ_s and λ_r are constant parameters controlling the weights of the two terms, w_ij denotes the similarity of two adjacent superpixels, and σ_c is a constant controlling the weight of the color difference.
Compared with the prior art, the object-prior-based image saliency target detection algorithm provided by the invention makes full use of the position and shape information of the target extracted by the object prior method. In practice, the detection maintains both high precision and high recall on different data sets; for different scene types and target sizes, the salient object can be accurately located while the smoothness of the salient region is preserved; in addition, for pictures in which the foreground and background are difficult to distinguish, the method still obtains a satisfactory detection result.
Drawings
FIG. 1 is a schematic flow chart of an image saliency target detection algorithm based on object priors according to the present invention;
FIG. 2 is a schematic diagram of an embodiment;
FIG. 3 compares the results of the algorithm of the present invention with those of existing algorithms: (a) PR curve comparison; (b) qualitative comparison of the saliency maps.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, the details of the object-prior-based image saliency target detection algorithm of the present invention are as follows:
Step 1, the SLIC superpixel segmentation algorithm[4] is adopted to segment the image into N superpixels {R_i}, and the initial saliency value S_i^initial of each superpixel R_i is then calculated from the spatially weighted region contrast (the calculation formula is given as an image in the original publication),
wherein c_i and c_j denote the values of superpixels R_i and R_j in the CIE-Lab color space, p_i and p_j denote the normalized spatial positions of R_i and R_j, and σ_p is a constant controlling the weight of the global contrast.
An initial saliency map is thereby obtained.
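For readers who want to experiment with this step, the following Python sketch combines SLIC segmentation with a spatially weighted contrast measure. Since the exact contrast formula is only available as an image in the patent, the exponential distance weighting used below, built from the variables the text names (c_i, c_j, p_i, p_j, σ_p), is an assumption rather than the patent's verbatim formula, and the function name initial_saliency is illustrative.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def initial_saliency(image_rgb, n_segments=300, sigma_p=0.25):
    """Step 1 sketch: SLIC superpixels + spatially weighted region contrast."""
    labels = slic(image_rgb, n_segments=n_segments, compactness=10)
    lab = rgb2lab(image_rgb)
    h, w = labels.shape
    ids = np.unique(labels)

    # mean CIE-Lab color c_i and normalized centroid p_i of every superpixel
    c = np.array([lab[labels == i].mean(axis=0) for i in ids])
    yy, xx = np.mgrid[0:h, 0:w]
    p = np.array([[yy[labels == i].mean() / h, xx[labels == i].mean() / w] for i in ids])

    # assumed contrast form: S_i = sum_j exp(-||p_i - p_j||^2 / sigma_p^2) * ||c_i - c_j||
    dc = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    dp = np.sum((p[:, None, :] - p[None, :, :]) ** 2, axis=2)
    s = (np.exp(-dp / sigma_p ** 2) * dc).sum(axis=1)

    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]
    return labels, s
```

A pixel-level initial saliency map can then be obtained by indexing the per-superpixel values with the label image, e.g. sal_map = s[labels] when the labels are contiguous from zero.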
Step 2, image regions that are likely to contain an object are extracted using the prior knowledge that the salient target of an image is usually an object, relying on an existing image object detection technique. The single input original image is processed by the Category Independent Object Proposals algorithm[5] to generate a plurality of target candidate blocks, and a series of high-quality target candidate blocks O^select = {O_k^select}, the set of retained target candidate blocks, is screened out with the following two rules (implemented in the sketch after this list):
(1) the area of the target candidate block is less than 50% of the total area of the image;
(2) the length of the candidate block boundary that coincides with the image border, divided by the total perimeter of the candidate block, is less than 40%.
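A minimal sketch of the two screening rules, assuming the proposal algorithm has already produced a list of binary masks (one per candidate block); the crude perimeter estimate and the helper name screen_candidates are illustrative choices, not part of the patent.

```python
import numpy as np

def screen_candidates(masks, max_area_ratio=0.5, max_border_ratio=0.4):
    """Step 2 sketch: keep only high-quality candidate blocks.

    masks is assumed to be a list of HxW boolean arrays, one per target
    candidate block produced by the proposal algorithm [5].
    """
    selected = []
    for m in masks:
        h, w = m.shape
        # rule (1): candidate area below 50% of the image area
        if m.sum() >= max_area_ratio * h * w:
            continue
        # rule (2): at most 40% of the candidate's perimeter may lie on the image border
        on_border = m[0, :].sum() + m[-1, :].sum() + m[:, 0].sum() + m[:, -1].sum()
        padded = np.pad(m, 1, constant_values=False)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = (m & ~interior).sum()  # boundary pixels of the candidate mask
        if perimeter > 0 and on_border / perimeter >= max_border_ratio:
            continue
        selected.append(m)
    return selected
```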
Step 3, each target candidate block is compared with the initial saliency map S^initial obtained in step 1, and the score F_k with which each target candidate block covers the salient target is calculated (the calculation formula is given as an image in the original publication),
wherein O_k^select denotes the kth screened target candidate block and S^initial denotes the entire initial saliency map.
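The scoring formula itself is only available as an image. One plausible reading, consistent with the stated idea of comparing each candidate's overlap with the initial saliency map, is the fraction of the total initial saliency mass that falls inside the candidate; the sketch below implements that reading and should be treated as an assumption.

```python
import numpy as np

def candidate_scores(initial_saliency_map, masks):
    """Step 3 sketch: score F_k for each screened candidate block.

    Assumed formula: saliency mass inside the candidate divided by the
    total saliency mass of the initial saliency map.
    """
    total = initial_saliency_map.sum() + 1e-12
    return [float(initial_saliency_map[m].sum() / total) for m in masks]
```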
Step 4, with the score F_k of each target candidate block used as its weight, the screened target candidate blocks are weighted and fused to obtain the target-level saliency map S_obj (the calculation formula is given as an image in the original publication),
wherein Num(O^select) denotes the total number of screened target candidate blocks.
Step 5, to ensure the continuity of the saliency values of similar and adjacent regions in the saliency map, a smoothness constraint is adopted to optimize the saliency map. With the superpixels obtained in step 1 as the basic units of computation, a saliency energy equation is defined:
E = λ_s Σ_i (S_i - S_i^obj)^2 + λ_r Σ_{i<j} w_ij (S_i - S_j)^2, the second sum taken over pairs of adjacent superpixels,
w_ij = exp(-||c_i - c_j||^2 / σ_c^2),
wherein λ_s and λ_r are constant parameters controlling the weights of the two terms of the equation, w_ij denotes the similarity of two adjacent superpixels, and σ_c is a constant controlling the weight of the color difference.
The final saliency value S is obtained by solving the minimized energy equation:
S = argmin { E = λ_s (S - S_obj)^T (S - S_obj) + λ_r S^T (D_w - W) S },
W = (w_ij)_{N×N},
D_w = diag(d_11, ..., d_NN), with d_ii = Σ_j w_ij,
wherein W is the adjacency matrix describing the similarity between superpixels, D_w is the degree matrix whose diagonal entries are the sums of the similarities between each superpixel and its neighbors, λ_s is the weight of the saliency fidelity term, and λ_r is the weight of the smoothness constraint term.
Setting the derivative of E with respect to S to 0 gives
2 λ_s (S - S_obj) + 2 λ_r (D_w - W) S = 0,
(λ_s I + λ_r D_w - λ_r W) S = λ_s S_obj,
so that the value of S is expressed as follows:
S = (λ_s I + λ_r D_w - λ_r W)^(-1) (λ_s S_obj).
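The closed-form solution above maps directly to a few lines of linear algebra. The sketch below operates on per-superpixel quantities; the Gaussian color-similarity weight w_ij is the assumed form discussed above, and the default parameter values are the ones reported in the experiments (σ_c = 20, λ_s = 20, λ_r = 30).

```python
import numpy as np

def refine_saliency(s_obj_sp, colors, adjacency, lam_s=20.0, lam_r=30.0, sigma_c=20.0):
    """Step 5 sketch: smoothness-constrained refinement via the closed form
    S = (lam_s*I + lam_r*Dw - lam_r*W)^(-1) * (lam_s * S_obj).

    s_obj_sp  : (N,) object-level saliency averaged per superpixel
    colors    : (N, 3) mean CIE-Lab color of each superpixel
    adjacency : (N, N) boolean matrix, True where two superpixels are neighbours
    """
    n = len(s_obj_sp)
    # assumed similarity: w_ij = exp(-||c_i - c_j||^2 / sigma_c^2) for adjacent superpixels
    dc2 = np.sum((colors[:, None, :] - colors[None, :, :]) ** 2, axis=2)
    w = np.where(adjacency, np.exp(-dc2 / sigma_c ** 2), 0.0)
    dw = np.diag(w.sum(axis=1))                       # degree matrix D_w
    a = lam_s * np.eye(n) + lam_r * dw - lam_r * w    # lam_s*I + lam_r*Dw - lam_r*W
    s = np.linalg.solve(a, lam_s * s_obj_sp)          # closed-form minimizer of E
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```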
as shown in fig. 3, the comparison graph of the algorithm of the present invention with the execution result of the existing algorithm, (a) the PR curve comparison result; (b) and qualitatively comparing the results of the saliency maps. Parameters used in the experiment: n is set to 300, σpSet to 0.25, σcIs set to 20, lambdasIs set to 20, lambdarSet to 30. The experimental image is from an ECSSD dataset, which is a complex datasetAnd 1000 pictures with complex scenes are available.
First, the experiment shows the effect achieved by the algorithm of the patent using PR curves. Fig. 3(a) is a comparison result of PR curves. As can be seen from the comparison, the algorithm of the invention has better effect than the compared several significant target detection algorithms. Then, to further illustrate the effectiveness of the algorithm of this patent, fig. 3(b) illustrates the results of qualitative comparison of saliency maps of different algorithms, taking 6 pictures with complex scenes as an example. Compared with other algorithms, the algorithm disclosed by the invention can effectively inhibit a complex background in a picture, remove a noise area in the background and simultaneously ensure the integrity of a remarkable target.
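As a side note for reproducing the kind of comparison shown in FIG. 3(a), precision-recall curves for saliency maps are conventionally obtained by binarizing the map at a sweep of thresholds and comparing each binary map with the ground-truth mask; a minimal sketch follows (the helper name pr_curve is illustrative).

```python
import numpy as np

def pr_curve(saliency_map, gt_mask, n_thresholds=256):
    """Sweep binarization thresholds over a saliency map in [0, 1] and compare
    each binary map with the ground-truth mask to obtain precision and recall."""
    gt = gt_mask.astype(bool)
    precisions, recalls = [], []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        pred = saliency_map >= t
        tp = np.logical_and(pred, gt).sum()
        precisions.append(tp / (pred.sum() + 1e-12))
        recalls.append(tp / (gt.sum() + 1e-12))
    return np.array(precisions), np.array(recalls)
```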

Claims (3)

1. An image saliency target detection algorithm based on an object prior, characterized in that the algorithm comprises the following steps:
step (1), adopting the SLIC superpixel segmentation algorithm to segment the image into N superpixels {R_i}, and then calculating the initial saliency value S_i^initial of each superpixel R_i from the spatially weighted region contrast (the calculation formula is given as an image in the original publication),
wherein c_i and c_j denote the values of superpixels R_i and R_j in the CIE-Lab color space, p_i and p_j denote the normalized spatial positions of R_i and R_j, and σ_p is a constant controlling the weight of the global contrast;
thereby obtaining an initial saliency map;
step (2), generating a plurality of target candidate blocks from the single input original image by a proposal algorithm, and screening out a series of high-quality target candidate blocks O^select = {O_k^select}, denoting the set of retained target candidate blocks;
step (3), comparing each target candidate block with the initial saliency map S^initial obtained in step (1), and calculating the score F_k with which each target candidate block covers the salient target (the calculation formula is given as an image in the original publication),
wherein O_k^select denotes the kth screened target candidate block and S^initial denotes the entire initial saliency map;
step (4), with the score F_k of each target candidate block used as its weight, weighting and fusing the screened target candidate blocks to obtain the target-level saliency map S_obj (the calculation formula is given as an image in the original publication),
wherein Num(O^select) denotes the total number of screened target candidate blocks;
step (5), taking the superpixels obtained in step (1) as the basic units of computation, defining a saliency energy equation:
E = λ_s Σ_i (S_i - S_i^obj)^2 + λ_r Σ_{i<j} w_ij (S_i - S_j)^2, the second sum taken over pairs of adjacent superpixels,
w_ij = exp(-||c_i - c_j||^2 / σ_c^2),
optimizing the saliency map with this smoothness constraint, and solving the minimized energy equation to obtain the final saliency value S:
S = argmin { E = λ_s (S - S_obj)^T (S - S_obj) + λ_r S^T (D_w - W) S },
W = (w_ij)_{N×N},
D_w = diag(d_11, ..., d_NN), with d_ii = Σ_j w_ij,
wherein λ_s and λ_r are constant parameters controlling the weights of the two terms of the equation, w_ij denotes the similarity of two adjacent superpixels, and σ_c is a constant controlling the weight of the color difference.
2. The object-prior-based image saliency target detection algorithm of claim 1, characterized in that in said step (2), the series of high-quality target candidate blocks O^select = {O_k^select} is screened out according to the following rules: rule one, the area of the target candidate block is less than 50% of the total area of the image; rule two, the length of the candidate block boundary that coincides with the image border, divided by the total perimeter of the candidate block, is less than 40%.
3. The object-prior-based image saliency target detection algorithm of claim 1, characterized in that in said step (5), setting the derivative of E to 0 gives:
2 λ_s (S - S_obj) + 2 λ_r (D_w - W) S = 0,
(λ_s I + λ_r D_w - λ_r W) S = λ_s S_obj,
and the value of S is expressed as follows:
S = (λ_s I + λ_r D_w - λ_r W)^(-1) (λ_s S_obj).
CN201710967092.4A 2017-10-17 2017-10-17 Image saliency target detection algorithm based on object prior Active CN107895162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710967092.4A CN107895162B (en) 2017-10-17 2017-10-17 Image saliency target detection algorithm based on object prior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710967092.4A CN107895162B (en) 2017-10-17 2017-10-17 Image saliency target detection algorithm based on object prior

Publications (2)

Publication Number Publication Date
CN107895162A CN107895162A (en) 2018-04-10
CN107895162B true CN107895162B (en) 2021-08-03

Family

ID=61803681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710967092.4A Active CN107895162B (en) 2017-10-17 2017-10-17 Image saliency target detection algorithm based on object prior

Country Status (1)

Country Link
CN (1) CN107895162B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446976A (en) * 2018-10-24 2019-03-08 闽江学院 A kind of video big data information extracting method based on wavelet transform and Characteristic Contrast
EP4057225B1 (en) * 2019-01-28 2023-10-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Localization of elements in the space
CN110211078B (en) * 2019-05-14 2021-01-19 大连理工大学 Significance detection method based on anisotropic diffusion
CN113420671A (en) * 2021-06-24 2021-09-21 杭州电子科技大学 Saliency target detection method based on global information attention

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208115A (en) * 2013-03-01 2013-07-17 上海交通大学 Detection method for salient regions of images based on geodesic line distance
CN104240244A (en) * 2014-09-10 2014-12-24 上海交通大学 Significant object detection method based on propagation modes and manifold ranking
CN104680546A (en) * 2015-03-12 2015-06-03 安徽大学 Salient image target detection method
CN105513070A (en) * 2015-12-07 2016-04-20 天津大学 RGB-D salient object detection method based on foreground and background optimization
CN105787930A (en) * 2016-02-17 2016-07-20 上海文广科技(集团)有限公司 Sharpness-based significance detection method and system for virtual images
CN106373131A (en) * 2016-08-25 2017-02-01 上海交通大学 Edge-based image significant region detection method
CN106815842A (en) * 2017-01-23 2017-06-09 河海大学 A kind of improved image significance detection method based on super-pixel
CN107203781A (en) * 2017-05-22 2017-09-26 浙江大学 A kind of object detection method Weakly supervised end to end instructed based on conspicuousness

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8432491B2 (en) * 2011-08-29 2013-04-30 National Taiwan University Object-based system and method of directing visual attention by a subliminal cue
KR101537174B1 (en) * 2013-12-17 2015-07-15 가톨릭대학교 산학협력단 Method for extracting salient object from stereoscopic video
JP2015215741A (en) * 2014-05-09 2015-12-03 キヤノン株式会社 Subject detection device, subject detection method and program

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208115A (en) * 2013-03-01 2013-07-17 上海交通大学 Detection method for salient regions of images based on geodesic line distance
CN104240244A (en) * 2014-09-10 2014-12-24 上海交通大学 Significant object detection method based on propagation modes and manifold ranking
CN104680546A (en) * 2015-03-12 2015-06-03 安徽大学 Salient image target detection method
CN105513070A (en) * 2015-12-07 2016-04-20 天津大学 RGB-D salient object detection method based on foreground and background optimization
CN105787930A (en) * 2016-02-17 2016-07-20 上海文广科技(集团)有限公司 Sharpness-based significance detection method and system for virtual images
CN106373131A (en) * 2016-08-25 2017-02-01 上海交通大学 Edge-based image significant region detection method
CN106815842A (en) * 2017-01-23 2017-06-09 河海大学 A kind of improved image significance detection method based on super-pixel
CN107203781A (en) * 2017-05-22 2017-09-26 浙江大学 A kind of object detection method Weakly supervised end to end instructed based on conspicuousness

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Category Independent Object Proposals; Ian Endres et al.; European Conference on Computer Vision 2010; 2010-12-31; 1-14 *
Improved Saliency Optimization Based on Superpixel-Wised Objectness and Boundary Connectivity; Yanzhao Wang et al.; CCPR 2016; 2016-12-31; 218-228 *
Salient Object Detection Based on Objectness; Baoyan Wang et al.; 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC); 2015-11-30; 1-5 *
Salient object detection using hierarchical prior estimation; Xu Wei et al.; Acta Automatica Sinica; 2015-04-30; Vol. 41, No. 4; 799-812 *
Unsupervised detection of generic objects in images; Song Xiurui et al.; Optics and Precision Engineering; 2014-01-31; Vol. 22, No. 1; 160-168 *
Salient object detection fusing background prior and center prior; Zhou Shuaijun et al.; Journal of Image and Graphics; 2017-05-31; Vol. 22, No. 5; 584-595 *

Also Published As

Publication number Publication date
CN107895162A (en) 2018-04-10

Similar Documents

Publication Publication Date Title
US11681418B2 (en) Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning
CN107895162B (en) Image saliency target detection algorithm based on object prior
WO2019071976A1 (en) Panoramic image saliency detection method based on regional growth and eye movement model
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
JP2002288658A (en) Object extracting device and method on the basis of matching of regional feature value of segmented image regions
CN111815579B (en) Image change detection method, device and computer readable storage medium
CN108564598B (en) Improved online Boosting target tracking method
CN103164856B (en) Video copy and paste blind detection method based on dense scale-invariant feature transform stream
CN107146219B (en) Image significance detection method based on manifold regularization support vector machine
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN111508006A (en) Moving target synchronous detection, identification and tracking method based on deep learning
CN115375737B (en) Target tracking method and system based on adaptive time and serialized space-time characteristics
CN111640138A (en) Target tracking method, device, equipment and storage medium
CN112258403A (en) Method for extracting suspected smoke area from dynamic smoke
Li et al. Coarse-to-fine salient object detection based on deep convolutional neural networks
CN112164093A (en) Automatic person tracking method based on edge features and related filtering
CN112613565B (en) Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN102509308A (en) Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection
CN110688512A (en) Pedestrian image search algorithm based on PTGAN region gap and depth neural network
CN116912184B (en) Weak supervision depth restoration image tampering positioning method and system based on tampering area separation and area constraint loss
CN108765384B (en) Significance detection method for joint manifold sequencing and improved convex hull
CN115311327A (en) Target tracking method and system integrating co-occurrence statistics and fhog gradient features
CN113763474A (en) Scene geometric constraint-based indoor monocular depth estimation method
Duan et al. Bio-inspired visual attention model and saliency guided object segmentation
Yan et al. Building Extraction at Amodal-Instance-Segmentation Level: Datasets and Framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant