CN108920711A - Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide - Google Patents

Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Info

Publication number
CN108920711A
CN108920711A (application CN201810825689.XA)
Authority
CN
China
Prior art keywords
marked
client
mark
unmanned plane
scene image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810825689.XA
Other languages
Chinese (zh)
Other versions
CN108920711B (en)
Inventor
胡天江
周勇
周晗
赵框
唐邓清
常远
周正元
方强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201810825689.XA priority Critical patent/CN108920711B/en
Publication of CN108920711A publication Critical patent/CN108920711A/en
Application granted granted Critical
Publication of CN108920711B publication Critical patent/CN108920711B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft

Abstract

A deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance, in which an administrator client establishes a database system, defines the labeling requirements, and dispatches tasks. Each user logs in to a labeling client, receives labeling tasks and labeling requirements through it, manually labels each scene image to be labeled, stores each labeled scene image in the database system in xml format, and updates the database system in real time. After all scene images to be labeled have been labeled, the auditor logs in to the auditor client, accesses the database system over the network, and audits the labeling results (i.e., the labeled scene images). According to the invention, labeling tasks are issued in a networked manner and the designed auditing method audits the labeling results automatically, greatly improving data labeling efficiency and the reliability of labeling results, and effectively meeting the practical need of deep learning for large-scale sample labeling.

Description

Deep learning label data generation method for UAV take-off and landing guidance
Technical field
The present invention relates generally to the field of guidance system design for autonomous UAV take-off and landing, and in particular to a deep learning label data generation method for UAV take-off and landing guidance.
Background art
A UAV take-off and landing guidance system aims to solve the problem of autonomous take-off and landing in weak-GPS or GPS-denied environments. The guidance system acquires, through a camera, scene images containing the UAV target during take-off and landing, extracts the UAV target region and anchor point coordinates from the images, and resolves the UAV's pose in world coordinates with methods such as computer vision measurement and filtering estimation, thereby guiding the UAV's autonomous take-off and landing. Extracting the UAV target region and anchor point coordinates from the images is thus a necessary function of the guidance system.
Feature-extraction calibration methods based on corners and edges for the UAV target region and anchor points suffer from weak applicability and parameter sensitivity. Deep learning schemes have therefore been proposed to remove the parameter dependence and improve scene applicability. Deep learning methods extract the UAV target and anchor points automatically but require a label data set to be constructed; because deep learning sample data are large in scale, a label data generation tool that is convenient to interact with, efficient to operate, and networked is urgently needed.
Summary of the invention
In view of the deficiencies in the prior art, the present invention provides a deep learning label data generation method for UAV take-off and landing guidance.
To achieve the above technical purpose, the technical scheme of the present invention is as follows.
The deep learning label data generation method for UAV take-off and landing guidance proceeds as follows:
(1) Establish a database system.
The database system is established by the administrator client, which manages it: pictures can be uploaded to the database system, pictures in the database system can be deleted, the pictures saved in the database system can be queried, and annotation results can be exported.
All scene images to be labeled are stored in the database system. The scene images to be labeled include camera-captured scene images containing the UAV target during take-off and landing that have never been labeled, as well as such scene images that have already been manually labeled one or more times.
(2) According to the task requirements, the administrator client determines the UAV target region and the anchor point coordinates to be labeled.
The anchor point coordinates may be chosen as the coordinates of eight anchor points: nose, left wing, right wing, left tailplane, right tailplane, left landing gear, middle landing gear, and right landing gear.
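For illustration only, the eight anchor points listed above could be encoded as an enumeration; the identifiers below are assumptions for the sketch, not names prescribed by the patent:

```python
from enum import IntEnum

class AnchorPoint(IntEnum):
    """The eight UAV anchor points named in step (2); identifiers are illustrative."""
    NOSE = 0
    LEFT_WING = 1
    RIGHT_WING = 2
    LEFT_TAILPLANE = 3
    RIGHT_TAILPLANE = 4
    LEFT_GEAR = 5
    MIDDLE_GEAR = 6
    RIGHT_GEAR = 7
```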
(3) The administrator client dynamically distributes the scene images to be labeled to the labeling clients over the network.
All scene images to be labeled are sorted in ascending order of the number of times they have been manually labeled; for never-labeled camera-captured scene images containing the UAV target during take-off and landing this count is 0.
When distributing scene images to be labeled, the priority order is as follows: never-labeled scene images are first randomly assigned to the labeling clients; the remaining scene images to be labeled are then randomly assigned to the labeling clients in ascending order of their labeling counts, ensuring that all scene images to be labeled can be labeled. Random assignment here means that, in the current round, a scene image to be labeled is randomly assigned to one or more of the labeling clients for labeling. A scene image may therefore be labeled by more than one labeling client in the current round, i.e. some scene images to be labeled may in fact be labeled repeatedly in the current round.
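The distribution rule just described (sort ascending by labeling count, then randomly assign each image to one or more labeling clients) can be sketched as follows; the function name, data shapes, and the `copies` parameter are assumptions for illustration, not part of the patent:

```python
import random

def distribute(images, clients, copies=2, seed=None):
    """Assign each image to `copies` randomly chosen labeling clients,
    visiting images in ascending order of times already labeled,
    so never-labeled images (count 0) are distributed first."""
    rng = random.Random(seed)
    assignments = {c: [] for c in clients}
    # images: dict mapping image id -> number of times already labeled
    for img_id, label_count in sorted(images.items(), key=lambda kv: kv[1]):
        for client in rng.sample(clients, min(copies, len(clients))):
            assignments[client].append(img_id)
    return assignments

tasks = distribute({"img_003": 2, "img_001": 0, "img_002": 1},
                   clients=["clientA", "clientB"], copies=2, seed=42)
```

With two clients and `copies=2`, every image is assigned to both clients, reproducing the intentional repeated labeling of the same image described above.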
(4) Each labeling client receives a labeling task and the labeling requirements. The labeling task is the set of scene images to be labeled that the administrator client distributed to that labeling client; the labeling requirements are the UAV target region and anchor point coordinates to be labeled, as determined in step (2).
Each labeling client manually labels each scene image to be labeled, i.e. draws a box around the UAV target region in each image and marks the anchor point coordinates, then saves each labeled scene image to the database system in xml format, updating the database system in real time.
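The patent stores each labeled image as an xml record but does not specify the schema; a minimal sketch using Python's standard library, with an assumed Pascal-VOC-like layout (all element names are hypothetical), might look like:

```python
import xml.etree.ElementTree as ET

def annotation_xml(image_name, box, anchors):
    """Serialize one labeled scene image: `box` is (xmin, ymin, xmax, ymax)
    for the UAV target region, `anchors` maps anchor-point name -> (x, y).
    The schema is illustrative, not the patent's."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    bnd = ET.SubElement(root, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bnd, tag).text = str(val)
    pts = ET.SubElement(root, "anchors")
    for name, (x, y) in anchors.items():
        p = ET.SubElement(pts, "point", name=name)
        ET.SubElement(p, "x").text = str(x)
        ET.SubElement(p, "y").text = str(y)
    return ET.tostring(root, encoding="unicode")

xml_str = annotation_xml("frame_0001.jpg", (120, 80, 360, 240),
                         {"nose": (240, 100), "left_wing": (150, 160)})
```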
(5) Audit the annotation results.
After all scene images to be labeled have been labeled, the auditor client accesses the database system over the network and audits the annotation results (the labeled scene images).
Auditing may be performed manually or with an automatic auditing method.
The present invention provides an automatic auditing method, as follows.
To exclude the influence of individual samples on the annotation results, statistical averaging is used to reduce the error introduced by individual abnormal samples. The implementation is as follows.
For a labeled scene image saved in the database system, if it has been labeled N times, then N groups of UAV anchor point coordinate samples are available. Let the coordinate of the i-th anchor point extracted in the k-th labeling be $(x_i^k, y_i^k)$. The abscissas are processed as follows.
First obtain the maximum $x_i^{\max}$ and minimum $x_i^{\min}$ of the N abscissas. Divide the interval $[x_i^{\min}, x_i^{\max}]$ into N-1 subintervals, each of length $\Delta x_i = (x_i^{\max} - x_i^{\min})/(N-1)$. The j-th subinterval of the abscissa of the i-th anchor point then begins at $x_{i,j} = x_i^{\min} + (j-1)\Delta x_i$, and the distribution probability of $x_i$ is
$$p(x_{i,j}) = \frac{n_{i,j}}{N}, \qquad (1)$$
where $n_{i,j}$ is the number of the N abscissa samples falling in the j-th subinterval.
After obtaining $p(x_{i,j})$, set a threshold $\bar p$ and reject the coordinate points whose probability is lower than $\bar p$, giving a new data point set $\{x_i^k\}$, $k = 1, \dots, N_p$, where $N_p$ is the size of the new set.
By formula (2), the ensemble average of the N groups of abscissas is
$$\bar x_i = \frac{1}{N_p}\sum_{k=1}^{N_p} x_i^k. \qquad (2)$$
The ensemble average $\bar y_i$ of the N groups of ordinates is obtained by the same method. With the center coordinate $(\bar x_i, \bar y_i)$, take the circle centered at $(\bar x_i, \bar y_i)$ with radius r pixels; an anchor point coordinate $(x_i, y_i)$ obtained from a user is valid only when it satisfies
$$(x_i - \bar x_i)^2 + (y_i - \bar y_i)^2 \le r^2. \qquad (3)$$
When this condition is not met, the user is prompted that the annotation is wrong, and the annotation result is rejected.
Here r is a threshold set according to the required precision.
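The automatic audit above can be sketched as follows: build an (N-1)-bin histogram over the N abscissas per formula (1), drop samples in bins whose empirical probability falls below the threshold, average the survivors per formula (2), and accept a new annotation only if it falls within r pixels of the resulting center per formula (3). This is a sketch under the stated formulas; function names and the example threshold values are assumptions:

```python
def robust_mean(samples, p_bar):
    """Ensemble average after rejecting samples in low-probability
    histogram bins, per formulas (1)-(2): N-1 equal bins over
    [min, max]; keep samples whose bin probability >= p_bar."""
    n = len(samples)
    lo, hi = min(samples), max(samples)
    if hi == lo:                        # all annotators agree exactly
        return lo
    width = (hi - lo) / (n - 1)         # subinterval length delta-x
    bins = [0] * (n - 1)
    idx = []
    for s in samples:
        j = min(int((s - lo) / width), n - 2)   # clamp s == hi into last bin
        bins[j] += 1
        idx.append(j)
    kept = [s for s, j in zip(samples, idx) if bins[j] / n >= p_bar]
    return sum(kept) / len(kept)

def audit(xs, ys, x_new, y_new, p_bar=0.25, r=5.0):
    """Accept (x_new, y_new) only if it lies inside the radius-r circle
    around the robust center, per formula (3)."""
    cx, cy = robust_mean(xs, p_bar), robust_mean(ys, p_bar)
    return (x_new - cx) ** 2 + (y_new - cy) ** 2 <= r ** 2
```

For example, with five annotations of which one is an outlier, `robust_mean([100, 101, 99, 100, 150], 0.25)` discards the outlier bin and averages the remaining four samples.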
Compared with the prior art, the present invention achieves the following technical effects:
The deep learning label data generation system for UAV take-off and landing guidance designed by the present invention issues labeling tasks in a networked manner and audits annotation results automatically with a designed algorithm, greatly improving data annotation efficiency and the reliability of annotation results, and effectively meeting the practical demand of deep learning for large-scale sample labeling. The main features of the invention are: first, tasks are issued over the network, fully exploiting the advantages of open crowdsourcing, so that the pool of labeling users is broader; second, the labeling priority of image sources is set more scientifically: the system counts how many times the same image source has been labeled and gives priority to less-labeled image sources, preventing some image sources from never being labeled; third, the system has an auditing function, rejecting mislabeled results through manual or algorithmic auditing so that the label data are more reliable. The label data generation tool designed by the present invention has important application value for quickly and accurately obtaining deep learning label data sets.
Brief description of the drawings
Fig. 1 is a schematic diagram of the system structure of the invention.
Fig. 2 is a flow chart of the invention.
Detailed description of the embodiments
The technical solution of the present invention is further presented and explained below with reference to the accompanying drawings.
Referring to Figs. 1 and 2, the deep learning label data generation method for UAV take-off and landing guidance proceeds as follows:
(1) Establish a database system.
The administrator logs in to the administrator client and establishes the database system through it. The administrator client manages the database system: it can upload pictures to the database system, delete pictures in the database system, query the pictures saved in the database system, and export annotation results.
All scene images to be labeled are stored in the database system. The scene images to be labeled include camera-captured scene images containing the UAV target during take-off and landing that have never been labeled, as well as such scene images that have already been manually labeled one or more times.
(2) According to the task requirements, the administrator defines the labeling requirements through the administrator client, i.e. determines the UAV target region and the anchor point coordinates to be labeled.
The anchor point coordinates may be chosen as the coordinates of eight anchor points: nose, left wing, right wing, left tailplane, right tailplane, left landing gear, middle landing gear, and right landing gear.
(3) Distribute tasks.
The administrator client dynamically distributes the scene images to be labeled to the labeling clients over the network.
All scene images to be labeled are sorted in ascending order of the number of times they have been manually labeled; for never-labeled camera-captured scene images containing the UAV target during take-off and landing this count is 0.
When distributing scene images to be labeled, the priority order is as follows: never-labeled scene images are first randomly assigned to the labeling clients; the remaining scene images to be labeled are then randomly assigned to the labeling clients in ascending order of their labeling counts, ensuring that all scene images to be labeled can be labeled. Random assignment here means that, in the current round, a scene image to be labeled is randomly assigned to one or more of the labeling clients for labeling. A scene image may therefore be labeled by more than one labeling client in the current round, i.e. some scene images to be labeled may in fact be labeled repeatedly in the current round.
(4) Each user logs in to a labeling client and receives, through it, a labeling task and the labeling requirements. The labeling task is the set of scene images to be labeled that the administrator client distributed to that labeling client; the labeling requirements are the UAV target region and anchor point coordinates to be labeled, as determined in step (2).
Each labeling client manually labels each scene image to be labeled, i.e. draws a box around the UAV target region in each image and marks the anchor point coordinates, then saves each labeled scene image to the database system in xml format, updating the database system in real time.
(5) Audit the annotation results.
After all scene images to be labeled have been labeled, the auditor logs in to the auditor client, accesses the database system over the network through it, and audits the annotation results (the labeled scene images).
Auditing may be performed manually or with an automatic auditing method.
The present invention provides an automatic auditing method, as follows.
To exclude the influence of individual samples on the annotation results, statistical averaging is used to reduce the error introduced by individual abnormal samples. The implementation is as follows.
For a labeled scene image saved in the database system, if it has been labeled N times, then N groups of UAV anchor point coordinate samples are available. Let the coordinate of the i-th anchor point extracted in the k-th labeling be $(x_i^k, y_i^k)$. The abscissas are processed as follows.
First obtain the maximum $x_i^{\max}$ and minimum $x_i^{\min}$ of the N abscissas. Divide the interval $[x_i^{\min}, x_i^{\max}]$ into N-1 subintervals, each of length $\Delta x_i = (x_i^{\max} - x_i^{\min})/(N-1)$. The j-th subinterval of the abscissa of the i-th anchor point then begins at $x_{i,j} = x_i^{\min} + (j-1)\Delta x_i$, and the distribution probability of $x_i$ is
$$p(x_{i,j}) = \frac{n_{i,j}}{N}, \qquad (1)$$
where $n_{i,j}$ is the number of the N abscissa samples falling in the j-th subinterval.
After obtaining $p(x_{i,j})$, set a threshold $\bar p$ and reject the coordinate points whose probability is lower than $\bar p$, giving a new data point set $\{x_i^k\}$, $k = 1, \dots, N_p$, where $N_p$ is the size of the new set.
By formula (2), the ensemble average of the N groups of abscissas is
$$\bar x_i = \frac{1}{N_p}\sum_{k=1}^{N_p} x_i^k. \qquad (2)$$
The ensemble average $\bar y_i$ of the N groups of ordinates is obtained by the same method. With the center coordinate $(\bar x_i, \bar y_i)$, take the circle centered at $(\bar x_i, \bar y_i)$ with radius r pixels; an anchor point coordinate $(x_i, y_i)$ obtained from a user is valid only when it satisfies
$$(x_i - \bar x_i)^2 + (y_i - \bar y_i)^2 \le r^2. \qquad (3)$$
When this condition is not met, the user is prompted that the annotation is wrong, and the annotation result is rejected.
Here r is a threshold set according to the required precision.
The deep learning label data generation system for UAV take-off and landing guidance comprises an administrator client, labeling clients, and an auditor client, connected to one another through network communication.
The administrator client establishes the database system and stores into it all camera-captured scene images to be labeled that contain the UAV target during take-off and landing. According to the task requirements, it also determines the UAV target region and anchor point coordinates to be labeled, and issues the labeling task and labeling requirements to each labeling client through the network. The labeling task is the set of scene images to be labeled that the administrator client distributes to each labeling client; the labeling requirements are the number and type of anchor points to be labeled per image and the number of times each image is to be labeled, established by the administrator client according to the task requirements when issuing the task. The labeled scene images, i.e. the annotation results, are saved to the database system in xml format, updating the database system in real time. The correspondence between labeling clients and annotation results can be determined from the xml files saved in the database system.
The labeling client receives, over the network, the labeling task and labeling requirements issued by the administrator client. The user logs in to the web address with the browser on the labeling client and manually labels each scene image to be labeled in the labeling task, i.e. draws a box around the UAV target region in each image and marks the anchor point coordinates, saving the annotation results in xml format. All annotation results of each labeling client are sent to the database system established by the administrator client for storage.
The auditor client accesses the database system and audits all annotation results saved in it. Auditing may be performed manually or with an automatic algorithmic auditing method.
The above are merely preferred embodiments of the present invention and are not intended to limit the invention. For those skilled in the art, the invention may be variously modified and varied. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (5)

1. A deep learning label data generation method for UAV take-off and landing guidance, characterized by comprising the following steps:
(1) establish a database system;
the database system is established by the administrator client, and all scene images to be labeled are stored in the database system, the scene images to be labeled including camera-captured scene images containing the UAV target during take-off and landing that have never been labeled, as well as such scene images that have already been manually labeled one or more times;
(2) according to the task requirements, the administrator client determines the UAV target region and the anchor point coordinates to be labeled;
(3) the administrator client dynamically distributes the scene images to be labeled to the labeling clients over the network;
all scene images to be labeled are sorted in ascending order of the number of times they have been manually labeled, the count for never-labeled camera-captured scene images containing the UAV target during take-off and landing being 0;
when distributing scene images to be labeled, the priority order is as follows: never-labeled scene images are first randomly assigned to the labeling clients; the remaining scene images to be labeled are then randomly assigned to the labeling clients in ascending order of their labeling counts, ensuring that all scene images to be labeled can be labeled; random assignment means that, in the current round, a scene image to be labeled is randomly assigned to one or more of the labeling clients for labeling;
(4) each labeling client receives a labeling task and the labeling requirements, the labeling task being the set of scene images to be labeled that the administrator client distributed to that labeling client, and the labeling requirements being the UAV target region and anchor point coordinates to be labeled as determined in step (2);
each labeling client manually labels each scene image to be labeled, i.e. draws a box around the UAV target region in each image and marks the anchor point coordinates, and saves each labeled scene image to the database system in xml format, updating the database system in real time;
(5) audit the annotation results;
after all scene images to be labeled have been labeled, the auditor client accesses the database system over the network and audits the annotation results, i.e. the labeled scene images.
2. The deep learning label data generation method for UAV take-off and landing guidance according to claim 1, characterized in that: the administrator client manages the database system, and can upload pictures to the database system, delete pictures in the database system, query the pictures saved in the database system, and export annotation results.
3. The deep learning label data generation method for UAV take-off and landing guidance according to claim 1, characterized in that: in step (2), the anchor point coordinates are selected as the coordinates of multiple anchor points among the nose, left wing, right wing, left tailplane, right tailplane, left landing gear, middle landing gear, and right landing gear.
4. The deep learning label data generation method for UAV take-off and landing guidance according to claim 1, characterized in that: the auditing in step (5) uses manual auditing or an automatic auditing method.
5. The deep learning label data generation method for UAV take-off and landing guidance according to claim 4, characterized in that the automatic auditing method in step (5) is as follows:
for a labeled scene image saved in the database system, if it has been labeled N times, then N groups of UAV anchor point coordinate samples are available; the coordinate of the i-th anchor point extracted in the k-th labeling is $(x_i^k, y_i^k)$, and the abscissas are processed as follows:
first obtain the maximum $x_i^{\max}$ and minimum $x_i^{\min}$ of the N abscissas; divide the interval $[x_i^{\min}, x_i^{\max}]$ into N-1 subintervals, each of length $\Delta x_i = (x_i^{\max} - x_i^{\min})/(N-1)$; the j-th subinterval of the abscissa of the i-th anchor point then begins at $x_{i,j} = x_i^{\min} + (j-1)\Delta x_i$, and the distribution probability of $x_i$ is
$$p(x_{i,j}) = \frac{n_{i,j}}{N}, \qquad (1)$$
where $n_{i,j}$ is the number of the N abscissa samples falling in the j-th subinterval;
after obtaining $p(x_{i,j})$, set a threshold $\bar p$ and reject the coordinate points whose probability is lower than $\bar p$, giving a new data point set $\{x_i^k\}$, $k = 1, \dots, N_p$, where $N_p$ is the size of the new set;
by formula (2), the ensemble average of the N groups of abscissas is
$$\bar x_i = \frac{1}{N_p}\sum_{k=1}^{N_p} x_i^k, \qquad (2)$$
and the ensemble average $\bar y_i$ of the N groups of ordinates is obtained by the same method;
with the center coordinate $(\bar x_i, \bar y_i)$, take the circle centered at $(\bar x_i, \bar y_i)$ with radius r pixels; an anchor point coordinate $(x_i, y_i)$ obtained from a user is valid only when it satisfies
$$(x_i - \bar x_i)^2 + (y_i - \bar y_i)^2 \le r^2, \qquad (3)$$
otherwise the user is prompted that the annotation is wrong and the annotation result is rejected;
where r is a threshold set according to the required precision.
CN201810825689.XA 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide Active CN108920711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810825689.XA CN108920711B (en) 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810825689.XA CN108920711B (en) 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Publications (2)

Publication Number Publication Date
CN108920711A (en) 2018-11-30
CN108920711B (en) 2021-09-24

Family

ID=64416638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810825689.XA Active CN108920711B (en) 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Country Status (1)

Country Link
CN (1) CN108920711B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058756A (en) * 2019-04-19 2019-07-26 北京朗镜科技有限责任公司 A kind of mask method and device of image pattern
CN112347947A (en) * 2020-11-10 2021-02-09 厦门长江电子科技有限公司 Image data processing system and method integrating intelligent detection and automatic test
CN112990202A (en) * 2021-05-08 2021-06-18 中国人民解放军国防科技大学 Scene graph generation method and system based on sparse representation
CN113010739A (en) * 2021-03-18 2021-06-22 北京奇艺世纪科技有限公司 Video tag auditing method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2778819A1 (en) * 2013-03-12 2014-09-17 Thomson Licensing Method for shooting a film performance using an unmanned aerial vehicle
US20170053538A1 (en) * 2014-03-18 2017-02-23 Sri International Real-time system for multi-modal 3d geospatial mapping, object recognition, scene annotation and analytics
CN108230240A (en) * 2017-12-31 2018-06-29 厦门大学 It is a kind of that the method for position and posture in image city scope is obtained based on deep learning


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058756A (en) * 2019-04-19 2019-07-26 北京朗镜科技有限责任公司 A kind of mask method and device of image pattern
CN112347947A (en) * 2020-11-10 2021-02-09 厦门长江电子科技有限公司 Image data processing system and method integrating intelligent detection and automatic test
CN113010739A (en) * 2021-03-18 2021-06-22 北京奇艺世纪科技有限公司 Video tag auditing method and device and electronic equipment
CN113010739B (en) * 2021-03-18 2024-01-26 北京奇艺世纪科技有限公司 Video tag auditing method and device and electronic equipment
CN112990202A (en) * 2021-05-08 2021-06-18 中国人民解放军国防科技大学 Scene graph generation method and system based on sparse representation

Also Published As

Publication number Publication date
CN108920711B (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN108920711A (en) Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide
CN104915643B A pedestrian re-identification method based on deep learning
CN108596277 A vehicle license plate recognition method, apparatus and storage medium
CN102629441B Avionic display test system
CN109559320 Method and system for semantic mapping in visual SLAM based on dilated-convolution deep neural networks
CN110581898 Internet of things data terminal system based on 5G and edge computing
CN110168607 System and method for automatic table game activity recognition
CN106897681 A remote sensing image comparative analysis method and system
CN108198262 Attendance system and implementation method
CN112132197 Model training method, image processing method, device, computer equipment and storage medium
CN107545538 A panoramic image stitching method and device based on an unmanned aerial vehicle
CN102768757 Remote sensing image color correcting method based on image type analysis
CN106339366 Method and apparatus of demand identification based on artificial intelligence
CN108960404 An image-based crowd counting method and device
Freddy et al. How many mule deer are there? Challenges of credibility in Colorado
CN112966555B Remote sensing image airplane identification method based on deep learning and component prior
CN106980817 A horror video identification method based on the Caffe framework
CN108447064 An image processing method and device
CN108229274 Method and apparatus for multilayer neural network model training and road feature recognition
CN111816205B Airplane audio-based intelligent recognition method for airplane models
CA3047249A1 Platform for training and/or assistance with air control through an electronic air traffic control system, associated process
CN110334584 A gesture recognition method based on region-based fully convolutional networks
CN109816714 A point cloud object type recognition method based on 3D convolutional neural networks
CN109670423 A deep-learning-based image recognition system, method and medium
CN110390724B SLAM method with instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant