CN106886216A - Robot automatic tracking method and system based on RGBD face detection - Google Patents
Robot automatic tracking method and system based on RGBD face detection
- Publication number
- CN106886216A CN106886216A CN201710028570.5A CN201710028570A CN106886216A CN 106886216 A CN106886216 A CN 106886216A CN 201710028570 A CN201710028570 A CN 201710028570A CN 106886216 A CN106886216 A CN 106886216A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- human body
- robot
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0225—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
Abstract
The present invention relates to a robot automatic tracking method and system based on RGBD face detection. The system includes an RGBD camera, an image algorithm processing platform, and a robot motion control unit. The RGBD camera collects RGB images and depth images of the human body for recognition of the upper body and the face. The image algorithm processing platform receives the human body images collected by the RGBD camera, performs algorithmic processing to recognize the face, and calculates the distance between the robot and the face. The robot motion control unit obtains the longitudinal forward-backward distance of the face from the image algorithm processing platform and, according to the distance change information, controls the robot to move so as to follow the movement of the face. The present invention solves the problems of face tracking accuracy and accurate longitudinal distance measurement, so that the robot follows the movement of a face with higher accuracy and more human-like behavior.
Description
Technical field
The present invention relates to the technical field of robot control, and in particular to a robot automatic tracking method and system based on RGBD face detection.
Background technology
With the development of computer science and automatic control technology, more and more intelligent robots appear in production and daily life, and the vision system, as an important subsystem of an intelligent robot system, is receiving increasing attention. An intelligent robot vision system takes the intelligent behavior of humans as its model and enables the robot to perceive its environment, adapt to complex scenes, interact and cooperate with people, form and integrate concepts, acquire knowledge and reason, and make autonomous, high-level decisions. In face detection and following, the intelligent part of the robot is mainly used for fast and accurate face detection and tracking, giving a better experience in human-machine interaction.
Chinese patent CN103093212A discloses a method and device for capturing face images based on face detection and tracking. The system includes a detection module, a tracking module, a judgment module, and a capture module. A cascade classifier performs face detection on the image to be detected; when a face target is detected, the face target is tracked using a mean-shift tracking algorithm; when the face target leaves the detection area, whether the face detection and the face tracking in the same frame correspond to the same face target is judged from the position of the target in each frame, and the frames in which detection and tracking correspond to the same face target are selected; among the selected frames, the overlap between the detection window and the tracking window in the same frame is computed, and the face image obtained by detection in the frame with the largest overlap is taken as the captured face image. The disadvantage of this approach is that it uses a cascade classifier alone, whose false detection rate can be very high, which is unfavorable in complex environments; moreover, the 2D camera used provides no image depth information, so the face size can only be estimated poorly as the face moves forward and backward, and the actual forward-backward distance cannot be measured.
Chinese patent CN101477616 discloses a face detection and tracking method, executed by a computer or a microprocessor with computing capability, to recognize the faces in image frames and their positions. Face detection is performed, faces are tracked in every frame, and the face positions are recorded; after a number of frames, given the face positions already found, face detection is performed again on the image frame so that newly appearing faces can be found quickly. Because this method re-detects only every few frames, the tracked target is lost when it moves too fast.
The prior art therefore needs improvement.
Summary of the invention
The present invention discloses a robot automatic tracking method and system based on RGBD face detection, which improve the accuracy with which a robot tracks a face and measures its longitudinal distance.
In one aspect, an embodiment of the present invention provides a robot automatic tracking method based on RGBD face detection, including:
obtaining human body sample image data, and testing the technical indicators of the classifier with the human body sample images;
performing upper-body detection with a cascade classifier, and performing numerical analysis with the depth image data to confirm the detected human body image;
performing face detection within the image region where the upper body was detected, using the cascade classifier combined with the image depth information, until a face is detected, and confirming the detected face image, where the cascade classifier refers to several strong classifiers connected in series, each strong classifier being a weighted combination of several weak classifiers;
after the face region is detected, extracting the corresponding region in the depth image with the control module and performing a distance operation to obtain the accurate distance between the person and the robot.
In another embodiment of the above robot automatic tracking method based on RGBD face detection, obtaining the human body sample image data and testing the technical indicators of the classifier with the human body sample images includes:
preparing training data and making human body image training samples, where the training samples include negative samples and positive samples; a negative sample is an image that does not contain the target object, and a positive sample is an image of the object to be detected and must not contain any negative sample;
testing the classifier with images outside the training set, and analyzing the technical indicators of the classifier from the test results, where the technical indicators include the detection success rate and false detection rate for upper-body images, and the success rate and false detection rate of face detection;
adjusting the technical parameters of the classifier, where the adjusted parameters include: the number of positive samples used in training, the number of negative samples used per stage of classifier training, the number of classifier stages, the haar feature type, and the minimum detection rate of each stage of the classifier.
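As a concrete illustration of how the test indicators above might be computed, the following sketch (an assumption of this edit, not part of the patent) matches detections against ground-truth boxes by overlap and reports the detection success rate and the false detection rate; the 0.5 overlap threshold is a common but arbitrary choice.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def detection_metrics(truths, detections, thresh=0.5):
    """Detection success rate and false-detection rate over one test set."""
    # A ground-truth box counts as detected if some detection overlaps it enough.
    matched = sum(1 for t in truths
                  if any(iou(t, d) >= thresh for d in detections))
    # A detection counts as false if it overlaps no ground-truth box enough.
    false_dets = sum(1 for d in detections
                     if all(iou(t, d) < thresh for t in truths))
    success_rate = matched / len(truths) if truths else 1.0
    false_rate = false_dets / len(detections) if detections else 0.0
    return success_rate, false_rate
```

For example, with two ground-truth upper bodies and two detections of which only one is correct, the function reports a 50% success rate and a 50% false detection rate.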
In another embodiment of the above robot automatic tracking method based on RGBD face detection, performing upper-body detection with the classifier and performing numerical analysis with the depth image data to confirm the detected human body image includes:
first performing grayscale conversion on the RGB image, turning the color image into a grayscale image with 256 levels (0 to 255);
enhancing the brightness of the grayscale image with histogram equalization;
selecting the image detection area and computing the integral image of the selected area: with the integral image, the sum of the pixel gray values inside a rectangle need not be recounted each time a rectangular feature is computed; only the integral values at the rectangle's corner points are needed to obtain the rectangular feature value, and the computation time does not change with the size of the rectangle;
scanning the entire image detection area to obtain the detection factor coefficients;
multiplying the first-stage detection result by the detection factor coefficient to obtain one weak classifier result, and cascading the weak classifiers into the strong classifier detection result;
scaling the original image by half and scanning the image to be detected with Haar windows of a fixed size;
computing the mean and variance within windows of different sizes while traversing the image to be detected; when the set conditions are met, the window is considered an upper-body image;
computing the Haar-like features of the candidate region and passing these features to the cascaded adaboost classifier for further judgment;
if three windows simultaneously detect an upper-body image, taking the cascade classifier output as true and outputting the upper-body coordinates; otherwise taking the cascade classifier output as false;
performing the distance operation on all pixel values in the depth map of the upper-body region:
Distance (x, y)=Pix (x, y)/1000;
performing a consistency analysis on the measured distance data: if the data are consistent, the target detected in the RGB image is confirmed to exist in the depth image; otherwise it does not exist in the depth image.
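The last two steps, the per-pixel conversion Distance(x, y) = Pix(x, y)/1000 and the consistency analysis, can be sketched as follows. The patent does not specify the consistency criterion; the maximum-spread threshold used here is an illustrative assumption.

```python
def depth_consistency(depth_region_mm, max_spread_m=0.5):
    """Convert raw depth values (assumed millimetres) to metres via
    Distance = Pix/1000, then check that the region's distances are
    consistent (spread below a threshold), confirming that the target
    detected in the RGB image also exists in the depth image."""
    # Zero depth pixels are treated as invalid readings and skipped.
    dists = [pix / 1000.0 for row in depth_region_mm for pix in row if pix > 0]
    if not dists:
        return False, 0.0
    mean = sum(dists) / len(dists)
    spread = max(dists) - min(dists)
    return spread <= max_spread_m, mean
```

A region whose depth pixels all lie around 1.5 m passes the check; a region mixing 0.5 m and 4 m readings fails it, so the RGB detection would be rejected.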
In another embodiment of the above robot automatic tracking method based on RGBD face detection, performing face detection within the image region where the upper body was detected, using the classifier combined with the image depth information, until a face is detected, and confirming the detected face image includes:
pre-loading the face detection classifier data and setting the scanning window size, where the scanning window size is not fixed and is determined by the requirements of the actual project;
scanning the image with the window at the set scaling factor until a face is detected.
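The scaled scanning described above can be sketched as a schedule of window sizes, each a fixed factor larger than the last. The base window size of 24 pixels and the scale factor of 1.25 are illustrative assumptions, since the patent leaves the window size to the needs of the project.

```python
def scan_scales(img_w, img_h, base=24, scale=1.25):
    """Window sizes used to scan the image for faces: start from the base
    window, grow by the set scaling factor, and stop once the window no
    longer fits inside the image."""
    sizes = []
    w = base
    while w <= min(img_w, img_h):
        sizes.append(w)
        w = int(w * scale)  # next, larger scanning window
    return sizes
```

For a 100 x 80 image this yields the window sizes 24, 30, 37, 46, 57, and 71; at each size the window is slid over the image and the classifier evaluated until a face is found.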
In another embodiment of the above robot automatic tracking method based on RGBD face detection, extracting the corresponding region in the depth image and performing the distance operation includes:
obtaining the distance of each pixel within the face region in the depth map:
FaceDis (x, y)=image (x, y)/1000;
summing the distances of all the computed pixels and taking the average to obtain the actual distance between the person and the robot.
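The two steps above amount to converting each face-region depth pixel to metres and averaging. A minimal sketch, assuming the depth map stores millimetres:

```python
def face_distance_m(face_depth_mm):
    """FaceDis(x, y) = image(x, y)/1000 for each pixel of the face region,
    then the per-pixel distances are summed and averaged to give the
    actual person-robot distance in metres."""
    dists = [pix / 1000.0 for row in face_depth_mm for pix in row]
    return sum(dists) / len(dists)
```

Averaging over the whole region, rather than reading a single pixel, makes the longitudinal distance robust to sensor noise at individual depth pixels.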
An embodiment of the present invention also provides a robot automatic tracking system based on RGBD face detection, including: an RGBD camera, an image algorithm processing platform, and a robot motion control unit.
The RGBD camera collects RGB images and depth images of the human body for recognition of the upper body and the face. The image algorithm processing platform receives the human body images collected by the RGBD camera, performs algorithmic processing to recognize the face, and calculates the distance between the robot and the face. The robot motion control unit obtains the longitudinal forward-backward distance of the face from the image algorithm processing platform and, according to the distance change information, controls the robot to move so as to follow the movement of the face.
The RGBD camera includes an RGB camera and a depth camera. The RGB camera shoots the RGB image of the human body, and the depth camera shoots the depth image of the human body; after camera calibration, the image shot by the depth camera and the image shot by the RGB camera are registered.
The RGBD camera is an Intel advanced processing camera.
In another embodiment of the above robot automatic tracking system based on RGBD face detection, the RGBD camera also includes a cascade classifier, which detects the upper-body image in the images shot by the RGB camera; when the upper body is detected in the image, face detection is performed on the upper-body region of the image, again using the cascade classifier.
In another embodiment of the above robot automatic tracking system based on RGBD face detection, the cascade classifier includes several strong classifiers connected in series, and each strong classifier includes several weak classifiers and is formed as their weighted combination.
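The weighted combination of weak classifiers that forms one strong classifier can be sketched as a thresholded weighted vote. The decision-stump form of the weak classifiers and the particular weights here are illustrative assumptions in the spirit of adaboost, not values from the patent.

```python
def strong_classify(features, weak_classifiers, threshold):
    """A strong classifier as the weighted combination of weak classifiers:
    each weak classifier (weight alpha, stump threshold theta) votes 0/1 on
    its feature, the weighted votes are summed, and the sum is compared
    with the strong classifier's threshold."""
    score = sum(alpha * (1 if features[i] >= theta else 0)
                for i, (alpha, theta) in enumerate(weak_classifiers))
    return score >= threshold
```

With weights (0.5, 0.3, 0.2), two agreeing weak votes already exceed a 0.6 threshold, while a single vote does not, which is how several weak classifiers combine into one reliable strong decision.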
In another embodiment of the above robot automatic tracking system based on RGBD face detection, the image algorithm processing platform performs the image algorithm processing using haar features and the adaboost algorithm together with an analysis of the target depth information, including:
first performing grayscale conversion on the RGB image, turning the color image into a grayscale image with 256 levels (0 to 255);
enhancing the brightness of the grayscale image with histogram equalization;
selecting the image detection area and computing the integral image of the selected area: with the integral image, the sum of the pixel gray values inside a rectangle need not be recounted each time a rectangular feature is computed; only the integral values at the rectangle's corner points are needed to obtain the rectangular feature value, and the computation time does not change with the size of the rectangle;
scanning the entire image detection area to obtain the detection factor coefficients;
multiplying the first-stage detection result by the detection factor coefficient to obtain one weak classifier result, and cascading the weak classifiers into the strong classifier detection result;
scaling the original image by half and scanning the image to be detected with Haar windows of a fixed size;
computing the mean and variance within windows of different sizes while traversing the image to be detected; when the set conditions are met, the window is considered an upper-body image;
computing the Haar-like features of the candidate region and passing these features to the cascaded adaboost classifier for further judgment;
if three windows simultaneously detect an upper-body image, taking the cascade classifier output as true and outputting the upper-body coordinates; otherwise taking the cascade classifier output as false;
performing the distance operation on all pixel values in the depth map of the upper-body region:
Distance (x, y)=Pix (x, y)/1000;
performing a consistency analysis on the measured distance data: if the data are consistent, the target detected in the RGB image is confirmed to exist in the depth image; otherwise it does not exist in the depth image.
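The integral-image trick described in the steps above, obtaining any rectangle sum, and hence any rectangular feature value, from a few corner lookups regardless of rectangle size, can be sketched as:

```python
def integral_image(img):
    """Summed-area table with a zero first row and column:
    ii[y][x] = sum of img over the rectangle [0..y-1] x [0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum inside a rectangle from four integral-image lookups;
    the cost does not change with the rectangle's size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Once the integral image is built in a single pass, every rectangular feature evaluated during window scanning costs only a handful of additions, which is what makes dense haar-feature scanning feasible.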
In another embodiment of the above robot automatic tracking system based on RGBD face detection, the robot motion control unit extracts the corresponding region in the depth image and performs the distance operation, which includes:
obtaining the distance of each pixel within the face region in the depth map:
FaceDis (x, y)=image (x, y)/1000;
summing the distances of all the computed pixels and taking the average to obtain the actual distance between the person and the robot;
controlling the robot's behavior according to the computed actual distance between the person and the robot.
Compared with the prior art, the present invention has the following advantages:
In the robot automatic tracking method and system based on RGBD face detection of the present invention, the RGB camera of the RGBD camera first collects RGB images, and the cascade classifier performs upper-body detection on the image; after the image algorithm processing platform detects the upper-body image, face detection is performed on the upper-body region of the image, again using the cascade classifier. After a face is detected, the region holding the face's depth information in the image formed by the depth camera is determined from the output coordinates, the face depth information is extracted, and the distance operation is performed to obtain the longitudinal forward-backward distance of the face. The robot motion control unit controls the robot to move according to the distance change information so as to accurately follow the movement of the face. The present invention solves the problems of face tracking accuracy and accurate longitudinal distance measurement, so that the robot follows the movement of a face with higher accuracy and more human-like behavior.
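One simple way the robot motion control unit could turn the distance change information into motion is a proportional controller on the measured face distance, sketched below. The target following distance, gain, and speed limit are illustrative assumptions, not values from the patent.

```python
def follow_velocity(face_dist_m, target_m=1.0, gain=0.8, v_max=0.5):
    """Longitudinal velocity command (m/s): move forward when the face is
    farther than the target following distance, backward when it is
    nearer, proportionally to the error and clamped to a speed limit."""
    v = gain * (face_dist_m - target_m)
    return max(-v_max, min(v_max, v))
```

Fed with the averaged face distance each frame, the robot advances when the person walks away, backs up when the person approaches, and holds position at the target distance.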
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings used in the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a structural schematic diagram of one embodiment of the robot automatic tracking system based on RGBD face detection of the present invention.
Fig. 2 is a flowchart of one embodiment of the robot automatic tracking method based on RGBD face detection of the present invention.
Fig. 3 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection of the present invention.
Fig. 4 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection of the present invention.
Fig. 5 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection of the present invention.
Fig. 6 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection of the present invention.
In the figures: 1 RGBD camera, 11 RGB camera, 12 depth camera, 13 cascade classifier, 2 image algorithm processing platform, 3 robot motion control unit.
Specific embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work belong to the scope of protection of the present invention.
Fig. 1 is a structural schematic diagram of one embodiment of the robot automatic tracking system based on RGBD face detection of the present invention. As shown in Fig. 1, the system includes:
RGBD camera 1, image algorithm processing platform 2, and robot motion control unit 3.
The RGBD camera 1 collects RGB images and depth images of the human body for recognition of the upper body and the face. The image algorithm processing platform 2 receives the human body images collected by RGBD camera 1, performs algorithmic processing to recognize the face, and calculates the distance between the robot and the face. The robot motion control unit 3 obtains the longitudinal forward-backward distance of the face from image algorithm processing platform 2 and controls the robot to move according to the distance change information so as to follow the movement of the face.
The RGBD camera 1 includes RGB camera 11 and depth camera 12. The RGB camera 11 shoots the RGB image of the human body; the depth camera 12 shoots the depth image of the human body. After camera calibration, the image shot by depth camera 12 and the image shot by the RGB camera are registered.
The RGBD camera 1 also includes cascade classifier 13, which detects the upper-body image in the images shot by RGB camera 11; when the upper body is detected in the image, face detection is performed on the upper-body region of the image, again using cascade classifier 13.
The RGBD camera 1 is an Intel advanced processing camera.
The cascade classifier 13 includes several strong classifiers connected in series, and each strong classifier includes several weak classifiers and is formed as their weighted combination.
The image algorithm processing platform 2 performs the image algorithm processing using haar features and the adaboost algorithm together with an analysis of the target depth information, including:
first performing grayscale conversion on the RGB image, turning the color image into a grayscale image with 256 levels (0 to 255);
enhancing the brightness of the grayscale image with histogram equalization;
selecting the image detection area and computing the integral image of the selected area: with the integral image, the sum of the pixel gray values inside a rectangle need not be recounted each time a rectangular feature is computed; only the integral values at the rectangle's corner points are needed to obtain the rectangular feature value, and the computation time does not change with the size of the rectangle;
scanning the entire image detection area to obtain the detection factor coefficients;
multiplying the first-stage detection result by the detection factor coefficient to obtain one weak classifier result, and cascading the weak classifiers into the strong classifier detection result;
scaling the original image by half and scanning the image to be detected with Haar windows of a fixed size;
computing the mean and variance within windows of different sizes while traversing the image to be detected; when the set conditions are met, the window is considered an upper-body image;
computing the Haar-like features of the candidate region and passing these features to the cascaded adaboost classifier for further judgment;
if three windows simultaneously detect an upper-body image, taking the output of cascade classifier 13 as true and outputting the upper-body coordinates; otherwise taking the output of cascade classifier 13 as false;
performing the distance operation on all pixel values in the depth map of the upper-body region:
Distance (x, y)=Pix (x, y)/1000;
performing a consistency analysis on the measured distance data: if the data are consistent, the target detected in the RGB image is confirmed to exist in the depth image; otherwise it does not exist in the depth image.
The robot motion control unit 3 extracts the corresponding region in the depth image and performs the distance operation, which includes:
obtaining the distance of each pixel within the face region in the depth map:
FaceDis (x, y)=image (x, y)/1000;
summing the distances of all the computed pixels and taking the average to obtain the actual distance between the person and the robot;
controlling the robot's behavior according to the computed actual distance between the person and the robot.
Fig. 2 is a flowchart of one embodiment of the robot automatic tracking method based on RGBD face detection of the present invention. As shown in Fig. 2, the method includes:
10. Obtain human body sample image data, and test the technical indicators of the classifier with the human body sample images. The human body sample image data is shot by RGBD camera 1, and the technical indicators of the classifier include the detection success rate and false detection rate for the upper body, and the success rate and false detection rate of face detection.
20. Perform upper-body detection with cascade classifier 13, and perform numerical analysis with the depth image data to confirm the detected human body image. The cascade classifier 13 refers to several strong classifiers connected in series, each strong classifier being a weighted combination of several weak classifiers.
30. Perform face detection within the image region where the upper body was detected, using cascade classifier 13 combined with the image depth information, until a face is detected, and confirm the detected face image.
40. After the face region is detected, extract the corresponding region in the depth image with the control module and perform the distance operation to obtain the accurate distance between the person and the robot.
Performing upper-body detection with the classifier and performing numerical analysis with the depth image data to confirm the detected human body image uses the cascade classification method, in which several strong classifiers are connected in series and each strong classifier is a weighted combination of several weak classifiers.
For example, for upper-body image detection, 10 weak classifiers are concatenated to form one strong classifier of the cascade. Because each strong classifier discriminates negative samples very accurately, once the target position under test is found to be a negative sample, the following strong classifiers are not called, which reduces the detection time. Because most of the regions to be detected in an image are negative samples, the cascade classifier discards the great majority of negative samples with the cheap early stages, so the cascade classifier is very fast. Only positive samples are passed on to the next strong classifier for further checking, which keeps the probability that a finally output positive sample is a false positive very low. The cascade-structured classifier is made up of multiple stages, each more complex than the previous one; each stage lets all positive examples through while filtering out most negative examples, so each stage has fewer candidates to examine than the previous one, eliminating a large number of non-target regions and greatly increasing detection speed.
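The early-rejection behaviour described above, where a negative window is discarded by the first stage that rejects it so the later, more complex stages never run, can be sketched as:

```python
def cascade_detect(window, stages):
    """Cascade of strong classifiers: every stage must accept the window.
    The first rejecting stage ends the evaluation, so the many negative
    windows are discarded cheaply and only likely positives ever reach
    the later, more expensive stages."""
    for stage in stages:
        if not stage(window):
            return False  # early rejection: the remaining stages never run
    return True           # accepted by every stage: a positive detection
```

Each `stage` is any callable returning True/False, such as the weighted-weak-classifier vote described earlier; cheap stages go first so most windows are rejected with minimal work.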
Fig. 3 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection according to the invention. As shown in Fig. 3, acquiring the human body sample image data and obtaining the technical indicators of the classifier from the human body sample images comprises:
11. Prepare the training data and make the human body image training samples. The training samples include negative samples and positive samples: a negative sample is an image that does not contain the target object; a positive sample is an image of the object to be detected and must not contain any negative sample. The negative samples should be as diverse as possible, must not be correlated with the positive samples, and are preferably drawn from everyday scenes;
12. Test the classifier with the images of the completed training samples and analyse the classifier's technical indicators from the test results. The indicators include: the detection rate and false-detection rate for upper-body images, and the detection rate and false-detection rate for face detection;
13. Adjust the technical parameters of the classifier. The adjustable parameters include: the number of positive samples used in training, the number of negative samples used for each stage of classifier training, the number of cascade stages, the Haar feature type, and the minimum detection rate required of each stage of the classifier.
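As a concrete illustration of these parameters, OpenCV's `opencv_traincascade` tool exposes equivalent settings; the file names and sample counts below are placeholders, not values taken from the patent:

```shell
# Illustrative opencv_traincascade invocation (OpenCV 3.x). File names and
# counts are placeholders. Flag meanings:
#   -numPos/-numNeg  positive/negative samples used per stage
#   -numStages       number of cascade stages
#   -featureType     HAAR features, as in the method above
#   -minHitRate      minimum detection rate required of every stage
#   -w/-h            training window size (40x80, matching step 205 below)
opencv_traincascade -data classifier_out \
    -vec upper_body_pos.vec -bg negatives.txt \
    -numPos 900 -numNeg 2000 -numStages 15 \
    -featureType HAAR -minHitRate 0.995 -w 40 -h 80
```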
Preparing the training data and making the human body image training samples comprises:
acquiring the training sample images with the RGBD camera 1, the training samples being divided into positive samples and negative samples, a negative sample being an image that does not contain the target object, a positive sample being an image of the object to be detected that must not contain any negative sample, and the negative samples being as diverse as possible, uncorrelated with the positive samples, and preferably drawn from everyday scenes;
computing the integral image of the sample images and constructing the feature model;
computing the feature values of the feature model to obtain a feature set;
determining thresholds and generating the corresponding weak classifier for each feature in the feature set, to obtain a set of weak classifiers;
training strong classifiers with the AdaBoost algorithm;
adding a further set of negative samples and computing the integral images of the augmented sample set;
completing the cascade classifier's collection of human body samples.
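The threshold-based weak classifiers and AdaBoost weighting in the steps above can be sketched as follows; the one-dimensional toy feature values stand in for Haar feature responses computed from the integral image:

```python
# Minimal AdaBoost sketch: decision-stump weak classifiers combined into one
# weighted strong classifier. Toy data; a real detector would use Haar
# feature responses as the feature values.
import math

def train_adaboost(X, y, n_rounds):
    """X: list of 1-D feature values, y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                  # uniform sample weights
    strong = []                        # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        # pick the stump (threshold, polarity) with the lowest weighted error
        best = None
        for thr in sorted(set(X)):
            for pol in (+1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # weak-classifier weight
        strong.append((alpha, thr, pol))
        # re-weight: boost the samples this stump got wrong
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return strong

def predict(strong, x):
    """Weighted vote of the weak classifiers."""
    vote = sum(a * (p if x >= t else -p) for a, t, p in strong)
    return 1 if vote >= 0 else -1

X = [1, 2, 3, 8, 9, 10]
y = [-1, -1, -1, 1, 1, 1]
clf = train_adaboost(X, y, n_rounds=3)
print([predict(clf, x) for x in X])  # separable toy data -> recovers the labels
```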
Fig. 4 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection according to the invention. As shown in Fig. 4, performing upper-body detection with the classifier, analysing the depth image data numerically, and confirming the detected human body image comprises:
201. First perform grayscale conversion on the RGB image, turning the colour image into a grayscale image with 256 levels (0 to 255);
202. Enhance the brightness of the grayscale image using histogram equalisation;
203. Select the image detection region and integrate over the selected area. With the integral-image method, the sum of pixel grey values inside a rectangle need not be recounted each time a rectangle feature is computed; only the integral values at the rectangle's corner points are needed to obtain the rectangle feature value, and the computation time does not change as the rectangle size changes;
204. Scan the entire image detection region to obtain the detection factor coefficients;
205. Multiply the first-stage detection result by the detection factor coefficient to obtain a weak classifier result; cascading the weak classifiers yields the strong classifier's final detection result. A 40×80 scanning window is set and the entire image is scanned to obtain the detection factor coefficients;
206. Scale the original image down by one half and scan the image under test with a Haar window of fixed size;
207. Compute the mean and variance within windows of different sizes while traversing the image under test; when the set conditions are met, the window is considered to contain an upper-body image;
208. Compute the Haar-like features of the candidate regions and pass these features into the cascaded AdaBoost classifier for further judgement;
209. If three windows simultaneously detect an upper-body image, the cascade classifier's output is taken as a true value and the upper-body coordinates are output; otherwise its output is taken as a non-true value;
210. Apply the distance operation Distance(x, y) = Pix(x, y)/1000 to all pixel values in the depth map of the upper-body region;
211. Perform a consistency analysis on the measured distance data; if consistency exists, it is confirmed that the target detected in the RGB image also exists in the depth image, otherwise it does not exist in the depth image.
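Step 203's constant-time rectangle sums can be illustrated with a minimal integral-image (summed-area table) sketch:

```python
# Integral-image sketch: once the summed-area table is built, the sum of
# pixel values inside any rectangle costs only four lookups, so the cost of
# a Haar rectangle feature does not grow with the rectangle's size.

def integral_image(img):
    """img: 2-D list of pixel values. Returns a (h+1) x (w+1) summed-area table."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45, the sum of all pixels
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```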
This method adds target depth information analysis on top of the Haar and AdaBoost algorithms. Haar features are statistical in nature, and classification by statistical methods has an inherent weakness: statistics are fuzzy. Adding target depth information effectively compensates for this deficiency, greatly improving the target detection accuracy while effectively reducing the false detection rate.
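A minimal sketch of the depth-consistency confirmation in step 211 is given below; the spread and range thresholds are illustrative assumptions, not values from the patent:

```python
# Sketch of depth-consistency confirmation: an RGB detection is accepted only
# if the depth pixels in the same region agree (low spread around a plausible
# distance). Threshold values are illustrative assumptions.

def confirm_with_depth(depth_region_mm, max_std_m=0.3, min_dist_m=0.3, max_dist_m=5.0):
    """depth_region_mm: flat list of raw depth values (millimetres) from the
    detected window. Returns (is_consistent, mean_distance_m)."""
    dists = [d / 1000.0 for d in depth_region_mm if d > 0]  # mm -> m, drop holes
    if not dists:
        return False, None
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    consistent = (var ** 0.5 <= max_std_m) and (min_dist_m <= mean <= max_dist_m)
    return consistent, mean

print(confirm_with_depth([1480, 1500, 1520, 1510]))  # tight cluster -> (True, ~1.5)
print(confirm_with_depth([300, 4800, 900, 2600]))    # scattered depths -> rejected
```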
Fig. 5 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection according to the invention. As shown in Fig. 5, using the classifier on the image in which the upper body has been detected, performing face detection incorporating the image depth information until a face is detected, and confirming the detected face image comprises:
31. Load the pre-trained face detection classifier data and set the scanning window size; the scanning window size is not limited and is determined by the requirements of the particular project;
32. Scan the image with windows at the set scale steps until a face is detected.
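Step 32's scanning at successive scales can be sketched as an image-pyramid loop; the window size, scale factor, and step used below are illustrative defaults, and `multiscale_scan` only enumerates the candidate windows that would be fed to the pre-loaded face cascade:

```python
# Multi-scale scanning sketch: rather than resizing the Haar window, the
# image is repeatedly shrunk by a scale factor and re-scanned with the same
# fixed-size window; hits are mapped back to original-image coordinates.

def multiscale_scan(width, height, window=24, scale=1.25, step=4):
    """Yield (x, y, size) candidate windows in original-image coordinates."""
    factor = 1.0
    while width / factor >= window and height / factor >= window:
        w, h = int(width / factor), int(height / factor)
        for y in range(0, h - window + 1, step):
            for x in range(0, w - window + 1, step):
                # map the window back to original-image coordinates
                yield (int(x * factor), int(y * factor), int(window * factor))
        factor *= scale

windows = list(multiscale_scan(96, 96))
print(windows[0])   # first candidate at the finest scale: (0, 0, 24)
print(len(windows)) # every pyramid level contributes candidate windows
```

In practice, OpenCV's `CascadeClassifier.detectMultiScale` performs this pyramid loop, the classification, and the grouping of overlapping hits internally.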
Fig. 6 is a flowchart of another embodiment of the robot automatic tracking method based on RGBD face detection according to the invention. As shown in Fig. 6, extracting the corresponding region from the depth image and performing the distance operation comprises:
41. Compute the distance of each pixel within the face region in the depth map: FaceDis(x, y) = image(x, y)/1000;
42. Sum the distances of all the pixels and take the mean to obtain the actual distance between the person and the robot.
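Steps 41 and 42 amount to a millimetre-to-metre conversion followed by averaging, for example:

```python
# Sketch of steps 41-42: convert every depth pixel inside the detected face
# rectangle from millimetres to metres, then average to get the
# person-to-robot distance used by the motion controller.

def face_distance_m(face_depth_mm):
    """face_depth_mm: 2-D list of raw depth values inside the face rectangle."""
    dists = [d / 1000.0 for row in face_depth_mm for d in row if d > 0]
    if not dists:
        return None  # no valid depth in the region
    return sum(dists) / len(dists)

region = [[1200, 1210],
          [1190, 1200]]
print(face_distance_m(region))  # mean of 1.2, 1.21, 1.19, 1.2 -> 1.2 metres
```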
A robot automatic tracking method and system based on RGBD face detection provided by the present invention have been described in detail above. Specific examples have been used herein to set forth the principle and embodiments of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core concept. Meanwhile, those of ordinary skill in the art may, following the idea of the invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the invention.
Finally, it should be noted that the foregoing are only preferred embodiments of the present invention and are not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or make equivalent replacements of some of their technical features. Any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the present invention shall be included within its scope of protection.
Claims (10)
1. A robot automatic tracking method based on RGBD face detection, characterised by comprising:
acquiring human body sample image data, and obtaining the technical indicators of the classifier from the human body sample images;
performing upper-body detection with a cascade classifier, analysing the depth image data numerically, and confirming the detected human body image;
using the cascade classifier, in the image in which the upper body has been detected, to perform face detection incorporating the image depth information until a face is detected, and confirming the detected face image, wherein the cascade classifier links a plurality of strong classifiers together for operation, each strong classifier being a weighted combination of a plurality of weak classifiers;
after the face region is detected, extracting, by a control module, the corresponding region from the depth image and performing a distance operation to obtain an accurate distance between the person and the robot.
2. The method according to claim 1, characterised in that acquiring the human body sample image data and obtaining the technical indicators of the classifier from the human body sample images comprises:
preparing training data and making human body image training samples, the training samples comprising negative samples and positive samples, a negative sample being an image that does not contain the target object, and a positive sample being an image of the object to be detected that does not contain any negative sample;
testing the classifier with the images of the completed training samples and analysing the classifier's technical indicators from the test results, the indicators comprising the detection rate and false-detection rate for upper-body images and the detection rate and false-detection rate for face detection;
adjusting the technical parameters of the classifier, the adjustable parameters comprising: the number of positive samples used in training, the number of negative samples used for each stage of classifier training, the number of cascade stages, the Haar feature type, and the minimum detection rate required of each stage of the classifier.
3. The method according to claim 1, characterised in that performing upper-body detection with the classifier, analysing the depth image data numerically, and confirming the detected human body image comprises:
first performing grayscale conversion on the RGB image, turning the colour image into a grayscale image with 256 levels (0 to 255);
enhancing the brightness of the grayscale image using histogram equalisation;
selecting the image detection region and integrating over the selected area, wherein with the integral-image method the sum of pixel grey values inside a rectangle need not be recounted each time a rectangle feature is computed, only the integral values at the rectangle's corner points being needed to obtain the rectangle feature value, the computation time not changing with the rectangle size;
scanning the entire image detection region to obtain the detection factor coefficients;
multiplying the first-stage detection result by the detection factor coefficient to obtain a weak classifier result, the cascaded weak classifiers yielding the strong classifier's detection result;
scaling the original image down by one half and scanning the image under test with a Haar window of fixed size;
computing the mean and variance within windows of different sizes while traversing the image under test, a window being considered to contain an upper-body image when the set conditions are met;
computing the Haar-like features of the candidate regions and passing these features into the cascaded AdaBoost classifier for further judgement;
taking the cascade classifier's output as a true value and outputting the upper-body coordinates if three windows simultaneously detect an upper-body image, and otherwise taking its output as a non-true value;
applying the distance operation Distance(x, y) = Pix(x, y)/1000 to all pixel values in the depth map of the upper-body region;
performing a consistency analysis on the measured distance data, and if consistency exists, confirming that the target detected in the RGB image also exists in the depth image, otherwise it does not exist in the depth image.
4. The method according to claim 1, characterised in that using the classifier on the image in which the upper body has been detected, performing face detection incorporating the image depth information until a face is detected, and confirming the detected face image comprises:
loading the pre-trained face detection classifier data and setting the scanning window size, the scanning window size being unlimited and determined by the requirements of the particular project;
scanning the image with windows at the set scale steps until a face is detected.
5. The method according to claim 1, characterised in that extracting the corresponding region from the depth image and performing the distance operation comprises:
computing the distance of each pixel within the face region in the depth map: faceDis(x, y) = image(x, y)/1000;
summing the distances of all the pixels and taking the mean to obtain the actual distance between the person and the robot.
6. A robot automatic tracking system based on RGBD face detection, characterised by comprising: an RGBD camera, an image algorithm processing platform and a robot motion control unit;
the RGBD camera is used to capture RGB images and depth images of the human body for upper-body and face image recognition; the image algorithm processing platform is used to receive the human body images captured by the RGBD camera, realise face recognition of the human body through algorithm processing, and compute the distance between the robot and the face; the robot motion control unit obtains from the image algorithm processing platform the longitudinal forward/backward distance of the face and, according to the distance change information, controls the robot to move following the movement of the face;
the RGBD camera comprises an RGB camera and a depth camera, the RGB camera being used to capture RGB images of the human body and the depth camera being used to capture depth images of the human body; after camera calibration, the image captured by the depth camera coincides with the image captured by the RGB camera;
the RGBD camera is an Intel advanced processing camera.
7. The system according to claim 6, characterised in that the RGBD camera further comprises a cascade classifier used to detect upper-body images in the images captured by the RGB camera; when an upper body is detected in an image, face detection is performed on the upper-body region of the image, again using a cascade classifier.
8. The system according to claim 7, characterised in that the cascade classifier comprises a plurality of strong classifiers linked together for operation, each strong classifier comprising a plurality of weak classifiers and being formed as their weighted combination.
9. The system according to claim 6, characterised in that the image algorithm processing platform performs its image processing by adding target depth information analysis to the Haar and AdaBoost algorithms, comprising:
first performing grayscale conversion on the RGB image, turning the colour image into a grayscale image with 256 levels (0 to 255);
enhancing the brightness of the grayscale image using histogram equalisation;
selecting the image detection region and integrating over the selected area, wherein with the integral-image method the sum of pixel grey values inside a rectangle need not be recounted each time a rectangle feature is computed, only the integral values at the rectangle's corner points being needed to obtain the rectangle feature value, the computation time not changing with the rectangle size;
scanning the entire image detection region to obtain the detection factor coefficients;
multiplying the first-stage detection result by the detection factor coefficient to obtain a weak classifier result, the cascaded weak classifiers yielding the strong classifier's detection result;
scaling the original image down by one half and scanning the image under test with a Haar window of fixed size;
computing the mean and variance within windows of different sizes while traversing the image under test, a window being considered to contain an upper-body image when the set conditions are met;
computing the Haar-like features of the candidate regions and passing these features into the cascaded AdaBoost classifier for further judgement;
taking the cascade classifier's output as a true value and outputting the upper-body coordinates if three windows simultaneously detect an upper-body image, and otherwise taking its output as a non-true value;
applying the distance operation Distance(x, y) = Pix(x, y)/1000 to all pixel values in the depth map of the upper-body region;
performing a consistency analysis on the measured distance data, and if consistency exists, confirming that the target detected in the RGB image also exists in the depth image, otherwise it does not exist in the depth image.
10. The system according to claim 6, characterised in that the robot motion control unit is used to extract the corresponding region from the depth image and perform a distance operation, the distance operation comprising:
computing the distance of each pixel within the face region in the depth map: faceDis(x, y) = image(x, y)/1000;
summing the distances of all the pixels and taking the mean to obtain the actual distance between the person and the robot;
controlling the robot's behaviour according to the computed actual distance between the person and the robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710028570.5A CN106886216B (en) | 2017-01-16 | 2017-01-16 | Robot automatic tracking method and system based on RGBD face detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106886216A true CN106886216A (en) | 2017-06-23 |
CN106886216B CN106886216B (en) | 2020-08-14 |
Family
ID=59176339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710028570.5A Active CN106886216B (en) | 2017-01-16 | 2017-01-16 | Robot automatic tracking method and system based on RGBD face detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106886216B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107977636A (en) * | 2017-12-11 | 2018-05-01 | 北京小米移动软件有限公司 | Method for detecting human face and device, terminal, storage medium |
CN108197507A (en) * | 2017-12-30 | 2018-06-22 | 刘智 | A kind of privacy real-time protection method and system |
CN108537843A (en) * | 2018-03-12 | 2018-09-14 | 北京华凯汇信息科技有限公司 | The method and device of depth of field distance is obtained according to depth image |
CN108527366A (en) * | 2018-03-22 | 2018-09-14 | 北京理工华汇智能科技有限公司 | Robot follower method and device based on depth of field distance |
CN108647555A (en) * | 2017-11-21 | 2018-10-12 | 江苏鸿山鸿锦物联技术有限公司 | Whether there is or not the detection methods of personnel in a kind of car based on video image |
CN108647662A (en) * | 2018-05-17 | 2018-10-12 | 四川斐讯信息技术有限公司 | A kind of method and system of automatic detection face |
CN108734083A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | Control method, device, equipment and the storage medium of smart machine |
CN109299595A (en) * | 2018-09-08 | 2019-02-01 | 太若科技(北京)有限公司 | Method, apparatus and AR equipment based on hand skin texture information unlock AR equipment |
CN109895140A (en) * | 2017-12-10 | 2019-06-18 | 湘潭宏远电子科技有限公司 | A kind of robotically-driven trigger device |
CN110276271A (en) * | 2019-05-30 | 2019-09-24 | 福建工程学院 | Merge the non-contact heart rate estimation technique of IPPG and depth information anti-noise jamming |
CN111259684A (en) * | 2018-11-30 | 2020-06-09 | Tcl集团股份有限公司 | Method and device for determining distance from camera to human face |
CN112115913A (en) * | 2020-09-28 | 2020-12-22 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and equipment and storage medium |
CN117519488A (en) * | 2024-01-05 | 2024-02-06 | 四川中电启明星信息技术有限公司 | Dialogue method and dialogue system of dialogue robot |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101477625A (en) * | 2009-01-07 | 2009-07-08 | 北京中星微电子有限公司 | Upper half of human body detection method and system |
CN102214291A (en) * | 2010-04-12 | 2011-10-12 | 云南清眸科技有限公司 | Method for quickly and accurately detecting and tracking human face based on video sequence |
US20120183238A1 (en) * | 2010-07-19 | 2012-07-19 | Carnegie Mellon University | Rapid 3D Face Reconstruction From a 2D Image and Methods Using Such Rapid 3D Face Reconstruction |
CN103093212A (en) * | 2013-01-28 | 2013-05-08 | 北京信息科技大学 | Method and device for clipping facial images based on face detection and face tracking |
CN103995747A (en) * | 2014-05-12 | 2014-08-20 | 上海大学 | Distributed pedestrian detection system and method based on mobile robot platform |
CN105187721A (en) * | 2015-08-31 | 2015-12-23 | 广州市幸福网络技术有限公司 | An identification camera and method for rapidly extracting portrait features |
CN105182983A (en) * | 2015-10-22 | 2015-12-23 | 深圳创想未来机器人有限公司 | Face real-time tracking method and face real-time tracking system based on mobile robot |
US20160191995A1 (en) * | 2011-09-30 | 2016-06-30 | Affectiva, Inc. | Image analysis for attendance query evaluation |
CN106250850A (en) * | 2016-07-29 | 2016-12-21 | 深圳市优必选科技有限公司 | Face datection tracking and device, robot head method for controlling rotation and system |
Non-Patent Citations (1)
Title |
---|
吴会霞等: "基于RGB-D的人脸表情识别研究", 《现代计算机》 * |
Also Published As
Publication number | Publication date |
---|---|
CN106886216B (en) | 2020-08-14 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |