CN109492521A - Face positioning method and robot - Google Patents
- Publication number
- CN109492521A (application CN201811070041.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- coordinate
- point
- image
- dimensional coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face positioning method and a robot. The method acquires, at the same moment, an image of a designated area and all three-dimensional coordinates within that area, determines the plane coordinates of those three-dimensional coordinates in the image, establishes a correspondence between each three-dimensional coordinate and each plane coordinate, obtains the region occupied by each face in the image, and determines the position of each face in the designated area from the correspondence and the face regions. With this technical solution, the robot can accurately determine the relative position of a face with respect to itself (relative distance, relative angle, and so on), so that it can accurately locate human bodies in all directions around itself, improving the accuracy with which the robot positions a human body.
Description
Technical field
The present invention relates to the field of positioning, and in particular to a face positioning method. The present invention also relates to a robot.
Background technique
A robot is an automated machine that performs work. It can accept human commands, run pre-programmed routines, and act according to principles formulated with artificial-intelligence technology. Its task is to assist or replace human work, for example in manufacturing, construction, or hazardous operations.
As robot application scenarios keep growing, occasions such as customs, airports, banks, and teleconferences all require tracking a specific face target. At present, the mainstream scheme by which a robot tracks the face closest to itself is to capture a color image, identify the position of the face in that image, compute the difference between that position and the image center along the x coordinate, and then issue a command to the robot's motion system so that the robot turns toward the target person.
However, in the course of implementing the present invention, the inventor found that the position of a face in a flat image differs from its position in real three-dimensional space: two different positions in three-dimensional space may coincide at the same position in a flat image. Therefore, a method that relies solely on identifying the face position in a color image cannot accurately determine the relative position of the face with respect to the robot (in particular their relative distance and relative angle), so the robot cannot accurately locate human bodies in all directions around itself.
Summary of the invention
The present invention provides a face positioning method to solve the problem of how a robot can accurately locate human bodies in all directions around itself. The method comprises:
acquiring, at the same moment, an image of a designated area and all three-dimensional coordinates within the designated area;
determining the plane coordinates of the three-dimensional coordinates in the image, and establishing a correspondence between each three-dimensional coordinate and each plane coordinate;
obtaining the region occupied by each face contained in the image;
determining the position of the face in the designated area according to the correspondence and the face region.
Preferably, determining the plane coordinates of the three-dimensional coordinates in the image comprises:
performing plane-depth processing on the three-dimensional coordinates according to the acquisition angle of the image, generating a plan view that contains the three-dimensional coordinates;
scaling the plan view in proportion to the size of the image;
mapping each three-dimensional coordinate in the scaled plan view into the image, and determining the plane coordinates according to the mapping result.
Preferably, determining the position of the face in the designated area according to the correspondence and the face region comprises:
generating a plurality of feature-point coordinates from the vertex coordinates of the face region, the feature points being evenly distributed over the region;
obtaining the three-dimensional coordinate corresponding to each feature point according to the correspondence, and obtaining the distance between each such three-dimensional coordinate and the device that acquired the image;
selecting a reference position point of the face from the feature points according to those distances;
determining the three-dimensional coordinate corresponding to the reference position point according to the correspondence, and taking that three-dimensional coordinate as the position of the face.
Preferably, selecting the reference position point of the face from the feature points according to the distances comprises:
determining the average of the distances of all the feature points, and removing any feature point whose distance exceeds the average by a specified threshold;
if fewer than a specified number of feature points remain, selecting the center point of the face region as the reference position point;
if no fewer than the specified number remain, selecting from the remaining feature points the one whose distance is closest to the average as the reference position point.
Preferably, after determining the position of the face in the designated area according to the correspondence and the face region, the method further comprises:
determining, from the position information, the face that the acquisition device needs to track;
determining the three-dimensional coordinate of the face to be tracked according to the correspondence;
determining a relative angle from the three-dimensional coordinate of the face to be tracked and the coordinate of the acquisition device;
instructing the acquisition device to move according to the relative angle.
Preferably, the three-dimensional coordinates are point cloud data;
the point cloud data are generated by scanning the designated area with a lidar;
or, the point cloud data are generated from the gray-scale information of a black-and-white image (a depth image) corresponding to the image.
Correspondingly, the invention also provides a robot, comprising:
an acquisition module, which acquires, at the same moment, an image of a designated area and all three-dimensional coordinates within the designated area;
a correspondence module, which determines the plane coordinates of the three-dimensional coordinates in the image and establishes a correspondence between each three-dimensional coordinate and each plane coordinate;
an obtaining module, which obtains the region occupied by each face contained in the image;
a determining module, which determines the position of the face in the designated area according to the correspondence and the face region.
Correspondingly, the invention also provides a computer-readable storage medium storing instructions which, when run on a terminal device, cause the terminal device to execute the face positioning method described above.
Correspondingly, the invention also provides a computer program product which, when run on a terminal device, causes the terminal device to execute the face positioning method described above.
By applying the technical solution of the present application, the scheme acquires, at the same moment, the image of a designated area and all three-dimensional coordinates within that area, determines the plane coordinates of the three-dimensional coordinates in the image, establishes a correspondence between each three-dimensional coordinate and each plane coordinate, obtains the region occupied by each face in the image, and determines the position of the face in the designated area according to the correspondence and the face region. With this solution, the robot can accurately determine the relative position of a face with respect to itself (relative distance, relative angle, and so on), so that it can accurately locate human bodies in all directions around itself, improving the accuracy with which the robot positions a human body.
Detailed description of the invention
In order to explain the technical solution of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a kind of flow diagram for Face detection method that the application proposes;
Fig. 2 is a kind of structural schematic diagram for robot that the application proposes.
Specific embodiment
As stated in the background, the mainstream prior-art scheme by which a robot tracks the face closest to itself is to capture a color image, identify the position of the face in that image, compute the difference between that position and the image center along the x coordinate, and then issue a command to the robot's motion system so that the robot turns toward the target person. However, because the position of a face in a flat image differs from its position in real three-dimensional space, two different positions in three-dimensional space may coincide at the same position in a flat image. Therefore, a method that relies solely on identifying the face position in a color image cannot accurately determine the relative position of the face with respect to the robot.
In view of the above problem, an embodiment of the present invention provides a face positioning method that enables a robot to accurately locate human bodies in all directions around itself. Besides tracking the nearest face well, the invention can also be applied to precisely following other things (for example people or articles). The technical solution of the present invention is described clearly and completely below with reference to the drawings.
In the embodiments of the present invention, a robot refers to an automated machine that performs work. It can accept human commands, run pre-programmed routines, and act according to principles formulated with artificial-intelligence technology. Its task is to assist or replace human work, for example in manufacturing, construction, or hazardous operations. Variations in its functional structure or operating environment do not affect the protection scope of the present invention.
As shown in Fig. 1, the face positioning method specifically comprises the following steps:
S101: acquire, at the same moment, an image of a designated area and all three-dimensional coordinates within the designated area.
This step acquires, at the same moment, an image of the area and the three-dimensional coordinates within the area. The image and the three-dimensional coordinates can be obtained in many ways, for example by photography, video capture, radar scanning, or ultrasonic scanning; as long as the image of the area and the three-dimensional coordinates within it are acquired at the same moment, the particular acquisition means does not affect the protection scope of the present invention. Likewise, the nature of the image (for example a color image or a black-and-white image) and the representation chosen for the three-dimensional coordinates do not affect the protection scope of the present invention.
Preferably, the three-dimensional coordinates are point cloud data; the point cloud data are generated by scanning the designated area with a lidar, or are generated from the gray-scale information of a black-and-white image (a depth image) corresponding to the image.
In a specific application scenario, the robot obtains one frame of color image through its camera, which later serves as the material for analyzing the face position. At the same time, the robot obtains, through its lidar, the point cloud data captured at the same moment as the camera's color image.
In another specific application scenario, this step can use a depth image instead of the color image and obtain the point cloud of the depth image at the same time. A depth image is an image containing only shades of gray between black and white: the darkness of each pixel represents the distance of that point from the camera. A depth-image point cloud is the data obtained by mathematically converting the depth image into information similar to a lidar point cloud, and it can serve the same role as lidar point cloud information.
S102: determine the plane coordinates of the three-dimensional coordinates in the image, and establish a correspondence between each three-dimensional coordinate and each plane coordinate.
This step determines the plane coordinates of the three-dimensional coordinates in the image and establishes the correspondence. Different orientations, angles, and algorithms yield different coordinates for the same point, but any algorithm that produces a unique point coordinate and the corresponding relationship falls within the protection scope of the present application.
Preferably, in order to better determine the plane coordinates of the three-dimensional coordinates in the image, the following steps are preferred:
(1) Perform plane-depth processing on the three-dimensional coordinates according to the acquisition angle of the image, generating a plan view that contains the three-dimensional coordinates.
(2) Scale the plan view in proportion to the size of the image.
(3) Map each three-dimensional coordinate in the scaled plan view into the image, and determine the plane coordinates according to the mapping result.
In a specific application scenario, the point cloud data are flattened in proportion to the camera data, taking as reference the vertical plane of the robot's facing direction (the direction the robot's front faces); the flattened data are hereafter called the planar radar data map. A mapping function Map() is obtained between each point in the radar data and the corresponding point in the planar data. The flat image derived from the radar data is then compared with the camera image, and the planar radar data are compared back against the original three-dimensional data. In this way the three-dimensional coordinates of all points in the acquired color image and of the surrounding environment are established. Here, proportional plane-depth flattening means converting the three-dimensional data scanned by the radar into a flat image the same size as the region captured in the color image, while recording the correspondence between each point in the flat image and its point in the three-dimensional data before conversion.
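The flattening step and the mapping function Map() can be sketched, under the assumption of a pinhole camera model, as projecting each cloud point onto the image plane and recording a pixel-to-point lookup table. The function name, the intrinsics, and the keep-the-nearest rule for pixel collisions are assumptions, not details from this document:

```python
def project_to_image(points, fx, fy, cx, cy, width, height):
    """Project camera-frame 3-D points onto the image plane and record the
    pixel -> 3-D point correspondence (the Map() of the text, sketched)."""
    mapping = {}
    for (x, y, z) in points:
        if z <= 0:
            continue  # behind the camera, not visible
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            # keep the nearest point when several project to the same pixel
            if (u, v) not in mapping or z < mapping[(u, v)][2]:
                mapping[(u, v)] = (x, y, z)
    return mapping

corr = project_to_image([(0.0, 0.0, 2.0), (0.1, 0.0, 2.0)],
                        fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                        width=640, height=480)
```

Looking up a face pixel in `corr` then yields its three-dimensional coordinate directly, which is the correspondence the later steps rely on.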
S103: obtain the region occupied by each face contained in the image.
This step obtains the relative region of each face in the image. The image may be a color image, a black-and-white image, and so on; the region obtained may be a bounding box, a contour description, and so on. All fall within the protection scope of the present application.
S104: determine the position of the face in the designated area according to the correspondence and the face region.
This step determines the position of the face in the designated area; the particular methods and algorithms used to obtain the correspondence and the face region of the target image do not affect the protection scope of the present invention.
Preferably, in order to better determine the position of the face in the designated area according to the correspondence and the face region, the following steps are preferred:
(1) Generate a plurality of feature-point coordinates from the vertex coordinates of the face region, the feature points being evenly distributed over the region.
(2) Obtain the three-dimensional coordinate corresponding to each feature point according to the correspondence, and obtain the distance between each such three-dimensional coordinate and the device that acquired the image.
(3) Select a reference position point of the face from the feature points according to those distances.
(4) Determine the three-dimensional coordinate corresponding to the reference position point according to the correspondence, and take that three-dimensional coordinate as the position of the face.
Preferably, in order to better select the reference position point of the face from the feature points according to the distances, the following steps are preferred:
(1) Determine the average of the distances of all the feature points, and remove any feature point whose distance exceeds the average by a specified threshold.
(2) If fewer than a specified number of feature points remain, select the center point of the face region as the reference position point.
(3) If no fewer than the specified number remain, select from the remaining feature points the one whose distance is closest to the average as the reference position point.
Preferably, after determining the position of the face in the designated area according to the correspondence and the face region, the following steps are preferred:
(1) Determine, from the position information, the face that the acquisition device needs to track.
(2) Determine the three-dimensional coordinate of the face to be tracked according to the correspondence.
(3) Determine the relative angle from the three-dimensional coordinate of the face to be tracked and the coordinate of the acquisition device.
(4) Instruct the acquisition device to move according to the relative angle.
In a specific application scenario, n points are chosen so as to be evenly distributed over the face region, and the distance of each of these n points from the robot is obtained from the planar radar data map. The n points are then processed iteratively: points whose distance to the robot exceeds the average distance of all chosen points by 50% or more are removed, and the iteration is repeated on the remaining points. This extracts the points closest to the actual situation (the average). If at least half of the points remain close to the true situation, the one among them closest to the average is chosen; if fewer than half remain, the center point of the face image is chosen directly. This point (the point closest to the true situation, which is also the point closest to the average) then serves as the reference point for the distance to the face.
The distances of all face reference points obtained in the above process are compared, and the nearest face is selected.
In a specific application scenario, the coordinate of the face center point in the lidar point cloud space is calculated according to the correspondence between the lidar planar map and the lidar point cloud information. The coordinate data are then processed mathematically to obtain the coordinate of the corresponding point in the common plane of the lidar and the camera (for example (x1, y1, z1)).
In a specific application scenario, the inverse trigonometric formula θ = arctan(x1/y1) is used to calculate the angle θ between the robot's facing direction and the point (x1, y1, z1), i.e. the current relative angle between the selected face and the robot's facing direction. At this point the precise relative angle between the person and the robot has been obtained.
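The angle computation can be sketched as follows. The sketch uses atan2 rather than the arctan(x1/y1) of the text so that y1 = 0 does not cause a division by zero (for y1 > 0 the two agree); the axis convention (robot facing along +y, positive angle to the right) is an assumption:

```python
import math

def relative_angle(x1, y1):
    """Angle between the robot's facing direction (assumed +y axis) and the
    face at planar coordinates (x1, y1), in degrees; positive to the right."""
    return math.degrees(math.atan2(x1, y1))

theta = relative_angle(1.0, 1.0)  # a face one unit right, one unit ahead
```

The resulting θ is exactly the rotation command later issued to the motion system.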
In a specific application scenario, a motion command to rotate by the specified angle is issued to the robot's motion system.
To further explain the technical idea of the present invention, the technical solution is now illustrated with a specific application scenario, in which the processing flow is as follows:
(1) Obtain one frame of color image from the camera, to serve as the material for the later analysis of the face position.
(2) Obtain from the lidar the point cloud data captured at the same moment as the camera's color image, flatten it in proportion to the camera data taking as reference the vertical plane of the robot's facing direction (the "robot front direction"; the flattened data are hereafter called the planar radar data map), and obtain the data mapping function Map(). This establishes the three-dimensional coordinates of all points in the color image obtained in step (1) and of the surrounding environment.
Specifically, proportional plane-depth flattening converts the three-dimensional data scanned by the radar into a flat image the same size as the region captured in the color image, and records the correspondence between each point in the flat image and its point in the three-dimensional data before conversion.
(3) Identify all face information in the color image.
(4) For each face, execute the following procedure:
(a) Suppose the four vertices of the face region are (x1, y1), (x2, y1), (x1, y2), (x2, y2), where x2 > x1 and y2 > y1. Take from the planar radar data map the coordinate information of the following nine points:
(x1 + (x2-x1)/4, y1 + (y2-y1)/4)
(x1 + (x2-x1)/2, y1 + (y2-y1)/4)
(x1 + (x2-x1)*3/4, y1 + (y2-y1)/4)
(x1 + (x2-x1)/4, y1 + (y2-y1)/2)
(x1 + (x2-x1)/2, y1 + (y2-y1)/2)
(x1 + (x2-x1)*3/4, y1 + (y2-y1)/2)
(x1 + (x2-x1)/4, y1 + (y2-y1)*3/4)
(x1 + (x2-x1)/2, y1 + (y2-y1)*3/4)
(x1 + (x2-x1)*3/4, y1 + (y2-y1)*3/4)
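The nine sample points form a 3x3 grid at the 1/4, 1/2, and 3/4 fractions of the face bounding box; a minimal sketch (function name hypothetical):

```python
def grid_feature_points(x1, y1, x2, y2):
    """The nine points listed above: a 3x3 grid at the 1/4, 1/2 and 3/4
    fractions of the face bounding box (x1, y1)-(x2, y2)."""
    fracs = (0.25, 0.5, 0.75)
    return [(x1 + fx * (x2 - x1), y1 + fy * (y2 - y1))
            for fy in fracs   # rows: top, middle, bottom of the box
            for fx in fracs]  # columns: left, center, right

pts = grid_feature_points(0, 0, 4, 4)
```

Sampling interior fractions rather than the box edges keeps every point on the face itself rather than on background pixels at the box boundary.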
(b) Obtain from the planar radar data map the distances of the above nine points from the robot.
(c) Using iterative computation, remove points whose distance is 50% or more above the average.
(d) After this algorithm, if more than 5 points finally remain, take the one among them closest to the average as the standard point and continue with the later steps; if fewer than 5 points remain, discard these points, take (x1 + (x2-x1)/2, y1 + (y2-y1)/2) as the standard point, and continue with the later steps.
Specifically, the iterative computation is: take the average of all values, remove the value that differs most from the average, and repeat the above steps on the remaining points.
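Steps (c) and (d) can be sketched as follows. The 50% cutoff and the threshold of 5 remaining points follow the text; the function name, the >= comparison, and the None fallback (standing in for "use the box centre point") are assumptions:

```python
def pick_standard_distance(distances, min_points=5):
    """Iteratively drop any sample more than 50% above the current mean;
    if at least `min_points` samples survive, return the one closest to
    the mean, otherwise return None (caller falls back to the box centre)."""
    pts = list(distances)
    while pts:
        mean = sum(pts) / len(pts)
        survivors = [d for d in pts if d <= mean * 1.5]
        if len(survivors) == len(pts):
            break  # no more outliers to remove
        pts = survivors
    if len(pts) < min_points:
        return None
    mean = sum(pts) / len(pts)
    return min(pts, key=lambda d: abs(d - mean))

# One background point (9.0 m) behind the face (around 2 m) is rejected.
std = pick_standard_distance([2.0, 2.1, 1.9, 2.0, 2.2, 9.0])
```

The point kept is the one whose depth best represents the face, which is why a background pixel leaking into the bounding box does not skew the distance estimate.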
(5) Compare all the standard-point data obtained in procedure (4) and select the nearest face. At this point the position of the nearest face has been obtained.
(6) Calculate the coordinate of the face center point in the lidar point cloud space according to the correspondence between the lidar planar map and the lidar point cloud information. Then process the coordinate data mathematically (z0 replaces z1) to obtain the coordinate (x1, y1, z1) of the corresponding point in the common plane of the lidar and the camera.
(7) Use the inverse trigonometric formula θ = arctan(x1/y1) to calculate the angle θ between the robot's facing direction and the point (x1, y1, z1), i.e. the current relative angle between the selected face and the robot's facing direction. At this point the precise relative angle between the person and the robot has been obtained.
(8) Issue a motion command to the robot's motion system to rotate by the specified angle.
To achieve the above technical purpose, the present application also proposes a robot. As shown in Fig. 2, the robot comprises:
an acquisition module 210, which acquires, at the same moment, an image of a designated area and all three-dimensional coordinates within the designated area;
a correspondence module 220, which determines the plane coordinates of the three-dimensional coordinates in the image and establishes a correspondence between each three-dimensional coordinate and each plane coordinate;
an obtaining module 230, which obtains the region occupied by each face contained in the image;
a determining module 240, which determines the position of the face in the designated area according to the correspondence and the face region.
In a specific application scenario, the correspondence module 220 determines the plane coordinates of the three-dimensional coordinates in the image by:
performing plane-depth processing on the three-dimensional coordinates according to the acquisition angle of the image, generating a plan view that contains the three-dimensional coordinates;
scaling the plan view in proportion to the size of the image;
mapping each three-dimensional coordinate in the scaled plan view into the image, and determining the plane coordinates according to the mapping result.
In a specific application scenario, the determining module 240 determines the position of the face in the designated area according to the correspondence and the face region by:
generating a plurality of feature-point coordinates from the vertex coordinates of the face region, the feature points being evenly distributed over the region;
obtaining the three-dimensional coordinate corresponding to each feature point according to the correspondence, and obtaining the distance between each such three-dimensional coordinate and the device that acquired the image;
selecting a reference position point of the face from the feature points according to those distances;
determining the three-dimensional coordinate corresponding to the reference position point according to the correspondence, and taking that three-dimensional coordinate as the position of the face.
In a specific application scenario, the determining module 240 filters out the base position point of the face from the feature points according to the distances, specifically by:
determining the average of the distances of all feature point coordinates, and removing any feature point coordinate whose distance exceeds the average by a specified threshold;
if fewer than a specified number of feature point coordinates remain, selecting the centre point of the region location as the base position point;
if no fewer than the specified number of feature point coordinates remain, selecting from the remaining feature point coordinates the one whose distance is closest to the average as the base position point.
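A minimal sketch of the base-position-point selection just described, assuming hypothetical values for the specified threshold and the specified quantity (the application leaves both open):

```python
def base_point(points, distances, center, threshold=0.5, min_count=3):
    """Select the face's base position point: drop feature points whose
    distance exceeds the mean by more than `threshold`; if too few remain,
    fall back to the region's centre point, otherwise keep the point whose
    distance is closest to the mean. `threshold` and `min_count` are
    assumed values, not specified in the application."""
    mean = sum(distances) / len(distances)
    kept = [(p, d) for p, d in zip(points, distances) if d - mean <= threshold]
    if len(kept) < min_count:
        return center                       # too few survivors: use centre
    return min(kept, key=lambda pd: abs(pd[1] - mean))[0]

sel = base_point(["a", "b", "c", "d"], [1.0, 1.1, 0.9, 5.0], center="c0")
```

The fallback to the centre point keeps the method robust when a face sits at a depth discontinuity and most feature points are rejected as outliers.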
In a specific application scenario, after the determining module 240 determines the location information of the face in the specified region according to the correspondence and the region location, the method further includes:
obtaining, according to the location information, the face to be tracked that the acquisition device needs to track;
determining the three-dimensional coordinate of the face to be tracked according to the correspondence;
determining a relative angle according to the three-dimensional coordinate of the face to be tracked and the coordinate of the acquisition device;
instructing the acquisition device to move according to the relative angle.
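The relative angle between the tracked face's three-dimensional coordinate and the acquisition device can be computed with `atan2`; this is a sketch assuming a camera frame with Z forward, X right, and Y up, an axis convention the application does not fix:

```python
import math

def relative_angles(face_xyz, cam_xyz):
    """Horizontal (yaw) and vertical (pitch) angles, in degrees, from the
    acquisition device to the tracked face. Assumes Z forward, X right,
    Y up in the device frame (an assumed convention)."""
    dx = face_xyz[0] - cam_xyz[0]
    dy = face_xyz[1] - cam_xyz[1]
    dz = face_xyz[2] - cam_xyz[2]
    yaw = math.degrees(math.atan2(dx, dz))                     # turn left/right
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # tilt up/down
    return yaw, pitch

yaw, pitch = relative_angles((1.0, 0.0, 1.0), (0.0, 0.0, 0.0))
```

A robot would then rotate its head or base by `yaw` and `pitch` so the face stays centred in the acquisition device's view.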
In specific application scenarios, the three-dimensional coordinates referred to in the modules are specifically point cloud data;
the point cloud data is generated by a laser radar scanning the specified region;
or, the point cloud data is generated according to colour information in a black-and-white image corresponding to the image.
By applying the technical solution of the present application, the scheme acquires, at the same moment, the image of the specified region and all three-dimensional coordinates in the specified region, determines the plane coordinates of the three-dimensional coordinates in the image, establishes the correspondence between each three-dimensional coordinate and each plane coordinate, obtains the region location in the image of each face contained in the image, and determines the location information of the face in the specified region according to the correspondence and the region location. With this technical solution, a robot can accurately locate the relative position between a face and the robot (relative distance, relative angle, etc.), so that the robot can accurately locate human bodies in all directions around itself, improving the accuracy of the robot's positioning of human bodies.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash disk, a mobile hard disk, etc.) and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute the methods described in each implementation scenario of the present invention.
It will be appreciated by those skilled in the art that the accompanying drawings are only schematic diagrams of preferred implementation scenarios, and that the modules or processes in the drawings are not necessarily required for implementing the present invention.
It will be appreciated by those skilled in the art that the modules in the apparatus of an implementation scenario can be distributed in that apparatus as described, or can be correspondingly changed and located in one or more apparatuses different from this implementation scenario. The modules of the above implementation scenarios can be merged into one module, or further split into multiple sub-modules.
The foregoing serial numbers of the present invention are for description only and do not represent the relative merits of the implementation scenarios.
Disclosed above are only several specific implementation scenarios of the present invention; however, the present invention is not limited thereto, and any variation conceivable by those skilled in the art shall fall within the protection scope of the present invention.
Claims (9)
1. A face positioning method, characterized by comprising:
acquiring, at the same moment, an image of a specified region and all three-dimensional coordinates in the specified region;
determining the plane coordinates of the three-dimensional coordinates in the image, and establishing a correspondence between each three-dimensional coordinate and each plane coordinate;
obtaining the region location in the image of each face contained in the image;
determining the location information of the face in the specified region according to the correspondence and the region location.
2. The method according to claim 1, characterized in that determining the plane coordinates of the three-dimensional coordinates in the image specifically comprises:
performing plane-depth processing on the three-dimensional coordinates according to the acquisition angle of the image, to generate a plan view containing the three-dimensional coordinates;
scaling the plan view in proportion to the size of the image;
mapping each three-dimensional coordinate in the proportionally scaled plan view into the image, and determining the plane coordinates according to the mapping result.
3. The method according to claim 1, characterized in that determining the location information of the face in the specified region according to the correspondence and the region location specifically comprises:
generating multiple feature point coordinates according to the vertex coordinates of the region location, the feature point coordinates being evenly distributed within the region location;
obtaining the three-dimensional coordinate corresponding to each feature point according to the correspondence, and obtaining the distance between each feature point's corresponding three-dimensional coordinate and the acquisition device of the image;
filtering out the base position point of the face from the feature points according to the distances;
determining the three-dimensional coordinate corresponding to the base position point according to the correspondence, and taking the base position point's corresponding three-dimensional coordinate as the location information of the face.
4. The method according to claim 3, characterized in that filtering out the base position point of the face from the feature points according to the distances specifically comprises:
determining the average of the distances of all feature point coordinates, and removing any feature point coordinate whose distance exceeds the average by a specified threshold;
if fewer than a specified number of feature point coordinates remain, selecting the centre point of the region location as the base position point;
if no fewer than the specified number of feature point coordinates remain, selecting from the remaining feature point coordinates the one whose distance is closest to the average as the base position point.
5. The method according to claim 3, characterized in that, after determining the location information of the face in the specified region according to the correspondence and the region location, the method further comprises:
obtaining, according to the location information, the face to be tracked that the acquisition device needs to track;
determining the three-dimensional coordinate of the face to be tracked according to the correspondence;
determining a relative angle according to the three-dimensional coordinate of the face to be tracked and the coordinate of the acquisition device;
instructing the acquisition device to move according to the relative angle.
6. The method according to any one of claims 1 to 5, characterized in that the three-dimensional coordinates are specifically point cloud data;
the point cloud data is generated by a laser radar scanning the specified region;
or, the point cloud data is generated according to colour information in a black-and-white image corresponding to the image.
7. A robot, characterized in that the robot comprises:
an acquisition module, configured to acquire, at the same moment, an image of a specified region and all three-dimensional coordinates in the specified region;
a correspondence module, configured to determine the plane coordinates of the three-dimensional coordinates in the image, and establish a correspondence between each three-dimensional coordinate and each plane coordinate;
an obtaining module, configured to obtain the region location in the image of each face contained in the image;
a determining module, configured to determine the location information of the face in the specified region according to the correspondence and the region location.
8. A computer-readable storage medium, characterized in that instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is caused to execute the face positioning method according to any one of claims 1 to 6.
9. A computer program product, characterized in that, when the computer program product is run on a terminal device, the terminal device is caused to execute the face positioning method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811070041.2A CN109492521B (en) | 2018-09-13 | 2018-09-13 | Face positioning method and robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109492521A true CN109492521A (en) | 2019-03-19 |
CN109492521B CN109492521B (en) | 2022-05-13 |
Family
ID=65690550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811070041.2A Active CN109492521B (en) | 2018-09-13 | 2018-09-13 | Face positioning method and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492521B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008087140A (en) * | 2006-10-05 | 2008-04-17 | Toyota Motor Corp | Speech recognition robot and control method of speech recognition robot |
CN106709954A (en) * | 2016-12-27 | 2017-05-24 | 上海唱风信息科技有限公司 | Method for masking human face in projection region |
CN106826815A (en) * | 2016-12-21 | 2017-06-13 | 江苏物联网研究发展中心 | Target object method of the identification with positioning based on coloured image and depth image |
CN106991688A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Human body tracing method, human body tracking device and electronic installation |
US20170248971A1 (en) * | 2014-11-12 | 2017-08-31 | SZ DJI Technology Co., Ltd. | Method for detecting target object, detection apparatus and robot |
CN107179768A (en) * | 2017-05-15 | 2017-09-19 | 上海木爷机器人技术有限公司 | A kind of obstacle recognition method and device |
CN107845061A (en) * | 2017-11-10 | 2018-03-27 | 暴风集团股份有限公司 | Image processing method, device and terminal |
CN108170166A (en) * | 2017-11-20 | 2018-06-15 | 北京理工华汇智能科技有限公司 | The follow-up control method and its intelligent apparatus of robot |
Non-Patent Citations (1)
Title |
---|
FAISAL R. AL-OSAIMI: "A Novel Multi-Purpose Matching Representation of Local 3D Surfaces: A Rotationally Invariant, Efficient, and Highly Discriminative Approach With an Adjustable Sensitivity", IEEE Transactions on Image Processing * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179490A (en) * | 2019-12-13 | 2020-05-19 | 新石器慧通(北京)科技有限公司 | Movable carrier for user verification, control system and unmanned vehicle |
CN111179490B (en) * | 2019-12-13 | 2022-01-11 | 新石器慧通(北京)科技有限公司 | Movable carrier for user verification, control system and unmanned vehicle |
CN112887594A (en) * | 2021-01-13 | 2021-06-01 | 随锐科技集团股份有限公司 | Method and system for improving video conference security |
CN112887594B (en) * | 2021-01-13 | 2022-07-15 | 随锐科技集团股份有限公司 | Method and system for improving video conference security |
CN112511757A (en) * | 2021-02-05 | 2021-03-16 | 北京电信易通信息技术股份有限公司 | Video conference implementation method and system based on mobile robot |
Also Published As
Publication number | Publication date |
---|---|
CN109492521B (en) | 2022-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126304B (en) | Augmented reality navigation method based on indoor natural scene image deep learning | |
US9317762B2 (en) | Face recognition using depth based tracking | |
US10824853B2 (en) | Human detection system for construction machine | |
CN109492521A (en) | Face positioning method and robot | |
US10176564B1 (en) | Collaborative disparity decomposition | |
US20150243031A1 (en) | Method and device for determining at least one object feature of an object comprised in an image | |
CN107093171A (en) | A kind of image processing method and device, system | |
Litomisky et al. | Removing moving objects from point cloud scenes | |
Tang et al. | Camera self-calibration from tracking of moving persons | |
CN108803591A (en) | A kind of ground drawing generating method and robot | |
CN107958466B (en) | Slam algorithm optimization model-based tracking method | |
CN113689503B (en) | Target object posture detection method, device, equipment and storage medium | |
US20200145639A1 (en) | Portable 3d scanning systems and scanning methods | |
Chiang et al. | A stereo vision-based self-localization system | |
Surmann et al. | 3D mapping for multi hybrid robot cooperation | |
Jin et al. | Sensor fusion for fiducial tags: Highly robust pose estimation from single frame rgbd | |
CN104315998A (en) | Door opening degree judgment method based on depth image and azimuth angle | |
CN107797556B (en) | A method of realizing server start and stop using Xun Wei robots | |
Mohedano et al. | Robust 3d people tracking and positioning system in a semi-overlapped multi-camera environment | |
Shimura et al. | Research on person following system based on RGB-D features by autonomous robot with multi-kinect sensor | |
Bai et al. | Research on obstacles avoidance technology for UAV based on improved PTAM algorithm | |
Grosso et al. | Log-polar stereo for anthropomorphic robots | |
Asad et al. | Smartphone based guidance system for visually impaired person | |
Alouache et al. | An adapted block-matching method for optical flow estimation in catadioptric images | |
Rivera-Bautista et al. | Using color histograms and range data to track trajectories of moving people from a mobile robot platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||