CN107341442A - Motion control method, device, computer equipment and service robot
- Publication number
- CN107341442A CN107341442A CN201710365516.XA CN201710365516A CN107341442A CN 107341442 A CN107341442 A CN 107341442A CN 201710365516 A CN201710365516 A CN 201710365516A CN 107341442 A CN107341442 A CN 107341442A
- Authority
- CN
- China
- Prior art keywords
- node
- image
- feature
- map
- image frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention relates to a motion control method, apparatus, computer device and service robot. The method includes: acquiring an image frame; when face detection on the image frame determines that the image frame contains a face image, determining the destination node corresponding to the face image in a map; selecting from the map a start node matching the image frame, where the features of the image frame match the features of the node image corresponding to the start node; selecting, according to the start node and the destination node, a target-approaching motion path from the paths included in the map; and moving along the selected target-approaching motion path. The scheme provided by this application improves the accuracy of motion control.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a motion control method, apparatus, computer device and service robot.
Background technology
With the development of computer technology and the improvement of living standards, people increasingly rely on movable computer devices to help them complete various tasks. Traditionally, when a movable computer device performs a task, its motion control is realized through a sensor-based positioning method.

However, when the motion of a computer device is controlled through this traditional sensor-based positioning method, the sensing signal is easily affected by the environment during positioning, which can seriously degrade positioning accuracy and thereby reduce the accuracy of motion control.
The content of the invention
Based on this, it is necessary to provide a motion control method, apparatus, computer device and service robot that address the low accuracy of traditional motion control approaches.
A motion control method, the method including:

acquiring an image frame;

when face detection on the image frame determines that the image frame contains a face image, determining the destination node corresponding to the face image in a map;

selecting from the map a start node matching the image frame, where the features of the image frame match the features of the node image corresponding to the start node;

selecting, according to the start node and the destination node, a target-approaching motion path from the paths included in the map;

moving along the selected target-approaching motion path.
A motion control apparatus, the apparatus including:

an acquisition module for acquiring an image frame;

a determining module for determining the destination node corresponding to a face image in a map when face detection on the image frame determines that the image frame contains the face image;

a selection module for selecting from the map a start node matching the image frame, where the features of the image frame match the features of the node image corresponding to the start node;

a path selection module for selecting, according to the start node and the destination node, a target-approaching motion path from the paths included in the map;

a motion module for moving along the selected target-approaching motion path.
In one embodiment, the apparatus further includes:

a detection module for inputting the image frame into a convolutional neural network model; obtaining the feature maps output by multiple network layers included in the convolutional neural network model; inputting each feature map in turn into a memory neural network model; and obtaining from the memory neural network model the result of whether the image frame contains a face image.
In one embodiment, the map construction module is further configured to extract the features of an acquired node image; obtain the features of the node images corresponding to existing nodes in the map; determine a transformation matrix between the obtained features and the extracted features; and determine, according to the existing node and the transformation matrix, the node in the map corresponding to the acquired node image.
In one embodiment, the map construction module is further configured to calculate the similarity between the features of a node image corresponding to an existing node in the map and the features of an acquired node image; and, when that similarity exceeds a preset similarity threshold, generate in the map, according to the node corresponding to the acquired node image, a loop path that includes the existing node.
In one embodiment, the motion module is further configured to extract the features of the image frame; obtain the features of the node image corresponding to the start node; determine the spatial state difference between the features of the image frame and the features of the node image; and move according to the spatial state difference.
In one embodiment, the motion module is further configured to obtain, in sequence, the features of the node images corresponding to the nodes included in the target-approaching motion path; determine, in sequence, the spatial state differences between the features of the node images corresponding to adjacent nodes; and move according to the spatial state differences thus determined.
A computer device includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the motion control method.

A service robot includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the motion control method.
With the above motion control method, apparatus, computer device and service robot, once an image frame is acquired and a face image is detected in it, the destination node corresponding to the face image is determined automatically in the map, locating the target's position in the map. Then, based on the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, the start node matching the image frame is selected from the map, locating the device's current position in the map. According to the current node and the destination node, a target-approaching motion path can then be selected from the paths included in the map and followed. In this way, localization in the map is accomplished through feature matching between images, avoiding the environmental interference that affects sensing-signal-based positioning and improving the accuracy of motion control.
Brief description of the drawings
Fig. 1 is a diagram of the application environment of the motion control method in one embodiment;

Fig. 2 is a diagram of the internal structure of a computer device for implementing the motion control method in one embodiment;

Fig. 3 is a flow diagram of the motion control method in one embodiment;

Fig. 4 is a flow diagram of the face detection step in one embodiment;

Fig. 5 is a schematic diagram of performing face recognition on a face image in one embodiment;

Fig. 6 is a flow diagram of the map construction step in one embodiment;

Fig. 7 is a flow diagram of the map construction process in one embodiment;

Fig. 8 is a schematic diagram of a completed map in one embodiment;

Fig. 9 is a schematic diagram of selecting a target-approaching motion path in the map in one embodiment;

Fig. 10 is a flow diagram of the motion control method in another embodiment;

Fig. 11 is a structural block diagram of the motion control apparatus in one embodiment;

Fig. 12 is a structural block diagram of the motion control apparatus in another embodiment;

Fig. 13 is a structural block diagram of the motion control apparatus in yet another embodiment;

Fig. 14 is a structural block diagram of the motion control apparatus in a further embodiment.
Embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to illustrate the present invention, not to limit it.
Fig. 1 is a diagram of the application environment of the motion control method in one embodiment. As shown in Fig. 1, the motion control method is applied to a motion control system operating in an indoor scene. The motion control system includes a computer device 110 and a target 120. The computer device 110 can move toward the target 120 by performing the motion control method. Those skilled in the art will understand that the application environment shown in Fig. 1 is only a partial scene related to the present solution and does not limit the application environment of the solution; the motion control system is also applicable to outdoor open scenes and the like.
Fig. 2 is a schematic diagram of the internal structure of the computer device in one embodiment. As shown in Fig. 2, the computer device includes a processor, a non-volatile storage medium, an internal memory, a camera, a voice collection device, a speaker, a display screen, an input device and a motion device, all connected through a system bus. The non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions which, when executed by the processor, cause the processor to implement a motion control method. The processor provides computing and control capabilities and supports the operation of the whole computer device. The internal memory may also store computer-readable instructions which, when executed by the processor, cause the processor to perform a motion control method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a button, trackball or touchpad arranged on the housing, or an external keyboard, touchpad or mouse. The computer device is a movable electronic device, and may specifically be a service robot or the like. Those skilled in the art will understand that the structure shown in Fig. 2 is only a block diagram of part of the structure related to the present solution and does not limit the device to which the solution is applied; a specific device may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in Fig. 3, in one embodiment, a motion control method is provided. This embodiment is mainly illustrated by applying the method to the computer device in Fig. 2 above. Referring to Fig. 3, the motion control method specifically includes the following steps:
S302, acquire an image frame.
In one embodiment, the computer device may collect image frames through a camera under the camera's current field of view and obtain the collected image frames. The field of view of the camera changes as the posture and position of the computer device change.

In one embodiment, the computer device may collect image frames at a fixed or dynamic frame rate and obtain the collected image frames. A fixed or dynamic frame rate allows the image frames, when played at that frame rate, to form a continuous dynamic picture, so that the computer device can track a specific target in the continuous dynamic picture.

In one embodiment, the computer device may call the camera to start a shooting-and-scanning mode, scan the specific target under the current field of view in real time, generate image frames in real time at a certain frame rate, and obtain the generated image frames.

Here, the computer device is a movable electronic device, for example a robot. The camera may be a camera built into the computer device, or an external camera associated with the computer device. It may be a monocular camera, a binocular camera, an RGB-D (Red-Green-Blue-Depth) camera, or the like.
S304, when face detection on the image frame determines that the image frame contains a face image, determine the destination node corresponding to the face image in the map.
Here, the map is a feature distribution diagram built by the computer device from the image frames collected in the physical place. The computer device can build the map of the natural space based on SLAM (Simultaneous Localization And Mapping). The map built based on SLAM may specifically be a three-dimensional point map. A node is the projection into the map space of a position at which the computer device collected an image frame in the place. The destination node is the node to which the target's position in the natural space projects in the map. For example, if the target's coordinates in the place are A (x1, y1, z1), and A projects into the map space as B (x2, y2, z2), then B is the target's node in the map.
In one embodiment, after acquiring an image frame, the computer device may extract the image data contained in the frame and detect whether the image data contains face feature data. If the computer device detects face feature data in the image data, it judges that the image frame contains a face image. Alternatively, after acquiring the image frame, the computer device may send it to a server, which completes the face detection on the frame and returns a detection result indicating whether the frame contains a face image. The detection result may include the probability that a face image is present in the frame and the coordinate region of the face image.
In one embodiment, the map may include a number of nodes, each node having a one-to-one corresponding node image. The map may also include feature points extracted from the node images. The map including the feature points and nodes is a three-dimensional reconstruction of the scene in the place. Specifically, three-dimensional points in the three-dimensional scene of the place are mapped, by the projective transformation of a projection matrix, to pixels in the two-dimensional image frame on the imaging plane of the computer device's camera; the pixels in the two-dimensional image frame are in turn mapped, by the inverse projection transformation of the projection matrix, to the three-dimensional feature points in the reconstructed scene of the map.
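The projection and inverse projection described above can be sketched with a simple pinhole camera model. This is a minimal illustration, not the patent's actual projection matrix; the intrinsic parameters (fx, fy, cx, cy) are invented values for the example.

```python
def project(point_3d, fx, fy, cx, cy):
    """Project a 3D camera-frame point onto the image plane (pinhole model)."""
    x, y, z = point_3d
    u = fx * x / z + cx  # pixel column
    v = fy * y / z + cy  # pixel row
    return u, v

def back_project(u, v, depth, fx, fy, cx, cy):
    """Invert the projection given the pixel's depth (e.g. from an RGB-D camera)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Round trip: a 3D point projects to a pixel, and back-projecting the pixel
# with its depth recovers the original point.
p = (0.5, -0.25, 2.0)
u, v = project(p, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
q = back_project(u, v, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

The round trip works only because the depth is supplied on the way back, which is why reconstruction from a monocular camera additionally needs triangulation across frames.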
When the computer device detects that the image frame contains a face image, it may calculate the position of the face image in the map. Specifically, the computer device may determine the coordinate position of the face image in the image frame, calculate the position of the face image in the map according to the projection matrix adapted to the camera of the computer device, and search the nodes included in the map for the node corresponding to the calculated position, obtaining the destination node.

In one embodiment, when the computer device detects that the image frame contains a face image, it may extract background feature points of the background image in the frame, match the extracted background feature points against the feature points included in the map, and obtain the positions in the map of the feature points matching the extracted background feature points, so as to select in the map the node closest to those positions, obtaining the destination node.
In one embodiment, the image frames acquired by the computer device may be two or more frames. When the computer device detects a face image in the acquired frames, it may calculate the similarity matrix between any two frames, select matched face feature points from the face images contained in the frames used to calculate the similarity matrix, and determine the positions of those face feature points on the frames. From the calculated similarity matrix between the two frames and the positions of the selected face feature points in them, the computer device may then determine the positions of the face feature points in the place according to a triangulation algorithm. From the positions of the face feature points in the place, the computer device may determine their positions in the map and select the node closest to them, obtaining the destination node.
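The triangulation step can be illustrated in two dimensions: two camera positions each observe the bearing toward the same feature point, and intersecting the two rays recovers the point's position. This planar geometry is a simplified sketch of the idea, not the patent's actual algorithm.

```python
import math

def triangulate_2d(p1, bearing1, p2, bearing2):
    """Intersect two rays (observer position + bearing in radians) to locate a point."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 via the 2x2 cross-product form.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; point cannot be triangulated")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# A feature at (2, 2) seen from (0, 0) at 45 degrees and from (4, 0) at 135 degrees.
point = triangulate_2d((0.0, 0.0), math.pi / 4, (4.0, 0.0), 3 * math.pi / 4)
```

In three dimensions the same idea applies with an extra coordinate, and the rays rarely intersect exactly, so practical systems minimize the reprojection error instead.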
S306, select from the map the start node matching the image frame, where the features of the image frame match the features of the node image corresponding to the start node.
Here, a node image is an image collected by the computer device at the position in the place that projects to the corresponding node in the map. Image features may be one or a combination of color features, texture features and shape features. When building the map, the computer device may extract features from the node image corresponding to each node and store the extracted features in a database or cache, associated with the corresponding node.
In one embodiment, the computer device may traverse the features of the node images corresponding to the nodes in the map and judge whether the features of the currently traversed node image match the features of the image frame. When they match, the computer device takes the node corresponding to the traversed node image's features as the start node.
In one embodiment, when judging whether the features of the traversed node image match those of the image frame, the computer device may first calculate the similarity between the features of the traversed node image and the features of the image frame, and then judge whether the similarity is greater than or equal to a preset similarity; if so, they match; if not, they do not. The similarity may be a cosine similarity, or the Hamming distance between the perceptual hashes of the images.
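The perceptual-hash comparison mentioned above can be sketched with a minimal average hash: each pixel becomes one bit depending on whether it is brighter than the image mean, and the Hamming distance counts differing bits. The 4x4 patches below are invented data; real perceptual hashes typically first resize and apply a DCT, which this toy omits.

```python
def average_hash(pixels):
    """Simple average hash: 1 where a pixel is above the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes; smaller means more similar."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two nearly identical 4x4 grayscale patches, and one very different patch.
frame      = [10, 200, 30, 180, 15, 190, 25, 170, 12, 195, 28, 175, 14, 185, 22, 165]
node_image = [12, 198, 32, 178, 17, 188, 27, 168, 14, 193, 30, 173, 16, 183, 24, 163]
other      = [200, 10, 180, 30, 190, 15, 170, 25, 195, 12, 175, 28, 185, 14, 165, 22]

d_match = hamming_distance(average_hash(frame), average_hash(node_image))
d_other = hamming_distance(average_hash(frame), average_hash(other))
```

A small distance (here 0) marks a matching node image, while the dissimilar patch differs in every bit, so a fixed threshold on the distance implements the match test.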
In one embodiment, the computer device may select extreme points as feature points according to the pixel values of the pixels in the node image. The extreme points may be selected based on FAST (Features from Accelerated Segment Test), the Harris corner detection algorithm or similar algorithms, yielding the feature points of the node image, which are then represented by binary codes. The computer device may further represent the feature points contained in a node image by a one-dimensional image feature vector, obtaining a one-dimensional image feature vector in one-to-one correspondence with each node of the map.
In the same way as the node image features are characterized, the computer device can generate a one-dimensional image feature vector characterizing the acquired image frame. The computer device can then calculate the vector similarity between the generated vector and the one-dimensional image feature vector corresponding to each node of the map, and judge whether the vector similarity is greater than or equal to a preset vector similarity; if so, they match; if not, they do not.
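The vector-similarity test can be sketched with cosine similarity over the one-dimensional feature vectors. The vectors, node names and the 0.9 threshold below are illustrative assumptions, not values from the patent.

```python
import math

def cosine_similarity(v1, v2):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)

def find_start_node(frame_vec, node_vecs, threshold=0.9):
    """Return the first map node whose feature vector matches the frame above threshold."""
    for node, vec in node_vecs.items():
        if cosine_similarity(frame_vec, vec) >= threshold:
            return node
    return None

frame_vec = [0.9, 0.1, 0.4, 0.8]
node_vecs = {
    "n1": [0.0, 1.0, 0.0, 0.1],   # dissimilar node image
    "n2": [0.85, 0.15, 0.45, 0.75],  # nearly the same viewpoint
}
start = find_start_node(frame_vec, node_vecs)
```

Node "n2" exceeds the threshold while "n1" does not, so "n2" would be selected as the start node in this toy map.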
S308, select, according to the start node and the destination node, a target-approaching motion path from the paths included in the map.
Specifically, the map may include paths formed by the nodes in the map. Taking the start node as the starting point and the destination node as the end point, the computer device selects among the paths formed by nodes in the map and obtains the target-approaching motion path.
In one embodiment, the map may contain one or more paths that take the start node as starting point and the destination node as end point. When such a path is unique, the computer device may directly take it as the target-approaching motion path. When it is not unique, the computer device may randomly select one path as the target-approaching motion path, or may take the path containing the fewest nodes as the target-approaching motion path.
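Selecting the path with the fewest nodes between the start node and the destination node can be sketched as a breadth-first search over the map's node graph. The adjacency list below is a made-up example map, not one from the patent.

```python
from collections import deque

def fewest_node_path(graph, start, goal):
    """Breadth-first search: the first complete path found visits the fewest nodes."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # destination unreachable from the start node

# Small map with two routes from A to E; BFS returns the shorter A -> B -> E.
graph = {
    "A": ["B", "C"],
    "B": ["A", "E"],
    "C": ["A", "D"],
    "D": ["C", "E"],
    "E": ["B", "D"],
}
path = fewest_node_path(graph, "A", "E")
```

Because BFS explores paths in order of length, the first path that reaches the destination necessarily contains the fewest nodes, matching the selection rule above.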
S310, move along the selected target-approaching motion path.
Specifically, after selecting the target-approaching motion path, the computer device obtains the features of the node images corresponding to the nodes included in the path, determines the direction and distance of its current motion according to the change relationship between the features of the node images corresponding to successive nodes, and moves toward the target according to the determined direction and distance.
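Turning a pair of successive nodes into a motion command can be sketched as follows. For illustration this toy works directly from node coordinates, an assumption made here; in the scheme above the relative pose would instead be derived from the change relationship between node image features.

```python
import math

def motion_command(current, target):
    """Heading (radians from the +x axis) and distance from one node to the next."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

# Moving from node (1, 1) to node (4, 5): a 3-4-5 triangle,
# so the distance is 5 and the heading is atan2(4, 3).
heading, distance = motion_command((1.0, 1.0), (4.0, 5.0))
```

Issuing such a command for each consecutive node pair along the path moves the device node by node toward the destination.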
With the above motion control method, once an image frame is acquired and a face image is detected in it, the destination node corresponding to the face image is determined in the map, locating the target's position in the map. Then, taking as a basis the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, the start node matching the image frame is selected from the map, locating the device's current position in the map. According to the current node and the destination node, a target-approaching motion path can be selected from the paths included in the map and followed. In this way, localization in the map is accomplished through feature matching between images, avoiding the environmental interference caused by sensing-signal-based positioning and improving the accuracy of motion control.
In one embodiment, after step S302, the motion control method further includes a face detection step, which specifically includes:
S402, input the image frame into a convolutional neural network model.
Wherein, convolutional neural networks model is the complex network model for being interconnected by multilayer and being formed.Neutral net mould
Type may include multilayer feature conversion layer, and every layer of Feature Conversion layer has corresponding nonlinear change operator, every layer of non-linear change
It can be multiple to change operator, and a nonlinear change operator carries out non-linear change to the image of input in every layer of Feature Conversion layer
Change, obtain characteristic pattern (Feature Map) and be used as operation result.
Specifically, the convolutional neural network model is a model for extracting facial features, obtained by training on images that contain face images. After capturing an image frame, the computer device inputs the frame into the convolutional neural network model and uses it to extract facial features from the frame. The facial features may reflect one or more of: the person's sex, the facial contour, hair style, glasses, nose, mouth, and the distances between the facial organs.

In one embodiment, the convolutional neural network model is a model for extracting image features, obtained by training on images. After capturing an image frame, the computer device inputs the frame into the convolutional neural network model and uses it to extract image features from the frame.
S404: Obtain the feature maps output by multiple network layers included in the convolutional neural network model.

Specifically, the computer device can obtain the feature maps output by multiple network layers of the convolutional neural network model. A feature map is formed by the responses obtained by processing the input image with the nonlinear transformation operators; different network layers extract different features. The computer device can use the feature maps output by a convolutional neural network trained to extract facial features to determine the facial feature data corresponding to the input image, or use the feature maps output by a network trained to extract image features to determine the image feature data of the input image and then judge whether that data contains facial feature data.

For example, the computer device may process the image with a 52-layer deep residual network model and extract the feature maps output by the 4 fully connected layers included in that model as input to the subsequent stage.
S406: Input each feature map in sequence into a memory neural network model.

Here, the memory neural network model is a neural network model capable of comprehensively processing sequential input. The memory neural network model is a recurrent neural network model; specifically, it may be an LSTM (Long Short-Term Memory) network. The computer device can input each obtained feature map in sequence into the memory neural network model to perform facial feature detection.
S408: Obtain, from the output of the memory neural network model, the result of whether the image frame contains a face image.

Specifically, the computer device can obtain the face detection result produced by the memory neural network model through comprehensive processing of each input feature map. The face detection result includes the probability that a face image is present and the coordinate region of the face image within the image frame.

In one embodiment, after obtaining the face detection results, the computer device can also use the coordinate regions of the face images within the frame to filter out detection results whose overlapping region exceeds a preset overlap threshold, obtaining the coordinate regions of the face images in the frame from the detection results retained after filtering.

In one embodiment, the memory neural network model may move a rectangular window across the input feature map in a preset direction with a preset step size, performing a window scan. During scanning it extracts the facial feature data within the scanned window and, from the extracted facial features, obtains the probability that a face image is present in the scanned window. The coordinate regions, within the image frame, of the windows with the highest computed probabilities are stored, and processing continues on subsequently input feature maps.
Fig. 5 shows a schematic diagram of face recognition performed on a face image in one embodiment. Referring to Fig. 5, the memory neural network model used by the computer device scans and analyzes the input feature map with a rectangular window, obtaining the probability P_A that a face image is present in rectangular window A, the probability P_B that a face image is present in rectangular window B, and the probability P_C that a face image is present in rectangular window C. Here P_C > P_A > P_B, so the memory neural network model records the rectangular window C corresponding to P_C, continues scanning and analyzing subsequently input feature maps with the rectangular window, and synthesizes the repeated analyses to obtain the probability of a face image for each window. It then outputs the probability that a face image is present in the captured image frame and the coordinate region of the face image within the frame.
In this embodiment, image features are fully extracted by the multiple network layers included in the convolutional neural network model, and the features extracted by the multiple layers are then fed into the memory neural network model for comprehensive processing, making face detection more accurate.
In one embodiment, after step S304 the motion control method further includes a face recognition step, which specifically includes: extracting facial feature data from the face image; querying, according to the facial feature data, for a preset face image matching the face image; obtaining a target identity recognition result from the preset face image; and determining the service type associated with the target identity recognition result. After step S310, the motion control method further includes: providing a service trigger entrance corresponding to the service type.

Here, the target identity recognition result is data reflecting the target's identity. The target identity may be the target's name, social status, job information, and so on.
In one embodiment, a preset face image library is provided on the computer device, containing a number of preset face images. When the computer device detects that an image frame contains a face image, it can compare the face image in the frame with the preset face images in the library to detect whether the face image in the frame matches a preset face image. When they match, the computer device determines that the face image contained in the frame and the preset face image depict the same person, and obtains the target identity information corresponding to that preset face image as the target identity recognition result.

Here, a preset face image may be a real face image reflecting the corresponding target. It may be an image custom-selected by the target from the personal information the target has uploaded or the pictures the target has posted historically, or a picture automatically selected by the system through image analysis, serving as the corresponding preset face image.
In one embodiment, when detecting whether the face image in the image frame matches a preset face image, the computer device can specifically compute the similarity between the face image in the frame and the preset face image. The computer device may first extract the respective features of the face image in the frame and of the preset face image, then compute the difference between the two features: the larger the difference between the features, the lower the similarity; the smaller the difference, the higher the similarity. When computing the similarity between the face image in the frame and a preset face image, the computer device can use acceleration algorithms suited to graphics processors to improve computation speed.
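A minimal sketch of this difference-based matching, assuming face features are fixed-length vectors (the 4-dimensional vectors, identity names, and threshold below are toy values, not from the patent):

```python
# Hedged sketch: match a detected face against preset face images by feature
# difference -- larger difference means lower similarity. Real systems would
# use learned embeddings; these vectors are illustrative only.
import math

def similarity(f1, f2):
    """Map the Euclidean distance between two feature vectors into (0, 1]."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    return 1.0 / (1.0 + dist)   # smaller difference -> higher similarity

def match_preset(face, presets, threshold=0.5):
    """Return the id of the best-matching preset face, or None if no match."""
    best_id, best_sim = None, 0.0
    for pid, feat in presets.items():
        s = similarity(face, feat)
        if s > best_sim:
            best_id, best_sim = pid, s
    return best_id if best_sim >= threshold else None

presets = {"alice": [1.0, 0.0, 0.2, 0.1], "bob": [0.0, 1.0, 0.9, 0.8]}
detected = [0.9, 0.1, 0.2, 0.1]          # close to the "alice" preset
match = match_preset(detected, presets)
```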
In one embodiment, after determining that the image frame contains a face image, the computer device can extract facial feature data from the image data, then compare the extracted facial feature data with the facial feature data corresponding to each preset face image in the preset face image library to obtain the target identity recognition result.
In one embodiment, the image frame detected by the computer device may contain one or more face images. The computer device can determine the proportion of the frame occupied by each face image and extract facial feature data only from face images whose proportion exceeds a preset ratio; and/or determine the sharpness of each face image in the frame and extract facial feature data only from face images whose sharpness exceeds a sharpness threshold. The computer device then performs recognition on the face images from which facial features were extracted.
Further, after recognition yields a target identity recognition result, the computer device can look up the service type associated with that result. A service type is the category of service provided to the target, for example restaurant ordering service or hotel reception service. The service type may be a uniformly configured type, a type related to the target's identity, or a type related to the target's attributes.

In one embodiment, the computer device can configure service types in advance, associate the service types with target identifiers, store the configured service types in a database or file, and read them from the database or file when needed. After recognition yields a target identity recognition result, the computer device can pull the service type associated with the target identifier corresponding to that result.
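The association-and-pull step can be sketched as a simple lookup (the table below stands in for the database or file mentioned above; all identifiers and service names are hypothetical):

```python
# Minimal sketch: service types preconfigured against target identifiers,
# pulled after the target identity recognition result is obtained.

SERVICE_TYPES = {
    "guest_001": "restaurant ordering",
    "guest_002": "hotel reception",
}

def service_for(target_id, default="hotel reception"):
    """Pull the service type associated with a recognized target identifier,
    falling back to a uniformly configured type for unknown targets."""
    return SERVICE_TYPES.get(target_id, default)
```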
Further, after determining the service type associated with the target identity recognition result, the computer device can, after moving to the target, provide the target with a service trigger entrance corresponding to the determined service type. Specifically, the computer device can provide the service trigger entrance through a display screen, or provide a voice service entrance to the target through a loudspeaker and a sound collector.

In one embodiment, after moving to the destination node, the computer device can capture image frames to determine its current position, provide the service trigger entrance for the target, and receive the service parameters the target inputs through the display screen or the sound collector, thereby determining the object of the current service, the location of the current service, and the content of the current service.
In the above embodiment, when a face is detected in a captured image, the face is recognized to obtain the target's identity, and after the device moves to the target, the service entrance associated with that target can be provided to the target, greatly improving the efficiency of service provision.

In the above embodiments, the face recognition step and the determination of the service type associated with the target identity recognition result, described as handled by the computer device, may instead be handled by a server. The computer device can send the captured image frame to the server; after completing face detection and face recognition on the frame and determining the service type associated with the target identity recognition result, the server sends the target identity recognition result and the associated service type back to the computer device.
In one embodiment, before step S402, the motion control method further includes a map building step, which specifically includes:

S602: Select image frames from the image frames captured in time order.

Here, the selected image frames may be key frames among the captured image frames.

In one embodiment, the computer device can receive a user selection instruction and select image frames from the captured frames according to that instruction.

In one embodiment, the computer device can select image frames from the captured frames at a preset frame interval, for example selecting one frame after every 20 frames.
S604: Judge whether the features of the selected image frame meet the features of a preset node image.

Specifically, the features of a preset node image are preset features used to select node images. Meeting the features of a preset node image may mean that the number of feature points in the image that match feature points contained in an existing node image exceeds a preset number, or that the ratio of the image's matching feature points to the feature points contained in the existing node image is below a preset ratio.

For illustration, suppose the most recently added node image contains 100 feature points, the currently selected image frame contains 120 feature points, the preset number is 50, and the preset ratio is 90%. If the number of feature points in the currently selected frame that match feature points contained in the most recently added node image is 70, then the number of matching feature points exceeds the preset number, and it can be determined that the features of the currently selected frame meet the features of a preset node image.
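The qualification test in this example can be sketched as follows (the counts and thresholds come from the example above; the function name and the exact combination of the two criteria are my reading of the text):

```python
# Sketch of the keyframe test in S604: a frame qualifies as a node image when
# enough of its feature points match the most recently added node image, or
# when the match ratio is low enough that mostly new territory is visible.

def is_node_image(matched, existing_count, preset_number=50, preset_ratio=0.9):
    """Return True when the selected frame meets the preset node-image criteria.

    matched        -- number of feature points matching the last node image
    existing_count -- number of feature points in the last node image
    """
    if matched > preset_number:
        return True                                   # enough matching points
    return matched / existing_count < preset_ratio    # low overlap ratio

# Example from the text: 100 points in the last node image, 70 matches.
qualifies = is_node_image(matched=70, existing_count=100)
```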
S606: When the features of the selected image frame meet the features of a node image, take the selected frame as a node image.

In one embodiment, after receiving a map building instruction, the computer device can capture image frames at a fixed or dynamic frame rate, take as the initial node image a captured frame whose number of feature points exceeds a preset number threshold, determine the node in the map corresponding to that node image and the positions in the map corresponding to the feature points it contains, and build a local map. The computer device then continues selecting frames from the frames captured in time order, taking the selected frames that meet the features of a preset node image as subsequent node images, until a global map is obtained.
Specifically, the computer device can take the initial node image as a reference node image and track the feature points in the reference node image. When the number of matches between the feature points contained in a selected frame and those of the reference node image is below a first preset number but above a second preset number, the selected frame is taken as a node image. When the number of matches is below the second preset number, the most recently obtained node image becomes the reference node image, and image tracking continues in order to select node images.
S608: Determine the node in the map corresponding to the obtained node image.

Specifically, the computer device can determine the node onto which the location where the node image was captured projects in map space. The computer device can extract the features of a node image earlier in time order than the obtained node image, compute the transformation matrix between the features of the earlier node image and those of the obtained node image, obtain from that matrix the change in position from where the earlier node image was captured to where the obtained node image was captured, and from that change determine the node in the map corresponding to the obtained node image.

In one embodiment, step S608 includes: extracting the features of the obtained node image; obtaining the features of the node images corresponding to existing nodes in the map; determining the transformation matrix between the obtained features and the extracted features; and determining, according to the nodes and the transformation matrix, the node in the map corresponding to the obtained node image.
Here, a transformation matrix expresses the similarity transformation relationship between the features of one two-dimensional image and those of another. Specifically, the computer device can extract the features of the obtained node image, match them against the features of the node images corresponding to existing nodes in the map, and obtain the positions of the successfully matched features in the obtained node image and in the existing node image respectively. The obtained node image is the later-captured frame; the existing node image is the earlier-captured frame. From the positions of the matched features in the two successively captured frames, the computer device determines the transformation matrix between them, obtaining the change in position and attitude of the computer device between capturing the two frames; from the position and attitude of the earlier-captured image, the position and attitude of the later-captured image can then be obtained.

In one embodiment, the node images corresponding to existing nodes in the map may be one frame or multiple frames. The computer device can also compare the features of the obtained node image with the features of the node images corresponding to multiple existing nodes, obtain the transformation matrices between the later-captured frame and the multiple earlier-captured frames, and synthesize the multiple transformation matrices to obtain the position and attitude of the later-captured image, for example by taking a weighted average of the multiple computed position and attitude changes.
In this embodiment, the transformation matrix between node image features gives the transformation relationship between the currently obtained node image and the earlier existing node images, so the position in the map of the current image frame can be deduced from the positions in the map of earlier frames, achieving real-time localization.
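This deduce-from-the-previous-pose step can be sketched as simple 2-D dead reckoning (the rotation angle and forward distance stand in for what would be recovered from the transformation matrix; all numbers are illustrative):

```python
# Illustrative sketch of pose chaining: given the pose of the earlier node
# image in the map and the relative motion recovered from the transformation
# matrix between consecutive frames, compose to obtain the later frame's pose.
import math

def chain_pose(pose, d_theta, d_forward):
    """Compose a relative motion (rotate by d_theta, then advance d_forward)
    onto a 2-D pose (x, y, heading in radians)."""
    x, y, theta = pose
    theta = theta + d_theta                 # attitude change
    x += d_forward * math.cos(theta)        # position change along the
    y += d_forward * math.sin(theta)        # new heading
    return (x, y, theta)

pose0 = (0.0, 0.0, 0.0)                       # earlier node image's pose
pose1 = chain_pose(pose0, math.pi / 2, 1.0)   # turn 90 deg left, advance 1 m
```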
S610: Store the features of the obtained node image in correspondence with the determined node.

Specifically, the computer device can extract the features of the node image and store them in correspondence with the node corresponding to that node image, so that when image feature comparison is needed, the features of a node image can be looked up directly by its node, saving storage space and improving lookup efficiency.

In this embodiment, by capturing image frames itself and then processing the captured frames, the device can build the map automatically. This avoids the need for many staff with professional surveying skills to survey the environment manually, a process that demands high staff capability and is tedious, and it improves the efficiency of map building.
In one embodiment, after step S608 the motion control method further includes: computing the similarity between the features of the node images corresponding to existing nodes in the map and the features of the obtained node image; and, when that similarity exceeds a preset similarity threshold, generating in the map, according to the node corresponding to the obtained node image, a circular path that includes the existing node.

Specifically, when the computer device obtains a node image, it can compare the features of the newly added node image with the features of the node images corresponding to the nodes in the map, computing the similarity between the features of the newly added node image and those of the node images corresponding to existing nodes. When the similarity between the features of an existing node's node image and the features of the newly added node image exceeds the preset similarity threshold, the computer device can determine that the location in the place where the newly added node image was captured is consistent with the location where the existing node's node image was captured.
The computer device can then generate in the map a circular path that starts from the existing node, passes through the nodes added after that existing node, and, via the node corresponding to the obtained node image, returns to the existing node. Starting from the existing node, the computer device can then obtain in order the features of the node images corresponding to each node included on the circular path, determine in turn the transformation matrices between the features of the node images of adjacent nodes, and adjust backwards, according to the determined matrices, the features of the node images corresponding to each node on the circular path.

For example, the computer device starts from the first node image and adds node images in turn to build the local map. When it detects that the similarity between the features of the current fourth node image and those of the first node image exceeds the preset similarity threshold, it determines that the capture location of the fourth node image in the place is consistent with that of the first node image, and generates the circular path: first node image - second node image - third node image - first node image.
Here, the transformation matrix between the features of the first node image and those of the second node image is H1, the transformation matrix between the features of the second node image and those of the third node image is H2, and the transformation matrix between the features of the third node image and those of the fourth node image is H3. The computer device can transform the features of the first node image according to H3 and optimize the third node image according to the resulting image features, then transform the optimized third node image according to H2 and optimize the second node image according to the resulting image features.
In this embodiment, closed-loop detection is performed using the similarity between the features of a newly added node image and those of existing node images as the criterion. When a closed loop is detected, a circular path is generated in the map for subsequent closed-loop optimization, improving the accuracy of map building.
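The closed-loop test can be sketched with cosine similarity between feature vectors (the vectors and threshold are toy stand-ins for real image descriptors; the function names are mine):

```python
# Hedged sketch of loop-closure detection: compare a newly added node image's
# feature vector against existing nodes' features and report a closure when
# the similarity exceeds the preset similarity threshold.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def detect_closure(new_feat, node_feats, threshold=0.95):
    """Return the id of an existing node whose features match, else None."""
    for node_id, feat in node_feats.items():
        if cosine(new_feat, feat) > threshold:
            return node_id          # capture locations deemed consistent
    return None

existing = {1: [1.0, 0.0, 0.0], 2: [0.0, 1.0, 0.0], 3: [0.0, 0.0, 1.0]}
closure_at = detect_closure([0.99, 0.01, 0.0], existing)   # revisits node 1
```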
Fig. 7 shows a schematic flow diagram of the map building process in one embodiment. Referring to Fig. 7, the map building process includes three parts: tracking, mapping, and loop detection. After receiving a map building instruction, the computer device can capture image frames at a fixed or dynamic frame rate. After capturing a frame, it extracts the frame's feature points and matches them against the feature points of the node image corresponding to the most recently added node in the map. When the extracted feature points fail to match those of the node image corresponding to the newly added node, the computer device can reacquire a captured frame and relocalize.

When the extracted feature points successfully match the feature points of the node image corresponding to the newly added node in the map, the node in the map corresponding to the captured frame is estimated from the newly added node. The computer device can then track the feature points in the map that match the captured image and optimize the frame's corresponding node in the map according to the matched features. After the optimization for the captured image is completed, it judges whether the frame's feature points meet the feature points of a preset node image; if not, the computer device can reacquire a captured frame and perform feature point matching again.

If the frame's feature points meet the feature points of a preset node image, the computer device can take the frame as a newly added node image. The computer device can extract the feature points of the newly added node image, represent the extracted feature points in a preset unified format, and determine the positions of the new node image's feature points in the map using a triangulation algorithm, thereby updating the local map. It then performs local bundle adjustment and removes the redundant nodes whose node images have similarity above the preset similarity threshold.
After taking the frame as a newly added node image, the computer device can perform loop detection asynchronously. The features of the newly added node image are compared with the features of the node images corresponding to existing nodes. When the similarity between the features of the newly added node image and the features of an existing node's node image is above the preset similarity threshold, the computer device can determine that the capture location of the newly added node image in the place is consistent with the capture location of the existing node's node image, i.e., that a closed loop exists. The computer device can then, according to the node corresponding to the newly added node image, generate in the map a circular path containing the nodes with consistent positions, and perform closed-loop optimization and loop fusion. Finally, a global map containing feature points, nodes, and paths is obtained.
Fig. 8 shows a schematic diagram of a completed map in one embodiment. Referring to Fig. 8, the map is a feature distribution diagram built from sparse features. The diagram includes feature points 801, nodes 802, and the paths 803 formed between nodes. A feature point 801 is the projection in map space of the position, in the place, of a feature point on an object. A node 802 is the projection in map space of the position in the place where the computer device was located when capturing an image frame. A path 803 formed between nodes is the projection in map space of a path along which the computer device has moved in the place.
In one embodiment, step S306 includes: extracting the features of the image frame; obtaining the features of the node images corresponding to the nodes contained in the map; determining the similarity between the features of the image frame and the features of each node image; and selecting the node corresponding to the node image features with the highest similarity, obtaining the start node matching the image frame.

Specifically, when comparing the features of the node images corresponding to existing nodes in the map with the features of the image frame, the computer device can compute the difference between the two sets of image features: the larger the difference between the features, the lower the similarity; the smaller the difference, the higher the similarity. The similarity may be the cosine similarity, or the Hamming distance between the perceptual hash values of the images. After computing the similarity between the features of the node images corresponding to existing nodes in the map and the features of the image frame, the computer device selects the node corresponding to the node image features with the highest similarity, obtaining the start node matching the image frame.

In this embodiment, the current position in the map is located by similarity matching between the current image frame and the features of the node images corresponding to the nodes contained in the map, making self-localization more accurate.
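The selection in S306 can be sketched with the Hamming-distance-between-hashes variant mentioned above (the 8-bit integers are toy values standing in for real perceptual hashes; node ids and function names are mine):

```python
# Sketch of start-node selection: the node whose stored image hash is nearest
# the current frame's hash (fewest differing bits) is the best match.

def hamming(h1, h2):
    """Number of differing bits between two integer hash values."""
    return bin(h1 ^ h2).count("1")

def start_node(frame_hash, node_hashes):
    """Return the node id whose image hash is closest to the frame's hash,
    i.e. the node image with the highest similarity."""
    return min(node_hashes,
               key=lambda nid: hamming(frame_hash, node_hashes[nid]))

nodes = {"n1": 0b10110010, "n2": 0b10110011, "n3": 0b01001100}
chosen = start_node(0b10110111, nodes)    # nearest hash: node "n2"
```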
In one embodiment, before step S310, the motion control method also includes:Extract the feature of picture frame;Obtain
Take the feature of the node image corresponding to start node;Determine the space shape between the feature of picture frame and the feature of node image
State measures of dispersion;Moved according to spatiality measures of dispersion.
Wherein, spatiality measures of dispersion is that computer equipment is gathering the variable quantity of different picture frame time space states.
Spatiality measures of dispersion includes differences in spatial location amount and space angle measures of dispersion.Differences in spatial location amount is that computer equipment exists
Movement physically.For example computer equipment is gathering the first two field picture frame level when the second two field picture frame is gathered
0.5m is translated forward.Space angle measures of dispersion is rotation of the computer equipment in physical orientation, such as, computer equipment is being adopted
Collect the first two field picture frame 15 degree of rotate counterclockwise when the second two field picture frame is gathered.
Specifically, computer equipment can calculate picture frame feature and start node corresponding to node image feature it
Between transformation matrices, recover the movement position of computer equipment according to the transformation matrices that are calculated, decomposed from transformation matrices
Spin matrix and transposed matrix are obtained, the space between the feature of picture frame and the feature of node image is obtained according to spin matrix
Angle difference amount, the differences in spatial location amount between the feature of picture frame and the feature of node image is obtained according to transposed matrix.
Computer equipment can determine the direction of current kinetic further according to space angle measures of dispersion, be determined according to differences in spatial location amount current
The distance of motion, so as to move the distance determined according to the direction of determination.
In this embodiment, the spatial state difference between the currently captured image frame and the node image corresponding to the determined start node is used to move to the start node in the map, so that the device can then move toward the target along the selected motion path, ensuring accurate motion.
In one embodiment, step S310 includes: obtaining, in sequence, the feature of the node image corresponding to each node included in the motion path toward the target; determining, in sequence, the spatial state difference between the features of the node images of adjacent nodes; and moving according to the spatial state differences determined in sequence.
Specifically, the computer device may obtain the feature of the node image corresponding to the second node on the path, the one adjacent to the start node, and compute the transformation matrix between the feature of the node image of the start node and the feature of the node image of the second node. The computer device then decomposes this transformation matrix into a rotation matrix and a translation matrix, obtains the spatial angle difference between the two features from the rotation matrix and the spatial position difference from the translation matrix, determines the direction of the current motion from the spatial angle difference and the distance from the spatial position difference, and moves the determined distance in the determined direction to arrive at the second node in the map. From the second node it repeats the same procedure, determining the direction and distance of each motion and moving along the path node by node until the destination node is reached.
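The node-by-node traversal can be sketched as a loop over adjacent node pairs. The `relative_motion` callback is a hypothetical stand-in for the transform decomposition between adjacent node images described above:

```python
def follow_path(path_nodes, relative_motion):
    """Walk a motion path toward the target node by node.
    relative_motion(a, b) returns (angle_deg, distance_m) between the
    node images of adjacent nodes a and b (hypothetical callback)."""
    moves = []
    for current, nxt in zip(path_nodes, path_nodes[1:]):
        angle, dist = relative_motion(current, nxt)
        moves.append((nxt, angle, dist))  # turn by angle, advance dist
    return moves

# Toy geometry: nodes on a straight corridor, 1 m apart, no turning.
positions = {"start": 0.0, "n1": 1.0, "n2": 2.0, "target": 3.0}
rel = lambda a, b: (0.0, positions[b] - positions[a])
plan = follow_path(["start", "n1", "n2", "target"], rel)
print(plan[-1])  # ('target', 0.0, 1.0)
```

Each entry of the plan is one "determine direction and distance, then move" step; the loop ends when the destination node is the last waypoint reached.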
In this embodiment, moving step by step from the start node to the destination node according to the spatial state differences between the node images of adjacent nodes on the path avoids drifting off course and losing track of the current position during motion, ensuring accurate motion.
Fig. 9 is a schematic diagram of selecting a motion path toward the target in a map in one embodiment. Referring to Fig. 9, the diagram includes a destination node 901, a start node 902 and a motion path 903 toward the target. After determining the destination node 901 (the position of the target) and the start node 902 (the position of the device itself), the computer device selects the motion path 903 in the map with the start node 902 as the starting point and the destination node 901 as the end point.
As shown in Fig. 10, in a specific embodiment the motion control method includes the following steps:
S1002: select an image frame from the image frames captured in chronological order.
S1004: judge whether the feature of the selected image frame meets the criterion for a preset node image; if so, go to step S1006; if not, return to step S1002.
S1006: take the selected image frame as a node image.
S1008: extract the feature of the obtained node image; obtain the feature of the node image corresponding to an existing node in the map; determine the transformation matrix between the obtained feature and the extracted feature; determine, from the existing node and the transformation matrix, the node in the map corresponding to the obtained node image; and store the feature of the obtained node image under the determined node.
S1010: compute the similarity between the feature of the node image corresponding to an existing node in the map and the feature of the obtained node image; when this similarity exceeds a preset similarity threshold, generate in the map, via the node corresponding to the obtained node image, a loop path that includes the existing node.
S1012: obtain an image frame.
S1014: input the image frame into a convolutional neural network model; obtain the feature maps output by multiple network layers of the convolutional neural network model; input the feature maps one by one into a memory neural network model, and obtain the face detection result output by the memory neural network model.
S1016: judge whether the face detection result indicates that the image frame includes a face image; if so, go to step S1018; if not, return to step S1012.
S1018: extract face feature data from the face image; query, according to the face feature data, a preset face image that matches the face image; obtain a target identity recognition result from the preset face image; and determine the service type associated with the target identity recognition result.
S1020: determine the destination node in the map corresponding to the face image.
S1022: extract the feature of the image frame; obtain the features of the node images corresponding to the nodes included in the map; determine the similarity between the feature of the image frame and each node-image feature; select the node corresponding to the node-image feature with the highest similarity, obtaining the start node that matches the image frame.
S1024: select, according to the start node and the destination node, a motion path toward the target from the paths included in the map.
S1026: extract the feature of the image frame; obtain the feature of the node image corresponding to the start node; determine the spatial state difference between the feature of the image frame and the feature of the node image; move according to the spatial state difference.
S1028: obtain, in sequence, the feature of the node image corresponding to each node included in the motion path toward the target; determine, in sequence, the spatial state difference between the features of the node images of adjacent nodes; move according to the spatial state differences determined in sequence.
S1030: provide a service trigger entry corresponding to the service type.
In this embodiment, once an image frame is obtained and it is automatically detected that the frame includes a face image, the destination node corresponding to that face image is determined in the map, locating the target. Then, based on the match between the feature of the image frame and the features of the node images of the nodes in the map, the start node matching the image frame is selected from the map, locating the device itself. From the current node and the destination node a motion path toward the target can then be selected from the paths in the map. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference that affects sensor-signal positioning and improving the accuracy of motion control.
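The end-to-end flow of Fig. 10 can be sketched as a single control-loop pass. All four callbacks below are hypothetical stand-ins for the subsystems the steps describe (face detection, destination lookup, start-node matching, path selection):

```python
def motion_control_step(frame, detect_face, locate_target, match_start, plan_path):
    """One pass of the Fig. 10 control loop using pluggable subsystems."""
    face = detect_face(frame)
    if face is None:
        return None                  # no face found: keep capturing frames
    goal = locate_target(face)       # destination node in the map (S1020)
    start = match_start(frame)       # node whose image best matches (S1022)
    return plan_path(start, goal)    # path toward the target (S1024)

# Toy wiring over a 4-node corridor map.
path = motion_control_step(
    frame="frame-with-face",
    detect_face=lambda f: "alice" if "face" in f else None,
    locate_target=lambda face: "n3",
    match_start=lambda f: "n0",
    plan_path=lambda s, g: [s, "n1", "n2", g],
)
print(path)  # ['n0', 'n1', 'n2', 'n3']
```

The returned node list would then be consumed by the node-by-node motion of steps S1026 and S1028.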
As shown in Fig. 11, in one embodiment a motion control device 1100 is provided, including an acquisition module 1101, a determining module 1102, a selecting module 1103, a path-choosing module 1104 and a motion module 1105.
The acquisition module 1101 is configured to obtain an image frame.
The determining module 1102 is configured to determine, when face detection on the image frame finds that it includes a face image, the destination node in the map corresponding to the face image.
The selecting module 1103 is configured to select from the map a start node matching the image frame, where the feature of the image frame matches the feature of the node image corresponding to the start node.
The path-choosing module 1104 is configured to select, according to the start node and the destination node, a motion path toward the target from the paths included in the map.
The motion module 1105 is configured to move along the selected motion path toward the target.
With the motion control device 1100 above, once an image frame is obtained and it is automatically detected that the frame includes a face image, the destination node corresponding to that face image is determined in the map, locating the target. Then, based on the match between the feature of the image frame and the features of the node images of the nodes in the map, the start node matching the image frame is selected from the map, locating the device itself. From the current node and the destination node a motion path toward the target can then be selected from the paths in the map. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference that affects sensor-signal positioning and improving the accuracy of motion control.
As shown in Fig. 12, in one embodiment the motion control device 1100 further includes a detection module 1106.
The detection module 1106 is configured to input the image frame into a convolutional neural network model; obtain the feature maps output by multiple network layers of the convolutional neural network model; input the feature maps one by one into a memory neural network model; and obtain the result, output by the memory neural network model, of whether the image frame includes a face image.
In this embodiment, image features are fully extracted by the multiple network layers of the convolutional neural network model, and the features extracted by these layers are then fed into the memory neural network model for combined processing, making face detection more accurate.
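The idea of feeding per-layer feature maps sequentially into a memory model can be illustrated with a toy recurrent accumulator. This is only a structural sketch: a real system would use trained convolutional and recurrent (e.g. LSTM) weights, not the hand-picked constants here.

```python
import math

def sequential_face_score(feature_maps):
    """Fuse one feature map per network layer with a toy recurrent cell:
    each map is averaged, folded into a running hidden state (the 'memory'
    of earlier layers), and the final state is squashed to a probability."""
    h = 0.0
    for fmap in feature_maps:                # one 2D map per network layer
        mean_act = sum(map(sum, fmap)) / (len(fmap) * len(fmap[0]))
        h = math.tanh(0.5 * h + mean_act)    # carry earlier-layer evidence
    return 1.0 / (1.0 + math.exp(-h))        # face probability in (0, 1)

# Two small fabricated "feature maps" with fairly strong activations.
maps = [[[0.2, 0.8], [0.4, 0.6]],
        [[1.0, 0.9], [0.7, 0.8]]]
score = sequential_face_score(maps)
print(score > 0.5)  # thresholded face / no-face decision
```

The step S1016 judgement then reduces to comparing this score against a threshold.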
As shown in Fig. 13, in one embodiment the motion control device 1100 further includes an identification module 1107 and a service module 1108.
The identification module 1107 is configured to extract face feature data from the face image; query, according to the face feature data, a preset face image matching the face image; obtain a target identity recognition result from the preset face image; and determine the service type associated with the target identity recognition result.
The service module 1108 is configured to provide a service trigger entry corresponding to the service type.
In this embodiment, when a face is detected in a captured image, it is identified to obtain the identity of the target, so that after moving to the target the device can offer the target the service entry associated with it, greatly improving the efficiency of service provision.
As shown in Fig. 14, in one embodiment the motion control device 1100 further includes a map construction module 1109.
The map construction module 1109 is configured to select an image frame from the image frames captured in chronological order; judge whether the feature of the selected image frame meets the criterion for a preset node image; when it does, take the selected image frame as a node image; determine the node in the map corresponding to the obtained node image; and store the feature of the obtained node image under the determined node.
In this embodiment, the device captures image frames itself and builds the map automatically by processing them, avoiding the need for large numbers of staff with professional surveying skills to map the environment manually, a demanding and tedious task, and improving the efficiency of map construction.
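The node-image selection above is essentially keyframe filtering over the captured stream. The predicate below is a hypothetical stand-in for the patent's "preset node image" criterion (the patent does not specify the exact test; a novelty score over feature points is one plausible choice):

```python
def select_node_images(frames, is_keyframe):
    """Filter a time-ordered frame stream down to node images using a
    pluggable criterion is_keyframe(frame) -> bool."""
    return [f for f in frames if is_keyframe(f)]

# Toy criterion: keep frames whose "novelty" score exceeds 0.5.
frames = [{"id": 0, "novelty": 0.9},
          {"id": 1, "novelty": 0.1},
          {"id": 2, "novelty": 0.7}]
node_images = select_node_images(frames, lambda f: f["novelty"] > 0.5)
print([f["id"] for f in node_images])  # [0, 2]
```

Only the surviving frames get nodes in the map, which keeps the map compact while still covering the environment.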
In one embodiment, the map construction module 1109 is further configured to extract the feature of the obtained node image; obtain the feature of the node image corresponding to an existing node in the map; determine the transformation matrix between the obtained feature and the extracted feature; and determine, from the existing node and the transformation matrix, the node in the map corresponding to the obtained node image.
In this embodiment, the transformation matrix between node-image features gives the transform between the newly obtained node image and an earlier node image, so the position of the current image frame in the map can be estimated from the position of the earlier frame, achieving real-time positioning.
In one embodiment, the map construction module 1109 is further configured to compute the similarity between the feature of the node image corresponding to an existing node in the map and the feature of the obtained node image; and, when this similarity exceeds a preset similarity threshold, generate in the map, via the node corresponding to the obtained node image, a loop path that includes the existing node.
In this embodiment, loop closure is detected using the similarity between the feature of the newly added node image and the features of existing node images; when a loop is detected a loop path is generated in the map for subsequent closed-loop optimization, improving the accuracy of the constructed map.
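The loop-closure test above can be sketched as a similarity scan over existing node features. Cosine similarity is used here as an illustrative choice; the patent only requires some similarity measure with a preset threshold:

```python
def detect_loop(existing_features, new_feature, threshold=0.9):
    """Return the index of an existing node whose image feature is similar
    enough to the new node image to close a loop, else None."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)
    for i, feat in enumerate(existing_features):
        if cosine(feat, new_feature) > threshold:
            return i  # edge back to node i creates a loop path in the map
    return None

feats = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(detect_loop(feats, [0.98, 0.05, 0.0]))  # 0
print(detect_loop(feats, [0.5, 0.5, 0.7]))    # None
```

A returned index means the device has revisited a known place, and the resulting loop path is what later closed-loop optimization operates on.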
In one embodiment, the selecting module 1103 is further configured to extract the feature of the image frame; obtain the features of the node images corresponding to the nodes included in the map; determine the similarity between the feature of the image frame and each node-image feature; and select the node corresponding to the node-image feature with the highest similarity, obtaining the start node that matches the image frame.
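Start-node selection is an argmax over node-image similarities. A dot product stands in for the similarity measure here, purely for illustration:

```python
def pick_start_node(frame_feature, node_features):
    """Choose the map node whose stored image feature is most similar to
    the current frame's feature (dot product as a stand-in similarity)."""
    def sim(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(node_features,
               key=lambda node: sim(frame_feature, node_features[node]))

nodes = {"A": [0.9, 0.1], "B": [0.2, 0.8], "C": [0.5, 0.5]}
print(pick_start_node([1.0, 0.0], nodes))  # 'A'
```

The winning node localizes the device in the map and becomes the start node for path selection.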
In one embodiment, the motion module 1105 is further configured to extract the feature of the image frame; obtain the feature of the node image corresponding to the start node; determine the spatial state difference between the feature of the image frame and the feature of the node image; and move according to the spatial state difference.
In this embodiment, matching the current image frame against the features of the node images of the nodes in the map locates the current position in the map, making self-positioning more accurate.
In one embodiment, the motion module 1105 is further configured to obtain, in sequence, the feature of the node image corresponding to each node included in the motion path toward the target; determine, in sequence, the spatial state difference between the features of the node images of adjacent nodes; and move according to the spatial state differences determined in sequence.
In this embodiment, moving step by step from the start node to the destination node according to the spatial state differences between the node images of adjacent nodes on the path avoids losing track of the current position during motion, ensuring accurate motion.
A computer-readable storage medium is provided, on which computer-readable instructions are stored. When executed by a processor, the instructions implement the following steps: obtaining an image frame; when face detection on the image frame finds that it includes a face image, determining the destination node in the map corresponding to the face image; selecting from the map a start node matching the image frame, where the feature of the image frame matches the feature of the node image corresponding to the start node; selecting, according to the start node and the destination node, a motion path toward the target from the paths included in the map; and moving along the selected motion path toward the target.
When the computer-readable instructions stored on the above computer-readable storage medium are executed, once an image frame is obtained and it is automatically detected that the frame includes a face image, the destination node corresponding to that face image is determined in the map, locating the target. Then, based on the match between the feature of the image frame and the features of the node images of the nodes in the map, the start node matching the image frame is selected from the map, locating the device itself. From the current node and the destination node a motion path toward the target can then be selected from the paths in the map. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference that affects sensor-signal positioning and improving the accuracy of motion control.
In one embodiment, after obtaining the image frame, the computer-readable instructions further cause the processor to perform the following steps: inputting the image frame into a convolutional neural network model; obtaining the feature maps output by multiple network layers of the convolutional neural network model; inputting the feature maps one by one into a memory neural network model; and obtaining the result, output by the memory neural network model, of whether the image frame includes a face image.
In one embodiment, after determining the destination node in the map corresponding to the face image when face detection finds that the image frame includes a face image, the computer-readable instructions further cause the processor to perform the following steps: extracting face feature data from the face image; querying, according to the face feature data, a preset face image matching the face image; obtaining a target identity recognition result from the preset face image; and determining the service type associated with the target identity recognition result. After moving along the selected motion path toward the target, the instructions further cause the processor to provide a service trigger entry corresponding to the service type.
In one embodiment, before obtaining the image frame, the computer-readable instructions further cause the processor to perform the following steps: selecting an image frame from the image frames captured in chronological order; judging whether the feature of the selected image frame meets the criterion for a preset node image; when it does, taking the selected image frame as a node image; determining the node in the map corresponding to the obtained node image; and storing the feature of the obtained node image under the determined node.
In one embodiment, determining the node in the map corresponding to the obtained node image includes: extracting the feature of the obtained node image; obtaining the feature of the node image corresponding to an existing node in the map; determining the transformation matrix between the obtained feature and the extracted feature; and determining, from the existing node and the transformation matrix, the node in the map corresponding to the obtained node image.
In one embodiment, after determining the node in the map corresponding to the obtained node image, the computer-readable instructions further cause the processor to perform the following steps: computing the similarity between the feature of the node image corresponding to an existing node in the map and the feature of the obtained node image; and, when this similarity exceeds a preset similarity threshold, generating in the map, via the node corresponding to the obtained node image, a loop path that includes the existing node.
In one embodiment, selecting from the map a start node matching the image frame includes: extracting the feature of the image frame; obtaining the features of the node images corresponding to the nodes included in the map; determining the similarity between the feature of the image frame and each node-image feature; and selecting the node corresponding to the node-image feature with the highest similarity, obtaining the start node that matches the image frame.
In one embodiment, before moving along the selected motion path toward the target, the computer-readable instructions further cause the processor to perform the following steps: extracting the feature of the image frame; obtaining the feature of the node image corresponding to the start node; determining the spatial state difference between the feature of the image frame and the feature of the node image; and moving according to the spatial state difference.
In one embodiment, moving along the selected motion path toward the target includes: obtaining, in sequence, the feature of the node image corresponding to each node included in the motion path toward the target; determining, in sequence, the spatial state difference between the features of the node images of adjacent nodes; and moving according to the spatial state differences determined in sequence.
A computer device is provided, including a memory and a processor, the memory storing computer-readable instructions. When executed by the processor, the instructions cause the processor to perform the following steps: obtaining an image frame; when face detection on the image frame finds that it includes a face image, determining the destination node in the map corresponding to the face image; selecting from the map a start node matching the image frame, where the feature of the image frame matches the feature of the node image corresponding to the start node; selecting, according to the start node and the destination node, a motion path toward the target from the paths included in the map; and moving along the selected motion path toward the target.
With the computer device above, once an image frame is obtained and it is automatically detected that the frame includes a face image, the destination node corresponding to that face image is determined in the map, locating the target. Then, based on the match between the feature of the image frame and the features of the node images of the nodes in the map, the start node matching the image frame is selected from the map, locating the device itself. From the current node and the destination node a motion path toward the target can then be selected from the paths in the map. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference that affects sensor-signal positioning and improving the accuracy of motion control.
In one embodiment, after obtaining the image frame, the computer-readable instructions further cause the processor to perform the following steps: inputting the image frame into a convolutional neural network model; obtaining the feature maps output by multiple network layers of the convolutional neural network model; inputting the feature maps one by one into a memory neural network model; and obtaining the result, output by the memory neural network model, of whether the image frame includes a face image.
In one embodiment, after determining the destination node in the map corresponding to the face image when face detection finds that the image frame includes a face image, the computer-readable instructions further cause the processor to perform the following steps: extracting face feature data from the face image; querying, according to the face feature data, a preset face image matching the face image; obtaining a target identity recognition result from the preset face image; and determining the service type associated with the target identity recognition result. After moving along the selected motion path toward the target, the instructions further cause the processor to provide a service trigger entry corresponding to the service type.
In one embodiment, before obtaining the image frame, the computer-readable instructions further cause the processor to perform the following steps: selecting an image frame from the image frames captured in chronological order; judging whether the feature of the selected image frame meets the criterion for a preset node image; when it does, taking the selected image frame as a node image; determining the node in the map corresponding to the obtained node image; and storing the feature of the obtained node image under the determined node.
In one embodiment, determining the node in the map corresponding to the obtained node image includes: extracting the feature of the obtained node image; obtaining the feature of the node image corresponding to an existing node in the map; determining the transformation matrix between the obtained feature and the extracted feature; and determining, from the existing node and the transformation matrix, the node in the map corresponding to the obtained node image.
In one embodiment, after determining the node in the map corresponding to the obtained node image, the computer-readable instructions further cause the processor to perform the following steps: computing the similarity between the feature of the node image corresponding to an existing node in the map and the feature of the obtained node image; and, when this similarity exceeds a preset similarity threshold, generating in the map, via the node corresponding to the obtained node image, a loop path that includes the existing node.
In one embodiment, selecting from the map a start node matching the image frame includes: extracting the feature of the image frame; obtaining the features of the node images corresponding to the nodes included in the map; determining the similarity between the feature of the image frame and each node-image feature; and selecting the node corresponding to the node-image feature with the highest similarity, obtaining the start node that matches the image frame.
In one embodiment, before moving along the selected motion path toward the target, the computer-readable instructions further cause the processor to perform the following steps: extracting the feature of the image frame; obtaining the feature of the node image corresponding to the start node; determining the spatial state difference between the feature of the image frame and the feature of the node image; and moving according to the spatial state difference.
In one embodiment, moving along the selected motion path toward the target includes: obtaining, in sequence, the feature of the node image corresponding to each node included in the motion path toward the target; determining, in sequence, the spatial state difference between the features of the node images of adjacent nodes; and moving according to the spatial state differences determined in sequence.
A service robot is provided, including a memory and a processor, the memory storing computer-readable instructions. When executed by the processor, the instructions cause the processor to perform the following steps: obtaining an image frame; when face detection on the image frame finds that it includes a face image, determining the destination node in the map corresponding to the face image; selecting from the map a start node matching the image frame, where the feature of the image frame matches the feature of the node image corresponding to the start node; selecting, according to the start node and the destination node, a motion path toward the target from the paths included in the map; and moving along the selected motion path toward the target.
With the service robot above, once an image frame is obtained and it is automatically detected that the frame includes a face image, the destination node corresponding to that face image is determined in the map, locating the target. Then, based on the match between the feature of the image frame and the features of the node images of the nodes in the map, the start node matching the image frame is selected from the map, locating the robot itself. From the current node and the destination node a motion path toward the target can then be selected from the paths in the map. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference that affects sensor-signal positioning and improving the accuracy of motion control.
In one embodiment, after obtaining the image frame, the computer-readable instructions further cause the processor to perform the following steps: inputting the image frame into a convolutional neural network model; obtaining the feature maps output by multiple network layers of the convolutional neural network model; inputting the feature maps one by one into a memory neural network model; and obtaining the result, output by the memory neural network model, of whether the image frame includes a face image.
In one embodiment, after determining the destination node in the map corresponding to the face image when face detection finds that the image frame includes a face image, the computer-readable instructions further cause the processor to perform the following steps: extracting face feature data from the face image; querying, according to the face feature data, a preset face image matching the face image; obtaining a target identity recognition result from the preset face image; and determining the service type associated with the target identity recognition result. After moving along the selected motion path toward the target, the instructions further cause the processor to provide a service trigger entry corresponding to the service type.
In one embodiment, before obtaining the image frame, the computer-readable instructions further cause the processor to perform the following steps: selecting an image frame from the image frames captured in chronological order; judging whether the feature of the selected image frame meets the criterion for a preset node image; when it does, taking the selected image frame as a node image; determining the node in the map corresponding to the obtained node image; and storing the feature of the obtained node image under the determined node.
In one embodiment, determining the node in the map corresponding to the obtained node image includes: extracting the feature of the obtained node image; obtaining the feature of the node image corresponding to an existing node in the map; determining the transformation matrix between the obtained feature and the extracted feature; and determining, from the existing node and the transformation matrix, the node in the map corresponding to the obtained node image.
In one embodiment, after determining the node corresponding to the acquired node image in the map, the computer-readable instructions further cause the processor to perform the following steps: calculating the similarity between the features of the node image corresponding to an existing node in the map and the features of the acquired node image; and, when that similarity exceeds a preset similarity threshold, generating in the map, according to the node corresponding to the acquired node image, a circular path that includes the existing node.
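The loop-closure behavior above, where high similarity to an already-stored node image closes a circular path, can be sketched as follows. The feature vectors and the 0.9 threshold are illustrative assumptions.

```python
def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-12)

def detect_loop_closure(map_nodes, new_features, threshold=0.9):
    """Return the id of an existing node whose stored node-image features
    are similar enough to the new node image (signalling a loop), else None.
    A circular path would then be generated by linking the new node back
    to the returned existing node."""
    for node_id, feats in map_nodes.items():
        if cosine_similarity(feats, new_features) > threshold:
            return node_id
    return None

map_nodes = {"n0": [1.0, 0.0], "n1": [0.0, 1.0]}
loop_at = detect_loop_closure(map_nodes, [0.95, 0.05])
```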
In one embodiment, selecting the start node matching the image frame from the map includes: extracting features of the image frame; obtaining features of the node images corresponding to the nodes included in the map; determining the similarities between the features of the image frame and the features of the node images; and selecting the node corresponding to the node image whose features have the highest similarity, thereby obtaining the start node matching the image frame.
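Start-node selection is a nearest-neighbour search over the map's node images: pick the node whose image features are most similar to the current frame. In this sketch, negative squared distance stands in for the patent's unspecified similarity measure, and the node names and features are illustrative.

```python
def select_start_node(map_nodes, frame_features):
    """Return the map node whose node-image features score highest against
    the current image frame (higher score = more similar)."""
    def score(feats):
        # Negative squared Euclidean distance as a similarity score.
        return -sum((a - b) ** 2 for a, b in zip(feats, frame_features))
    return max(map_nodes, key=lambda node_id: score(map_nodes[node_id]))

map_nodes = {"door": [0.0, 0.0], "desk": [1.0, 1.0], "window": [2.0, 0.0]}
start = select_start_node(map_nodes, [0.9, 1.1])
```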
In one embodiment, before moving along the selected motion path toward the target, the computer-readable instructions further cause the processor to perform the following steps: extracting features of the image frame; obtaining features of the node image corresponding to the start node; determining the spatial state difference between the features of the image frame and the features of the node image; and moving according to the spatial state difference.
In one embodiment, moving along the selected motion path toward the target includes: sequentially obtaining the features of the node images corresponding to the nodes included in the motion path; sequentially determining the spatial state differences between the features of the node images of adjacent nodes; and moving according to the sequentially determined spatial state differences.
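Path following per the embodiments above walks the selected path node by node, deriving a motion command from the spatial state difference between adjacent node images. In this sketch each node image's "spatial state" is reduced to a 2-D position, so the difference is simply a displacement vector; that reduction is an illustrative simplification.

```python
def spatial_difference(state_a, state_b):
    # Spatial state difference between two adjacent node images,
    # here reduced to a 2-D displacement vector.
    return (state_b[0] - state_a[0], state_b[1] - state_a[1])

def follow_path(path_states):
    """Return the successive motion commands (displacements) needed to
    move along the path defined by the node images' spatial states."""
    moves = []
    for a, b in zip(path_states, path_states[1:]):
        moves.append(spatial_difference(a, b))
    return moves

# Spatial states of the node images along the selected motion path.
path = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (3.0, 2.0)]
moves = follow_path(path)
```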
Those of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope recorded in this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent claims. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent should be determined by the appended claims.
Claims (15)
1. A motion control method, the method comprising:
acquiring an image frame;
when face detection performed on the image frame determines that the image frame contains a face image, determining a destination node corresponding to the face image in a map;
selecting, from the map, a start node matching the image frame, wherein features of the image frame match features of a node image corresponding to the start node;
selecting, according to the start node and the destination node, a motion path toward the target from the paths included in the map; and
moving according to the selected motion path toward the target.
2. The method according to claim 1, wherein after the acquiring an image frame, the method further comprises:
inputting the image frame into a convolutional neural network model;
obtaining feature maps output by a plurality of network layers included in the convolutional neural network model;
sequentially inputting each of the feature maps into a memory neural network model; and
obtaining a result, output by the memory neural network model, indicating whether the image frame contains a face image.
3. The method according to claim 1, wherein after the performing face detection on the image frame to determine that the image frame contains a face image, and the determining a destination node corresponding to the face image in the map, the method further comprises:
extracting face feature data from the face image;
querying, according to the face feature data, a preset face image matching the face image;
obtaining a target identity recognition result according to the preset face image; and
determining a service type associated with the target identity recognition result;
and wherein after the moving according to the selected motion path toward the target, the method further comprises:
providing a service trigger entry corresponding to the service type.
4. The method according to claim 1, wherein before the acquiring an image frame, the method further comprises:
selecting an image frame from image frames collected in chronological order;
judging whether features of the selected image frame meet features of a preset node image;
when the features of the selected image frame meet the features of the node image, taking the selected image frame as an acquired node image;
determining a node corresponding to the acquired node image in the map; and
storing, at the determined node, the features of the acquired node image.
5. The method according to claim 4, wherein the determining a node corresponding to the acquired node image in the map comprises:
extracting features of the acquired node image;
obtaining features of a node image corresponding to an existing node in the map;
determining a transformation matrix between the obtained features and the extracted features; and
determining, according to the existing node and the transformation matrix, the node corresponding to the acquired node image in the map.
6. The method according to claim 4, wherein after the determining a node corresponding to the acquired node image in the map, the method further comprises:
calculating a similarity between features of a node image corresponding to an existing node in the map and the features of the acquired node image; and
when the similarity between the features of the node image corresponding to the existing node in the map and the features of the acquired node image exceeds a preset similarity threshold, generating, in the map according to the node corresponding to the acquired node image, a circular path that includes the existing node.
7. The method according to claim 1, wherein the selecting, from the map, a start node matching the image frame comprises:
extracting features of the image frame;
obtaining features of the node images corresponding to the nodes included in the map;
determining similarities between the features of the image frame and the features of the node images; and
selecting the node corresponding to the node image whose features have the highest similarity, to obtain the start node matching the image frame.
8. The method according to any one of claims 1 to 7, wherein before the moving according to the selected motion path toward the target, the method further comprises:
extracting features of the image frame;
obtaining features of the node image corresponding to the start node;
determining a spatial state difference between the features of the image frame and the features of the node image; and
moving according to the spatial state difference.
9. The method according to any one of claims 1 to 7, wherein the moving according to the selected motion path toward the target comprises:
sequentially obtaining features of the node images corresponding to the nodes included in the motion path toward the target;
sequentially determining spatial state differences between the features of the node images corresponding to adjacent nodes; and
moving according to the sequentially determined spatial state differences.
10. A motion control device, the device comprising:
an acquisition module, configured to acquire an image frame;
a determining module, configured to determine, when face detection performed on the image frame determines that the image frame contains a face image, a destination node corresponding to the face image in a map;
a selection module, configured to select, from the map, a start node matching the image frame, wherein features of the image frame match features of a node image corresponding to the start node;
a path selection module, configured to select, according to the start node and the destination node, a motion path toward the target from the paths included in the map; and
a motion module, configured to move according to the selected motion path toward the target.
11. The device according to claim 10, wherein the device further comprises:
an identification module, configured to extract face feature data from the face image; query, according to the face feature data, a preset face image matching the face image; obtain a target identity recognition result according to the preset face image; and determine a service type associated with the target identity recognition result; and
a service module, configured to provide a service trigger entry corresponding to the service type.
12. The device according to claim 10, wherein the device further comprises:
a map construction module, configured to select an image frame from image frames collected in chronological order; judge whether features of the selected image frame meet features of a preset node image; when the features of the selected image frame meet the features of the node image, take the selected image frame as an acquired node image; determine a node corresponding to the acquired node image in the map; and store, at the determined node, the features of the acquired node image.
13. The device according to claim 10, wherein the selection module is further configured to extract features of the image frame; obtain features of the node images corresponding to the nodes included in the map; determine similarities between the features of the image frame and the features of the node images; and select the node corresponding to the node image whose features have the highest similarity, to obtain the start node matching the image frame.
14. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 7.
15. A service robot, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710365516.XA CN107341442B (en) | 2017-05-22 | 2017-05-22 | Motion control method, motion control device, computer equipment and service robot |
PCT/CN2018/085065 WO2018214706A1 (en) | 2017-05-22 | 2018-04-28 | Movement control method, storage medium, computer apparatus, and service robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107341442A true CN107341442A (en) | 2017-11-10 |
CN107341442B CN107341442B (en) | 2023-06-06 |
Family
ID=60221306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710365516.XA Active CN107341442B (en) | 2017-05-22 | 2017-05-22 | Motion control method, motion control device, computer equipment and service robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107341442B (en) |
WO (1) | WO2018214706A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108236777A * | 2018-01-08 | 2018-07-03 | 深圳市易成自动驾驶技术有限公司 | Ball-picking method, ball-picking vehicle, and computer-readable storage medium |
WO2018214706A1 (en) * | 2017-05-22 | 2018-11-29 | 腾讯科技(深圳)有限公司 | Movement control method, storage medium, computer apparatus, and service robot |
CN109389156A * | 2018-09-11 | 2019-02-26 | 深圳大学 | Training method and device for an image localization model, and image localization method |
CN110646787A * | 2018-06-27 | 2020-01-03 | 三星电子株式会社 | Ego-motion estimation method and device, and model training method and device |
CN110794951A (en) * | 2018-08-01 | 2020-02-14 | 北京京东尚科信息技术有限公司 | Method and device for determining shopping instruction based on user action |
CN112914601A (en) * | 2021-01-19 | 2021-06-08 | 深圳市德力凯医疗设备股份有限公司 | Obstacle avoidance method and device for mechanical arm, storage medium and ultrasonic equipment |
CN113343739A * | 2020-03-02 | 2021-09-03 | 杭州萤石软件有限公司 | Repositioning method for a movable device, and movable device |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109579847B * | 2018-12-13 | 2022-08-16 | 歌尔股份有限公司 | Method and device for extracting key frames in simultaneous localization and mapping, and intelligent device |
CN111144275A (en) * | 2019-12-24 | 2020-05-12 | 中石化第十建设有限公司 | Intelligent running test system and method based on face recognition |
CN111241943B * | 2019-12-31 | 2022-06-21 | 浙江大学 | Scene recognition and loop closure detection method based on background targets and triplet loss |
CN111506104B (en) * | 2020-04-03 | 2021-10-01 | 北京邮电大学 | Method and device for planning position of unmanned aerial vehicle |
CN111815738B (en) * | 2020-06-15 | 2024-01-12 | 北京京东乾石科技有限公司 | Method and device for constructing map |
CN112528728B (en) * | 2020-10-16 | 2024-03-29 | 深圳银星智能集团股份有限公司 | Image processing method and device for visual navigation and mobile robot |
CN112464989B (en) * | 2020-11-02 | 2024-02-20 | 北京科技大学 | Closed loop detection method based on target detection network |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006195969A (en) * | 2004-12-14 | 2006-07-27 | Honda Motor Co Ltd | Apparatus for generating movement path for autonomous mobile robot |
CN102411368A * | 2011-07-22 | 2012-04-11 | 北京大学 | Active-vision face tracking method and tracking system for a robot |
US20130138246A1 (en) * | 2005-03-25 | 2013-05-30 | Jens-Steffen Gutmann | Management of resources for slam in large environments |
CN104236548A (en) * | 2014-09-12 | 2014-12-24 | 清华大学 | Indoor autonomous navigation method for micro unmanned aerial vehicle |
US20150227775A1 (en) * | 2012-09-11 | 2015-08-13 | Southwest Research Institute | 3-D Imaging Sensor Based Location Estimation |
JP2015180974A * | 2015-07-17 | 2015-10-15 | 株式会社ナビタイムジャパン | Information processing system including hierarchical map data, information processing program, information processing device, and information processing method |
US20160005229A1 (en) * | 2014-07-01 | 2016-01-07 | Samsung Electronics Co., Ltd. | Electronic device for providing map information |
CN106125730A * | 2016-07-10 | 2016-11-16 | 北京工业大学 | Robot navigation map construction method based on rat hippocampal spatial cells |
CN106574975A (en) * | 2014-04-25 | 2017-04-19 | 三星电子株式会社 | Trajectory matching using peripheral signal |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105911992B * | 2016-06-14 | 2019-02-22 | 广东技术师范学院 | Automatic path planning method for a mobile robot, and mobile robot |
CN107341442B (en) * | 2017-05-22 | 2023-06-06 | 腾讯科技(上海)有限公司 | Motion control method, motion control device, computer equipment and service robot |
Non-Patent Citations (4)
Title |
---|
EKMAN F. et al.: "Working day movement model", Proceedings of the 1st ACM SIGMOBILE Workshop on Mobility Models *
RAÚL MUR-ARTAL et al.: "ORB-SLAM: a Versatile and Accurate Monocular SLAM System", IEEE Transactions on Robotics *
SEDER M. et al.: "Dynamic window based approach to mobile robot motion control in the presence of moving obstacles", Proceedings 2007 IEEE International Conference on Robotics and Automation *
WU Chenshu: "Crowdsensing-based wireless indoor localization", Tsinghua University *
Also Published As
Publication number | Publication date |
---|---|
WO2018214706A1 (en) | 2018-11-29 |
CN107341442B (en) | 2023-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107341442A (en) | Motion control method, device, computer equipment and service robot | |
US11045705B2 (en) | Methods and systems for 3D ball trajectory reconstruction | |
CN109084746A (en) | Monocular mode for autonomous platform guidance systems with auxiliary sensors | |
JP2022504704A (en) | Target detection method, model training method, apparatus, device, and computer program | |
CN103703758B (en) | Mobile augmented reality system | |
CN104781849B (en) | Fast initialization for monocular visual simultaneous localization and mapping (SLAM) | |
Chen et al. | Rise of the indoor crowd: Reconstruction of building interior view via mobile crowdsourcing | |
CN102609942B (en) | Mobile camera localization using depth maps | |
CN108492316A (en) | Terminal positioning method and device | |
CN111126304A (en) | Augmented reality navigation method based on indoor natural scene image deep learning | |
CN107423398A (en) | Interaction method and device, storage medium, and computer equipment | |
CN107888828A (en) | Spatial localization method and device, electronic device, and storage medium | |
CN108200334B (en) | Image shooting method and device, storage medium, and electronic equipment | |
CN107886120A (en) | Method and apparatus for target detection and tracking | |
CN109920055A (en) | Construction method and device for a 3D visual map, and electronic device | |
CN108805917A (en) | Spatial positioning method, medium, device, and computing device | |
CN109298629A (en) | Fault-tolerant robust tracking for autonomous and non-autonomous positional awareness | |
CN111160111B (en) | Human body key point detection method based on deep learning | |
CN110648363A (en) | Camera pose determination method and device, storage medium, and electronic device | |
CN110111388A (en) | Three-dimensional object pose parameter estimation method and visual apparatus | |
CN106530407A (en) | Three-dimensional panoramic stitching method, device, and system for virtual reality | |
US11373329B2 (en) | Method of generating 3-dimensional model data | |
CN109063549A (en) | Moving object detection method for high-resolution aerial video based on a deep neural network | |
WO2024060978A1 (en) | Key point detection model training method and apparatus, and virtual character driving method and apparatus | |
CN110889361A (en) | ORB-feature visual odometry learning method and device based on image sequences | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||