CN107341442B - Motion control method, motion control device, computer equipment and service robot - Google Patents

Motion control method, motion control device, computer equipment and service robot

Info

Publication number
CN107341442B
CN107341442B
Authority
CN
China
Prior art keywords
node
image
map
nodes
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710365516.XA
Other languages
Chinese (zh)
Other versions
CN107341442A (en)
Inventor
孟宾宾 (Meng Binbin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shanghai Co Ltd
Original Assignee
Tencent Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shanghai Co Ltd filed Critical Tencent Technology Shanghai Co Ltd
Priority to CN201710365516.XA
Publication of CN107341442A
Priority to PCT/CN2018/085065 (WO2018214706A1)
Application granted
Publication of CN107341442B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/167 Detection; Localisation; Normalisation using comparisons between temporally consecutive images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a motion control method, a motion control device, computer equipment and a service robot, wherein the method comprises the following steps: acquiring an image frame; when face detection on the image frame determines that the image frame includes a face image, determining the target node corresponding to the face image in a map; selecting from the map a starting node matching the image frame, where the features of the image frame match the features of the node image corresponding to the starting node; selecting a motion path toward the target from the paths included in the map according to the starting node and the target node; and moving along the selected motion path toward the target. The scheme provided by the application improves the accuracy of motion control.

Description

Motion control method, motion control device, computer equipment and service robot
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a motion control method, a motion control device, a computer device, and a service robot.
Background
With the development of computer technology and the improvement of living standards, people increasingly rely on mobile computer devices to accomplish various tasks. In conventional task execution by a mobile computer device, the motion of the device is controlled based on sensor positioning.
However, when the motion of a computer device is controlled using conventional sensor-based positioning, the sensing signal is easily affected by the surrounding environment, which seriously degrades positioning accuracy and therefore reduces the accuracy of motion control.
Disclosure of Invention
Based on this, it is necessary to provide a motion control method, apparatus, computer device and service robot that address the low accuracy with which conventional motion control methods control the motion of a computer device.
A method of motion control, the method comprising:
acquiring an image frame;
when face detection on the image frame determines that the image frame includes a face image, determining the target node corresponding to the face image in a map;
selecting from the map a starting node matching the image frame, where the features of the image frame match the features of the node image corresponding to the starting node;
selecting a motion path toward the target from the paths included in the map according to the starting node and the target node;
and moving along the selected motion path toward the target.
A motion control apparatus, the apparatus comprising:
an acquisition module for acquiring an image frame;
a determining module for determining, when face detection on the image frame determines that the image frame includes a face image, the target node corresponding to the face image in a map;
a selecting module for selecting from the map a starting node matching the image frame, where the features of the image frame match the features of the node image corresponding to the starting node;
the selecting module also being for selecting a motion path toward the target from the paths included in the map according to the starting node and the target node;
and a motion module for moving along the selected motion path toward the target.
In one embodiment, the apparatus further comprises:
the detection module is used for inputting the image frame into a convolutional neural network model; acquiring the feature maps output by a plurality of network layers included in the convolutional neural network model; sequentially inputting each feature map into a memory neural network model; and obtaining from the memory neural network model a result indicating whether the image frame includes a face image.
In one embodiment, the map construction module is further configured to extract features of the acquired node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired feature and the extracted feature; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
In one embodiment, the map construction module is further configured to calculate a similarity between a feature of a node image corresponding to an existing node in the map and the obtained feature of the node image; when the similarity between the features of the node images corresponding to the existing nodes in the map and the obtained features of the node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the obtained nodes corresponding to the node images.
In one embodiment, the motion module is further configured to extract features of the image frame; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
In one embodiment, the motion module is further configured to sequentially obtain the features of the node images corresponding to each node included in the motion path toward the target; sequentially determine the spatial state difference amounts between the features of the node images corresponding to adjacent nodes; and move according to the sequentially determined spatial state difference amounts.
A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of a motion control method.
A service robot comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the motion control method.
After an image frame is acquired, the motion control method, the motion control device, the computer equipment and the service robot can, upon detecting that the image frame includes a face image, automatically determine the target node corresponding to the face image in the map, locating the target in the map. A starting node matching the image frame is then selected from the map according to the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, locating the device's own current position in the map. A motion path toward the target is then selected from the paths included in the map according to the current node and the target node, and the device moves along it. In this way, positioning in the map is completed through feature matching between images, the environmental interference inherent in sensing-signal positioning is avoided, and the accuracy of motion control is improved.
Drawings
FIG. 1 is a diagram of an application environment for a motion control method in one embodiment;
FIG. 2 is an internal block diagram of a computer device for implementing a motion control method in one embodiment;
FIG. 3 is a flow chart of a motion control method in one embodiment;
FIG. 4 is a flowchart illustrating steps of face detection in one embodiment;
FIG. 5 is a schematic diagram of face recognition of a face image in one embodiment;
FIG. 6 is a flow diagram of the steps for constructing a map in one embodiment;
FIG. 7 is a flow diagram of a map creation process in one embodiment;
FIG. 8 is a schematic diagram of creating a completed map in one embodiment;
FIG. 9 is a schematic diagram of selecting a motion path toward the target in a map in one embodiment;
FIG. 10 is a flow chart of a motion control method according to another embodiment;
FIG. 11 is a block diagram of a motion control device in one embodiment;
FIG. 12 is a block diagram of a motion control apparatus in another embodiment;
FIG. 13 is a block diagram of a motion control apparatus according to yet another embodiment;
FIG. 14 is a block diagram showing the structure of a motion control apparatus in still another embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
FIG. 1 is a diagram of an application environment for a motion control method in one embodiment. As shown in FIG. 1, the motion control method is applied to a motion control system. Here the motion control system is applied to an indoor scene. The motion control system includes a computer device 110 and a target 120. Computer device 110 may move toward target 120 by performing the motion control method. It will be appreciated by those skilled in the art that the application environment shown in FIG. 1 is only part of the scenario related to the present application and does not limit its application environment; the motion control system may also be applied to outdoor open scenes, etc.
FIG. 2 is a schematic diagram of the internal structure of a computer device in one embodiment. As shown in FIG. 2, the computer device includes a processor, a non-volatile storage medium, an internal memory, a camera, a sound collection device, a speaker, a display screen, an input device, and a motion device, all connected through a system bus. The non-volatile storage medium of the computer device stores an operating system and may further store computer readable instructions that, when executed by the processor, cause the processor to implement a motion control method. The processor provides computing and control capabilities to support the operation of the entire computer device. The internal memory may also store computer readable instructions that, when executed by the processor, cause the processor to perform the motion control method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, trackball or touchpad arranged on the device housing, or an external keyboard, touchpad or mouse. The computer device is a mobile electronic device, such as a service robot. It will be appreciated by those skilled in the art that the structure shown in FIG. 2 is merely a block diagram of part of the structure related to the present application and does not limit the terminal to which the present application is applied; a particular terminal may include more or fewer components than shown, combine some components, or arrange components differently.
In one embodiment, as shown in FIG. 3, a motion control method is provided. The present embodiment is mainly exemplified by the application of the method to the computer device in fig. 2. Referring to fig. 3, the motion control method specifically includes the steps of:
s302, acquiring an image frame.
In one embodiment, the computer device may capture an image frame through the camera under the camera's current field of view and obtain the captured image frame. The field of view of the camera changes with the pose and position of the computer device.
In one embodiment, the computer device may capture image frames at a fixed or dynamic frame rate and obtain the captured image frames. The fixed or dynamic frame rate enables the image frames, when played at that frame rate, to form a continuous dynamic picture in which the computer device can track a specific target.
In one embodiment, the computer device may invoke the camera to turn on a camera scan mode, scan a specific target in the current field of view in real time, generate image frames in real time at a certain frame rate, and obtain the generated image frames.
The computer device is a movable electronic device, such as a robot. The camera may be a camera internal to the computer device or an external camera associated with the computer device. The camera may be a monocular camera, a binocular camera, an RGB-D (Red-Green-Blue-Depth) camera, etc.
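For illustration, the frame acquisition in S302 might look like the following minimal Python sketch using OpenCV; the camera device index 0 and the process_frame hand-off are assumptions for illustration, not details from the patent.

```python
import cv2

def process_frame(frame):
    """Hypothetical hand-off to S304 and onward (face detection, etc.)."""
    pass

cap = cv2.VideoCapture(0)  # camera device index 0 is an assumption
try:
    while cap.isOpened():
        ok, frame = cap.read()  # one image frame under the current field of view
        if not ok:
            break
        process_frame(frame)
finally:
    cap.release()
```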
S304, when face detection on the image frame determines that the image frame includes a face image, determining the target node corresponding to the face image in the map.
Here the map is a feature distribution map constructed by the computer device from image frames acquired in the natural space. The computer device may construct a corresponding map for the natural space based on SLAM (Simultaneous Localization And Mapping). The map constructed based on SLAM may specifically be a three-dimensional point map. A node is a position in map space onto which the computer device projects the position in natural space where an image frame was acquired. The target node is the node onto which the position of the target in natural space is projected in the map. For example, if the coordinates of the target in natural space are A (x1, y1, z1), and projecting A into map space yields the coordinates B (x2, y2, z2), then B is the node of the target in the map.
In one embodiment, after acquiring an image frame, the computer device may extract the image data included in the image frame and detect whether the image data contains face feature data. If the computer device detects that the image data contains face feature data, it judges that the image frame includes a face image. Alternatively, after acquiring the image frame, the computer device sends the image frame to a server; the server completes the face detection process on the image frame and returns to the computer device the detection result of whether the image frame includes a face image. The detection result may include the probability that a face image exists in the image frame and the coordinate region of the face image.
In one embodiment, a map may include a plurality of nodes, each having a one-to-one correspondence of node images. The map may also include feature points extracted from the node images. The map including feature points and nodes is a three-dimensional reconstruction of a scene in natural space. Specifically, three-dimensional points in a three-dimensional scene in a natural space are subjected to projection transformation of a projection matrix to obtain pixel points in a two-dimensional image frame of a camera shooting plane of the computer equipment, and the pixel points in the two-dimensional image frame are subjected to projection inverse transformation of the projection matrix to obtain three-dimensional feature points in a three-dimensional reconstruction scene in a map.
The computer device may calculate a position of the face image in the map upon detecting that the face image is included in the image frame. Specifically, the computer device may determine a coordinate position of the face image in the image frame, calculate a position of the face image in the map according to a projection matrix adapted to a camera of the computer device, and find a node corresponding to the calculated position from nodes included in the map, to obtain the target node.
In one embodiment, when detecting that the image frame includes a face image, the computer device may extract a background feature point of a background image in the image frame, match the extracted background feature point with a feature point included in the map, and obtain a position of the feature point matched with the extracted background feature point in the map, so as to select a node closest to the position in the map, and obtain the target node.
In one embodiment, the image frames acquired by the computer device may be two or more image frames. When the computer device detects that the acquired images contain a face image, it may calculate the similarity matrix between any two image frames, select matched face feature points from the face images included in the image frames used to calculate the similarity matrix, and determine the positions of these face feature points on the image frames. According to the calculated similarity matrix between the two image frames and the positions of the selected face feature points on the two frames, the computer device can determine the position of each face feature point in natural space by a triangulation (triangular ranging) algorithm. The computer device can then determine the position of the face feature point in the map from its position in natural space, select the node closest to that position in the map, and obtain the target node.
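As an illustration of the triangulation step, the following Python sketch uses OpenCV's triangulatePoints; the projection matrices and the snapping to the nearest node are assumptions about how the step could be realized, not the patent's prescribed API.

```python
import cv2
import numpy as np

def locate_in_space(P1, P2, x1, x2):
    # P1, P2: 3x4 camera projection matrices at the two acquisition poses
    # x1, x2: pixel coordinates of the matched face feature point in each frame
    pts1 = np.float32(x1).reshape(2, 1)
    pts2 = np.float32(x2).reshape(2, 1)
    X_h = cv2.triangulatePoints(np.float32(P1), np.float32(P2), pts1, pts2)
    return (X_h[:3] / X_h[3]).ravel()  # 3-D natural-space point (homogeneous -> Euclidean)

# The returned point would then be projected into map space and snapped to the
# nearest node to obtain the target node.
```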
S306, selecting from the map a starting node matching the image frame, where the features of the image frame match the features of the node image corresponding to the starting node.
Wherein the node image is an image acquired by the computer device at a location in natural space where a projected relationship exists with a node in the map. The features of the image may be one or a combination of several of color features, texture features and shape features. The computer device may extract features from the node images corresponding to the nodes in the map when constructing the map, and store the extracted features of the node images in a database or cache with respect to the corresponding nodes.
In one embodiment, the computer device may traverse the features of the node images corresponding to each node in the map, and determine whether the traversed features of the node images match the features of the image frames. The computer device may obtain a node corresponding to the feature of the traversed node image as the starting node when it is determined that the feature of the traversed node image matches the feature of the image frame.
In one embodiment, when judging whether the features of a traversed node image match the features of the image frame, the computer device may first calculate the similarity between the two and judge whether the similarity is greater than or equal to a preset similarity: if so, they match; if not, they do not match. The similarity may be cosine similarity, or the Hamming distance between the images' perceptual hash values.
In one embodiment, the computer device may select extreme points as feature points according to the pixel values of the pixels in the node image. The computer device can select extreme points based on an algorithm such as FAST (Features from Accelerated Segment Test) or the Harris corner detection algorithm to obtain the feature points of the node image, and represent the obtained feature points with binary codes. The computer device can represent the feature points included in a node image as a one-dimensional image feature vector stored in one-to-one correspondence with the nodes of the map.
The computer device may generate a one-dimensional image feature vector characterizing the acquired image frame in the same manner used to characterize the node images. The computer device can then calculate the vector similarity between the generated one-dimensional image feature vector and the one-dimensional image feature vector corresponding to each node of the map, and judge whether the vector similarity is greater than or equal to a preset vector similarity: if so, they match; if not, they do not match.
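A minimal sketch of this start-node matching, assuming ORB (FAST keypoints with binary descriptors) and a Hamming-distance matcher as stand-ins for the patent's feature vectors; the node_images store and the distance cutoff 40 are illustrative assumptions.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(frame_gray, node_image_gray):
    # count strong binary-descriptor matches between frame and node image
    _, d1 = orb.detectAndCompute(frame_gray, None)
    _, d2 = orb.detectAndCompute(node_image_gray, None)
    if d1 is None or d2 is None:
        return 0
    return sum(1 for m in matcher.match(d1, d2) if m.distance < 40)

def select_start_node(frame_gray, node_images):
    # node_images: hypothetical dict mapping node -> grayscale node image
    return max(node_images, key=lambda n: match_score(frame_gray, node_images[n]))
```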
S308, selecting a motion path toward the target from the paths included in the map according to the starting node and the target node.
Specifically, the map may include paths formed by the nodes in the map. Taking the starting node as the start point and the target node as the end point, the computer device may select a path from the paths formed by the nodes in the map, obtaining a motion path toward the target.
In one embodiment, the map may contain one or more paths starting at the starting node and ending at the target node. When the path from the starting node to the target node is unique, the computer device may directly take that path as the motion path toward the target. When it is not unique, the computer device may randomly select one path, or take the path that includes the fewest nodes, as the motion path toward the target.
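When the path is not unique, picking the path with the fewest nodes is exactly what breadth-first search yields on an unweighted node graph. A sketch under that reading (the adjacency-list representation is an assumption):

```python
from collections import deque

def fewest_node_path(adjacency, start, target):
    # adjacency: dict mapping node -> iterable of neighboring nodes
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path  # first hit in BFS = fewest nodes
        for nxt in adjacency.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path from start to target
```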
S310, moving along the selected motion path toward the target.
Specifically, after selecting the motion path toward the target, the computer device acquires the features of the node images corresponding to the nodes included in the path, determines its current direction and distance of movement from the change relationship between the features of the node images corresponding to adjacent nodes, and moves toward the target according to the determined direction and distance.
According to the above motion control method, after an image frame is acquired and a face image is detected in it, the target node corresponding to the face image is determined in the map, locating the target in the map. A starting node matching the image frame is then selected from the map based on the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, locating the device's own current position in the map. A motion path toward the target can then be selected from the paths included in the map according to the current node and the target node, and the device moves along it. In this way, positioning in the map is completed through feature matching between images, the environmental interference inherent in sensing-signal positioning is avoided, and the accuracy of motion control is improved.
In one embodiment, after step S302, the motion control method further includes a step of face detection, where the step of face detection specifically includes:
s402, inputting the image frame into a convolutional neural network model.
The convolutional neural network model is a complex network model formed by interconnecting multiple layers. The model may include a plurality of feature transformation layers, each with corresponding nonlinear mapping operators; each layer may have multiple such operators, and in each feature transformation layer a nonlinear mapping operator transforms the input image to produce a feature map (Feature Map) as its operation result.
Specifically, the convolutional neural network model is a model for extracting face features, which is obtained by learning and training by taking an image including a face image as training data. After the computer equipment acquires the image frame, the image frame is input into a convolutional neural network model, and face features of the image frame are extracted by using the convolutional neural network model. The facial features may be one or more features for reflecting the sex of a person, the contour of a person's face, hairstyles, glasses, nose, mouth, and distance between facial organs, among others.
In one embodiment, the convolutional neural network model is a model obtained by learning and training with images as training data and used for extracting image features. After the computer equipment acquires the image frame, inputting the image frame into a convolutional neural network model, and extracting image features of the image frame by using the convolutional neural network model.
S404, obtaining the feature maps output by a plurality of network layers included in the convolutional neural network model.
Specifically, the computer device may obtain the feature maps output by a plurality of network layers included in the convolutional neural network model. A feature map is composed of the response values obtained when a nonlinear mapping operator processes the input image, and different network layers extract different features. Using the feature maps output by a convolutional neural network that extracts face features, the computer device can determine the face feature data corresponding to the input image. Using the feature maps output by a convolutional neural network that extracts general image features, the computer device can determine the image feature data corresponding to the input image and then judge whether that image feature data includes face feature data.
For example, the computer device may employ a 52-layer deep residual network model for image processing, extracting the feature maps output by the 4 fully connected layers included in the deep residual network model as the subsequent input.
S406, sequentially inputting the feature maps into the memory neural network model.
The memory neural network model is a neural network model capable of comprehensively processing sequential input; it is a recurrent neural network model, and may specifically be an LSTM (Long Short-Term Memory) neural network. Specifically, the computer device may input the obtained feature maps into the memory neural network model in sequence to perform face feature detection.
S408, obtaining from the memory neural network model a result indicating whether the image frame includes a face image.
Specifically, the computer device may obtain the face detection result produced by the memory neural network model's comprehensive processing of the input feature maps. The face detection result includes the probability that a face image exists and the coordinate region of the face image in the image frame.
In one embodiment, after the face detection results are obtained, the computer device may further filter out face detection results whose overlapping area exceeds a preset overlap threshold, according to the coordinate regions of the face images in the image frame included in the detection results, and obtain the coordinate region of the face image in the image frame from the detection results retained after filtering.
In one embodiment, the memory neural network model may move a rectangular window across the input feature map in a preset direction and with a preset step size to perform window scanning, extract face feature data from each scanned window image, and obtain from the extracted face feature data the probability that the scanned window image contains a face image. The coordinate regions, within the image frame, of the window images whose computed probabilities rank highest are stored, and processing continues on the subsequently input feature maps.
FIG. 5 shows a schematic diagram of face recognition of a face image in one embodiment. Referring to FIG. 5, the memory neural network model adopted by the computer device scans and analyzes an input feature map with a rectangular window, obtaining the probability P_A that a face image exists in rectangular window A, the probability P_B that a face image exists in rectangular window B, and the probability P_C that a face image exists in rectangular window C. Here P_C > P_A > P_B, so the memory neural network model may record P_C and the corresponding rectangular window C, continue scanning and analyzing the subsequently input feature maps with the rectangular window, obtain through multiple rounds of comprehensive analysis the rectangular window and the corresponding probability that a face image exists, and output the probability that a face image exists in the image frame acquired by the computer device together with the coordinate region of the face image in the image frame.
In this embodiment, image features are fully extracted through a plurality of network layers included in the convolutional neural network model, and then the features extracted by the plurality of network layers are input into the memory neural network model for comprehensive processing, so that face detection is more accurate.
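To make the data flow concrete, the following PyTorch sketch wires feature maps taken from several stages of a residual network into an LSTM head. It is an illustrative architecture only: the patent's 52-layer network is stood in for by resnet18, and the pooling, projection widths and 5-value output head are all assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CnnLstmFaceDetector(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in for the 52-layer residual net
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # project each stage's feature map to a common width -> one sequence step each
        self.proj = nn.ModuleList([nn.Linear(c, hidden) for c in (64, 128, 256, 512)])
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 5)  # [face-probability logit, x, y, w, h]

    def forward(self, x):
        x = self.stem(x)
        steps = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            steps.append(proj(self.pool(x).flatten(1)))  # one step per network layer
        out, _ = self.lstm(torch.stack(steps, dim=1))    # sequence over the feature maps
        logits = self.head(out[:, -1])                   # last step summarizes all maps
        return torch.sigmoid(logits[:, 0]), logits[:, 1:]

prob, box = CnnLstmFaceDetector()(torch.randn(1, 3, 224, 224))
```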
In one embodiment, after step S304, the motion control method further includes a face recognition step, which specifically includes: extracting face feature data from the face image; querying for a preset face image matching the face image according to the face feature data; obtaining a target identity recognition result from the matched preset face image; and determining the service type associated with the target identity. After step S310, the motion control method further includes: providing a service trigger entry corresponding to the service type.
Wherein the target identity recognition result is data for reflecting the target identity. The identity of the target may be the name, social status, or job information of the target, etc.
In one embodiment, a preset face image library is provided on the computer device, containing a plurality of preset face images. When the computer device detects that an image frame includes a face image, it may compare the face image in the image frame with the preset face images included in the library to detect whether they match. When the face image in the image frame matches a preset face image, the computer device may judge that the face image included in the image frame and the preset face image depict the same person, and acquire the target identity information corresponding to that preset face image as the target identity recognition result.
A preset face image may be a real face image reflecting the corresponding target. It may also be an image custom-selected by the corresponding target, or a picture automatically selected by the system, through analysis, from the personal data uploaded by the target and the pictures the target has published historically.
In one embodiment, when detecting whether the face image in the image frame matches a preset face image, the computer device may calculate the similarity between the two. The computer device may first extract the features of the face image in the image frame and of the preset face image, and then calculate the difference between the two sets of features: the larger the difference between the features, the lower the similarity; the smaller the difference, the higher the similarity. When calculating this similarity, the computer device may adopt an acceleration algorithm suited to a graphics processor to improve computation speed.
In one embodiment, the computer device may extract face feature data from the image data after determining that the image frame includes a face image, and then compare the extracted face feature data with face feature data corresponding to each preset face image in the preset face image library to obtain the target identification result.
In one embodiment, the image frame detected by the computer device may include one or more face images. The computer device may determine the proportion of the image frame occupied by each included face image and extract face feature data only from face images whose proportion exceeds a preset ratio; and/or determine the sharpness of each included face image and extract face feature data only from face images whose sharpness exceeds a sharpness threshold. The computer device then recognizes the face images from which face feature data was extracted.
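A sketch of this filtering and matching, assuming variance-of-Laplacian as the sharpness measure and cosine similarity over face feature vectors; the thresholds, the box format and the feature-extraction step are illustrative, not from the patent.

```python
import cv2
import numpy as np

def select_faces(frame, boxes, min_ratio=0.05, min_sharpness=100.0):
    # boxes: (x, y, w, h) face regions; keep large-enough, sharp-enough faces
    H, W = frame.shape[:2]
    kept = []
    for (x, y, w, h) in boxes:
        if (w * h) / float(W * H) < min_ratio:
            continue  # face-to-frame area ratio below the preset proportion
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(crop, cv2.CV_64F).var() < min_sharpness:
            continue  # variance of Laplacian as a stand-in sharpness measure
        kept.append((x, y, w, h))
    return kept

def identify(face_vec, library, threshold=0.6):
    # library: hypothetical dict identity -> preset face feature vector
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(library, key=lambda k: cos(face_vec, library[k]))
    return best if cos(face_vec, library[best]) > threshold else None
```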
Further, after the computer device identifies the target identity, the service type associated with the target identity may be found. Wherein the service type is a type to which a service provided to the target belongs. Service types such as restaurant ordering service or hotel reception service, etc. The service type may be a uniformly set type, a type related to the target identity, or a type related to the target attribute.
In one embodiment, the computer device may set the service type in advance, associate the service type with the target identifier, and store the set service type in a database or file, and read from the database or file as needed. After the computer equipment identifies the target identity identification result, the service type associated with the target identifier corresponding to the target identity identification result can be pulled.
Still further, after determining the service type associated with the target identity recognition result and moving to the target, the computer device may provide the target with a service trigger entry corresponding to the determined service type. In particular, the computer device may provide a service trigger entry through the display screen, or conduct a voice service session with the target through the speaker and sound collection device.
In one embodiment, after moving to the target node, the computer device may collect the image frame to determine the current location, and provide a service trigger entry for the target, receive the input service parameters through the display or the sound collector, and thereby determine the object of the current service, the location of the current service, and the content of the current service.
In the above embodiment, when a face is detected in an acquired image, the face is recognized; after the target's identity is obtained by recognition and the device has moved to the target, the service entry associated with the target can be provided to the target, greatly improving the efficiency of service provision.
In the above embodiment, both the face recognition step and the determination of the service type associated with the target identity recognition result, described here as processed by the computer device, may instead be processed by a server. The computer device may send the acquired image frame to the server, and the server, after performing face detection and face recognition on the image frame and determining the service type associated with the target identity recognition result, sends the target identity recognition result and the associated service type to the computer device.
In one embodiment, before step S402, the motion control method further includes a step of constructing a map, where the step specifically includes:
s602, selecting an image frame from the image frames acquired in time sequence.
The selected image frame may be a key frame in the acquired image frames.
In one embodiment, the computer device may receive a user selection instruction, and select an image frame from the acquired image frames based on the user selection instruction.
In one embodiment, the computer device may select image frames from the acquired image frames at a preset frame interval. For example, an image frame is selected every 20 frames.
S604, judging whether the features of the selected image frame conform to preset node-image features.
Specifically, the preset node-image features are preset criteria for selecting node images. Conforming to the preset node-image features may mean that the number of feature points in the image that match feature points included in an existing node image exceeds a preset number, or that the proportion of the existing node image's feature points that are matched is lower than a preset proportion.
For example, assume the most recently added node image includes 100 feature points and the currently selected image frame includes 120 feature points, with a preset number of 50 and a preset proportion of 90%. If 70 of the current frame's feature points match feature points included in the most recently added node image, then the number of matching feature points exceeds the preset number, and it may be judged that the features of the currently selected image frame conform to the preset node-image features.
S606, when the features of the selected image frame conform to the preset node-image features, taking the selected image frame as a node image.
In one embodiment, after the computer device acquires the instruction for constructing the map, the computer device may acquire image frames according to a fixed or dynamic frame rate, select, as an initial node image, an image frame having a number of feature points greater than a preset number threshold, determine a corresponding node of the node image in the map, and a corresponding position of the feature point included in the node image in the map, and construct the local map. The computer equipment selects an image frame from the image frames acquired according to the time sequence, and takes the image frame which is selected to be in accordance with the characteristics of the preset node image as a subsequent node image until a global map is obtained.
Specifically, the computer device may track feature points in the reference node image with the initial node image as the reference node image. And when the number of the matching of the characteristic points included in the selected image frame and the characteristic points included in the reference node image is lower than the first preset number and higher than the second preset number, taking the selected image frame as the node image. When the number of the matching of the feature points included in the selected image frame and the feature points included in the reference node image is lower than a second preset number, the most recently acquired node image is taken as the reference node image, and image tracking is continued to be carried out so as to select the node image.
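The keyframe logic of S604-S606 can be sketched as follows; count_matches is any feature matcher (for instance the ORB-based match_score sketched earlier), and the two thresholds stand in for the patent's first and second preset numbers.

```python
def select_node_images(frames, count_matches, first_num=100, second_num=30):
    node_images = [frames[0]]  # initial node image, also the first reference
    reference = frames[0]
    for frame in frames[1:]:
        n = count_matches(frame, reference)
        if n <= second_num:
            reference = node_images[-1]  # tracking weak: re-track from the
                                         # most recently acquired node image
        elif n < first_num:
            node_images.append(frame)    # enough novelty: take as node image
    return node_images
```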
S608, determining the node corresponding to the acquired node image in the map.
Specifically, the computer device may determine the node onto which the natural-space position where the node image was acquired projects in map space. The computer device can extract the features of the temporally preceding node image, calculate the change matrix between the features of that preceding node image and the acquired node image, obtain from the change matrix the change from the position at which the preceding node image was acquired to the position at which the new node image was acquired, and then determine the node corresponding to the acquired node image in the map from that change.
In one embodiment, step S608 includes: extracting the characteristics of the obtained node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired features and the extracted features; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
Here the change matrix expresses a similarity-transform relationship between the features of one two-dimensional image and the features of another. Specifically, the computer device may extract the features of the acquired node image, match them against the features of the node image corresponding to an existing node in the map, and obtain the positions of the successfully matched features in the acquired node image and in the existing node image, respectively. The acquired node image is the later-acquired image frame, and the existing node image is the earlier-acquired image frame. From the positions of the matched features on the two successively acquired image frames, the computer device can determine the change matrix between them, obtaining the position change and attitude change of the computer device between the two acquisitions, and can then derive the position and attitude at which the later image was acquired from the position and attitude at which the earlier image was acquired.
In one embodiment, the node image corresponding to the existing node in the map may be one or more frames. The computer equipment can also compare the characteristics of the acquired node images with the characteristics of the node images corresponding to a plurality of existing nodes to obtain a change matrix of the acquired image frames and a plurality of image frames acquired before, and then comprehensively obtain the position and the posture of the acquired image after the acquisition according to the change matrixes. For example, the calculated plurality of position changes and posture changes are weighted and averaged.
In this embodiment, the transformation relationship between the currently acquired node image and the previous node image is obtained through the change matrix between the features of the node image, so that the position of the current image frame in the map is estimated from the position of the previous image frame in the map, and real-time positioning is realized.
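One standard way to realize the change matrix between two node images is the essential-matrix decomposition below; the choice of OpenCV's findEssentialMat/recoverPose and the chaining convention are assumptions about the implementation, not the patent's prescription.

```python
import cv2
import numpy as np

def change_between_frames(pts_prev, pts_curr, K):
    # pts_prev, pts_curr: Nx2 arrays of matched feature points; K: camera intrinsics
    E, mask = cv2.findEssentialMat(pts_prev, pts_curr, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=mask)
    return R, t  # later pose relative to the earlier one (t up to scale
                 # for a monocular camera)

# Chaining: if the earlier node's pose is (R0, t0), the new node's pose is
# R_new = R @ R0 and t_new = R @ t0 + t, which places the new node in the map.
```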
S610, storing the features of the acquired node image corresponding to the determined node.
Specifically, the computer device can extract the features of a node image and store those features in correspondence with the node corresponding to that node image, so that when image feature comparison is required, the features of the corresponding node image can be looked up directly by node, saving storage space and improving lookup efficiency.
In this embodiment, by collecting image frames itself and then processing the collected image frames, the device can construct the map automatically. This avoids the need for large amounts of manual surveying and mapping of the environment by staff with professional mapping skills, work that demands much of staff ability and labor, and thus improves the efficiency of map construction.
In one embodiment, after step S608, the motion control method further includes: calculating the similarity between the characteristics of the node images corresponding to the existing nodes in the map and the acquired characteristics of the node images; when the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the acquired node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the corresponding nodes of the acquired node images.
Specifically, when the computer device acquires the node image, the feature of the newly added node image may be compared with the feature of the node image corresponding to the existing node in the map, and the similarity between the feature of the newly added node image and the feature of the node image corresponding to the existing node in the map may be calculated. When the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the newly added node images exceeds a preset similarity threshold, the computer equipment can judge that the acquisition position of the newly added node images in the natural space is consistent with the acquisition position of the node images corresponding to the existing nodes in the natural space.
The computer device may generate, from the node corresponding to the acquired node image, a loop path in the map that runs from the existing node, through the nodes added after it, and back to the existing node. Starting from the existing node, the computer device can sequentially acquire the features of the node images corresponding to the nodes included in the loop path; sequentially determine the change matrices between the features of the node images corresponding to adjacent nodes; and adjust the features of the node images corresponding to the nodes included in the loop path in reverse order according to the sequentially determined change matrices.
For example, the computer device builds a local map by sequentially adding node images from the first frame of node images. When the similarity between the characteristic of the current fourth frame node image and the characteristic of the first frame node image exceeds a preset similarity threshold value, judging that the acquisition position of the fourth frame node image in the natural space is consistent with the acquisition position of the first frame node image in the natural space, and generating a circular path of the first frame node image, the second frame node image, the third frame node image and the first frame node image.
Suppose the change matrix between the features of the first frame node image and the features of the second frame node image is H1, that between the second and third frame node images is H2, and that between the third and fourth frame node images is H3. Since the fourth frame coincides with the first, the computer device may transform the features of the first frame node image according to H3 and optimize the third frame node image according to the resulting features, and then transform the optimized third frame node image according to H2 and optimize the second frame node image according to the resulting features.
In this embodiment, the similarity between the features of the newly added node image and the features of the existing node image is used as a basis to perform closed-loop detection, and when a closed loop is detected, a loop path is generated in the map to perform subsequent closed-loop optimization, so that the accuracy of constructing the map is improved.
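A minimal sketch of the closed-loop check, assuming each node image is summarized by a one-dimensional feature vector compared by cosine similarity; the 0.9 threshold is an assumed stand-in for the preset similarity threshold.

```python
import numpy as np

def detect_loop(new_vec, node_vecs, threshold=0.9):
    # node_vecs: feature vectors of the node images of existing nodes, in order
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(new_vec, v) for v in node_vecs]
    best = int(np.argmax(sims))
    return best if sims[best] > threshold else None

# If detect_loop returns existing node i for new node j, the map gains the loop
# path i -> i+1 -> ... -> j -> i, which is then used for closed-loop optimization.
```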
FIG. 7 illustrates a flow diagram of a map creation process in one embodiment. Referring to FIG. 7, the map creation process includes three parts: tracking, mapping, and closed-loop detection. After the computer device obtains an instruction to construct a map, it may acquire image frames at a fixed or dynamic frame rate. After an image frame is acquired, its feature points are extracted and matched against the feature points of the node image corresponding to the most recently added node in the map. When the extracted feature points fail to match the feature points of that node image, the computer device can acquire a new image frame and attempt relocalization.
When the extracted feature points successfully match the feature points of the node image corresponding to the most recently added node in the map, the node in the map corresponding to the acquired image frame is estimated from that node. The computer device may then track the feature points in the map that match the acquired image, and optimize the node in the map corresponding to the image frame according to the matched features. After this optimization, it judges whether the feature points of the image frame conform to the preset node-image criteria; if not, the computer device acquires a new image frame for feature point matching.
If the feature points of the image frame conform to the preset node-image criteria, the computer device can take the acquired image frame as a newly added node image. The computer device can extract the feature points of the newly added node image, represent them in a preset unified format, and then determine the positions of these feature points in the map by a triangulation algorithm, thereby updating the local map, performing local bundle adjustment, and removing redundant nodes corresponding to node images whose similarity exceeds a preset similarity threshold.
After taking the acquired image frame as a newly added node image, the computer device can asynchronously perform closed-loop detection, comparing the features of the newly added node image with those of the node images corresponding to existing nodes. When the similarity between them exceeds a preset similarity threshold, the computer device can judge that the acquisition position of the newly added node image in natural space coincides with that of the existing node's image, i.e., a closed loop exists. The computer device can generate, in the map, a loop path including the nodes with coinciding positions according to the node corresponding to the newly added node image, and perform closed-loop optimization and closed-loop fusion. Finally, a global map comprising feature points, nodes and paths is obtained.
FIG. 8 illustrates a schematic diagram of a completed map in one embodiment. Referring to FIG. 8, the map is a feature distribution diagram built from sparse features. The schematic includes feature points 801, nodes 802, and paths 803 formed between the nodes. A feature point 801 is the projection in map space of a feature point of an object in natural space. A node 802 is the projection in map space of the natural-space position at which the computer device acquired an image frame. A path 803 formed between nodes is the projection in map space of a path along which the computer device moved in natural space.
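A map of this kind can be held in a simple graph structure; the sketch below uses illustrative field names that are not taken from the patent.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Node:
    node_id: int
    pose: np.ndarray            # projection in map space of the acquisition position
    features: np.ndarray        # features of the corresponding node image
    neighbors: list = field(default_factory=list)  # ids of nodes joined by a path

@dataclass
class SparseMap:
    feature_points: dict = field(default_factory=dict)  # point id -> 3D map position
    nodes: dict = field(default_factory=dict)           # node id -> Node

    def add_path(self, a: int, b: int) -> None:
        """Record a path edge (803) between two nodes, in both directions."""
        self.nodes[a].neighbors.append(b)
        self.nodes[b].neighbors.append(a)
```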
In one embodiment, step S306 includes: extracting features of the image frames; acquiring characteristics of node images corresponding to nodes included in the map; determining a similarity between features of the image frame and features of the node image; and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
Specifically, when the computer device compares the features of the node images corresponding to existing nodes in the map with the features of the acquired image frame, it may calculate the difference between the two sets of image features: the larger the difference, the lower the similarity, and the smaller the difference, the higher the similarity. The similarity may be a cosine similarity, or the Hamming distance between the perceptual hash values of the two images. After calculating the similarity between the features of each existing node's image and the features of the acquired image frame, the computer device selects the node whose node image features have the highest similarity, obtaining the start node matched with the image frame.
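Both similarity options mentioned above are straightforward to sketch; the helpers below (illustrative names) compute a cosine similarity over feature vectors and a Hamming distance between perceptual hash values, and pick the best-matching node as the start node.

```python
import numpy as np

def pick_start_node(frame_feat, node_feats):
    """Return the id of the node whose image features are most similar to
    the current image frame's features. node_feats: node id -> vector."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(node_feats, key=lambda nid: cosine(frame_feat, node_feats[nid]))

def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Hamming distance between two perceptual hash values; a smaller
    distance means a higher similarity between the two images."""
    return bin(hash_a ^ hash_b).count("1")
```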
In this embodiment, the current position in the map is located by matching the features of the node images corresponding to the nodes included in the map against the current image frame, so that the self-localization result is more accurate.
In one embodiment, prior to step S310, the motion control method further includes: extracting features of the image frames; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
Here the spatial state difference amount is the change in the spatial state of the computer device between the acquisition of different image frames. It includes a spatial position difference amount and a spatial angle difference amount. The spatial position difference amount is the movement of the computer device in physical position; for example, the computer device translates horizontally forward by 0.5 m between acquiring the first image frame and acquiring the second image frame. The spatial angle difference amount is the rotation of the computer device in physical orientation; for example, the computer device rotates 15 degrees counterclockwise between acquiring the first image frame and acquiring the second image frame.
Specifically, the computer device may calculate a change matrix between the features of the image frame and the features of the node image corresponding to the start node, and recover its motion from that matrix: the change matrix is decomposed into a rotation matrix and a displacement matrix, the spatial angle difference amount between the features of the image frame and the features of the node image is obtained from the rotation matrix, and the spatial position difference amount is obtained from the displacement matrix. The computer device may then determine the direction of the current movement from the spatial angle difference amount and the distance of the current movement from the spatial position difference amount, and move the determined distance in the determined direction.
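With OpenCV, the change-matrix decomposition can be sketched via the essential matrix; this is a minimal illustration under the assumption that matched feature point arrays and the camera intrinsics `K` are given. Note that with a monocular camera the recovered displacement is only determined up to scale, so the movement distance would need depth or another cue in practice.

```python
import cv2
import numpy as np

def spatial_state_difference(pts_frame, pts_node, K):
    """Recover the rotation (spatial angle difference) and the translation
    direction (spatial position difference, up to scale) between the
    current image frame and the node image from matched feature points."""
    E, mask = cv2.findEssentialMat(pts_frame, pts_node, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_frame, pts_node, K, mask=mask)
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # in-plane heading change
    return yaw, t.ravel()                           # angle (deg), unit direction
```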
In this embodiment, the computer device moves to the start node in the map according to the spatial state difference amount between the currently acquired image frame and the node image corresponding to the determined start node, so that it can then move along the selected trending target motion path, ensuring the accuracy of the movement.
In one embodiment, step S310 includes: sequentially acquiring the features of the node images corresponding to the nodes included in the trending target motion path; sequentially determining the spatial state difference amounts between the features of the node images corresponding to adjacent nodes; and moving according to the sequentially determined spatial state difference amounts.
Specifically, the computer device may acquire the features of the node image corresponding to the second node, adjacent to the start node, included in the trending target motion path, and calculate a change matrix between the features of the node image corresponding to the start node and the features of the node image corresponding to the second node. The computer device then decomposes the change matrix into a rotation matrix and a displacement matrix, obtains the spatial angle difference amount from the rotation matrix, and obtains the spatial position difference amount from the displacement matrix. The computer device may then determine the direction of the current movement from the spatial angle difference amount and the distance of the current movement from the spatial position difference amount, and move the determined distance in the determined direction to the second node in the map. The computer device may then determine the distance and direction of each subsequent movement in the same manner, moving node by node from the second node along the trending target motion path until the target node is reached.
In this embodiment, the computer device moves step by step from the start node to the target node on the map according to the spatial state difference amounts between the features of the node images corresponding to adjacent nodes on the trending target motion path, avoiding the problem that the current position cannot be determined due to drift during motion, and ensuring the accuracy of the motion.
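Putting the per-edge computation into a node-by-node traversal gives a control loop of roughly the following shape; `rotate`, `advance`, and `relative_pose` are hypothetical helpers standing in for the robot's motion primitives and the pose computation sketched above.

```python
def follow_path(path, node_images, robot, relative_pose):
    """Move along the trending target motion path node by node.
    relative_pose(img_a, img_b) -> (angle_deg, distance) between two
    node images, e.g. from the change-matrix decomposition above."""
    for prev, nxt in zip(path, path[1:]):  # successive adjacent nodes
        angle, distance = relative_pose(node_images[prev], node_images[nxt])
        robot.rotate(angle)      # turn by the spatial angle difference amount
        robot.advance(distance)  # move by the spatial position difference amount
```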
FIG. 9 illustrates a schematic diagram of selecting a trending target motion path in a map in one embodiment. Referring to FIG. 9, the schematic includes a target node 901, a start node 902, and a trending target motion path 903. After determining the position of the target node 901 (i.e., the target) and the position of the start node 902 (i.e., the device itself), the computer device selects the trending target motion path 903 in the map, taking the start node 902 as the starting point and the target node 901 as the end point.
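Since the nodes and paths of the map form a graph, selecting the trending target motion path reduces to an ordinary shortest-path search. The patent does not specify the search algorithm; a breadth-first sketch over an illustrative adjacency mapping:

```python
from collections import deque

def trending_target_path(adjacency, start, target):
    """Shortest node sequence from start to target over the map's path
    edges. adjacency: node id -> list of neighboring node ids."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:   # backtrack from target to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in adjacency[node]:
            if nb not in parent:      # visit each node once
                parent[nb] = node
                queue.append(nb)
    return None  # target not reachable from start
```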
As shown in fig. 10, in a specific embodiment, the motion control method includes the steps of:
s1002, selecting an image frame from the image frames acquired in time sequence.
S1004, judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not; if yes, go to step S1006; if not, the process returns to step S1002.
S1006, acquiring the selected image frame as a node image.
S1008, extracting the characteristics of the acquired node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired features and the extracted features; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix, and storing the characteristics of the acquired node images corresponding to the determined nodes.
S1010, calculating the similarity between the characteristics of the node images corresponding to the existing nodes in the map and the characteristics of the acquired node images; when the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the acquired node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the corresponding nodes of the acquired node images.
S1012, acquiring an image frame.
S1014, inputting the image frame into a convolutional neural network model; acquiring the feature maps output by a plurality of network layers included in the convolutional neural network model; and sequentially inputting the feature maps into a memory neural network model to obtain a face detection result output by the memory neural network model.
S1016, judging whether the face detection result indicates that the image frame comprises a face image; if yes, go to step S1018; if not, return to step S1012.
S1018, extracting face feature data of the face image; inquiring a preset face image matched with the face image according to the face characteristic data; obtaining a target identity recognition result according to a preset face image; a service type associated with the target identification is determined.
S1020, determining a corresponding target node of the face image in the map.
S1022, extracting the characteristics of the image frame; acquiring characteristics of node images corresponding to nodes included in the map; determining a similarity between features of the image frame and features of the node image; and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
S1024, selecting a trend target motion path from paths included in the map according to the initial node and the target node.
S1026, extracting the characteristics of the image frames; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
S1028, sequentially acquiring characteristics of node images corresponding to all nodes included in the trend target motion path; sequentially determining the space state difference quantity between the acquired characteristics of the node images corresponding to the adjacent nodes; and performing movement according to the space state difference quantities determined in sequence.
S1030, providing a service trigger entry corresponding to the service type.
In this embodiment, after an image frame is acquired and it is detected that the image frame includes a face image, a target node corresponding to the face image is determined in the map, locating the position of the target in the map. Then, based on the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, a start node matched with the image frame can be selected from the map, locating the current position of the device in the map. A path trending toward the target can then be selected from the paths included in the map according to the current node and the target node. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference suffered by sensing-signal-based positioning, and improving the accuracy of motion control.
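Tying the steps together, one pass of the S1012 to S1030 flow might look like the sketch below. Every name here is illustrative; the building blocks are assumed to be the helpers sketched in the earlier embodiments, and `node_nearest_to` stands in for whatever projection is used in step S1020.

```python
def motion_control_step(robot, map_, detect_face, identify, pick_start,
                        plan_path, follow):
    """One illustrative pass of the S1012-S1030 flow; the building blocks
    (face detector, identifier, start-node picker, path planner, follower)
    are passed in as callables, e.g. the helpers sketched earlier."""
    frame, features = robot.capture_frame()   # S1012: acquire an image frame
    face = detect_face(frame)                 # S1014-S1016: CNN + memory network
    if face is None:
        return                                # no face: keep acquiring frames
    service_type = identify(face)             # S1018: identity -> service type
    target = map_.node_nearest_to(face)       # S1020 (hypothetical lookup)
    start = pick_start(features, map_)        # S1022: locate self in the map
    path = plan_path(map_, start, target)     # S1024: trending target path
    follow(path, map_, robot)                 # S1026-S1028: move node by node
    robot.show_service_entry(service_type)    # S1030: provide service entry
```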
As shown in fig. 11, in one embodiment, there is provided a motion control apparatus 1100 comprising: an acquisition module 1101, a determination module 1102, a picking module 1103, a selecting module 1104, and a motion module 1105.
The acquisition module 1101 is configured to acquire an image frame.
The determination module 1102 is configured to determine the corresponding target node of the face image in the map when face detection on the image frame yields that the image frame includes a face image.
The picking module 1103 is configured to select a start node matched with the image frame from the map, wherein the features of the image frame match the features of the node image corresponding to the start node.
The selecting module 1104 is configured to select a path tending to the target from paths included in the map according to the start node and the target node.
The motion module 1105 is configured to move along the selected trending target motion path.
After acquiring an image frame, the motion control apparatus 1100 can automatically determine the target node corresponding to a face image in the map when the image frame includes the face image, locating the position of the target in the map. It then selects a start node matched with the image frame from the map based on the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, locating its own current position in the map, and selects a path trending toward the target from the paths included in the map according to the current node and the target node. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference suffered by sensing-signal-based positioning, and improving the accuracy of motion control.
As shown in fig. 12, in one embodiment, the motion control apparatus 1100 further includes: a detection module 1106.
The detection module 1106 is configured to input the image frame into a convolutional neural network model; acquire the feature maps output by a plurality of network layers included in the convolutional neural network model; sequentially input the feature maps into a memory neural network model; and obtain a result, output by the memory neural network model, of whether the image frame comprises a face image.
In this embodiment, image features are fully extracted through a plurality of network layers included in the convolutional neural network model, and then the features extracted by the plurality of network layers are input into the memory neural network model for comprehensive processing, so that face detection is more accurate.
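The patent does not give the network architecture, so the PyTorch sketch below is only one plausible reading: each convolutional stage's feature map is pooled to a vector, and the sequence of stage vectors is fed, in order, into an LSTM (a memory neural network), whose final state yields the face/no-face result. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMFaceDetector(nn.Module):
    """Hypothetical CNN + memory-network face detector (illustrative sizes)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU())
            for c_in, c_out in [(3, 16), (16, 32), (32, 64)]
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)   # collapse each feature map to a vector
        self.proj = nn.ModuleList([nn.Linear(c, hidden) for c in (16, 32, 64)])
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # P(frame contains a face)

    def forward(self, x):
        seq = []
        for block, proj in zip(self.blocks, self.proj):
            x = block(x)                                  # feature map of this layer
            seq.append(proj(self.pool(x).flatten(1)))     # pooled, projected vector
        out, _ = self.lstm(torch.stack(seq, dim=1))       # one LSTM step per layer
        return torch.sigmoid(self.head(out[:, -1]))      # detection result
```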
As shown in fig. 13, in one embodiment, the motion control apparatus 1100 further includes: an identification module 1107 and a service module 1108.
A recognition module 1107, configured to extract face feature data of a face image; inquiring a preset face image matched with the face image according to the face characteristic data; obtaining a target identity recognition result according to a preset face image; a service type associated with the target identification is determined.
Service module 1108 is configured to provide a service trigger entry corresponding to a service type.
In this embodiment, when a face is detected in the acquired image, face recognition is performed; after the identity of the target is recognized and the apparatus has moved to the target, a service entry associated with the target can be provided, greatly improving the efficiency of providing services.
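A minimal sketch of this lookup, assuming faces are compared as embedding vectors (the distance metric, threshold, and table layout are illustrative assumptions, not the patent's method):

```python
import numpy as np

def identify_and_get_service(face_feat, preset_faces, services, threshold=0.6):
    """Match extracted face feature data against preset face images and
    look up the associated service type. preset_faces: identity -> feature
    vector; services: identity -> service type."""
    best_id, best_dist = None, threshold
    for person_id, feat in preset_faces.items():
        dist = float(np.linalg.norm(face_feat - feat))  # embedding distance
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return services.get(best_id) if best_id is not None else None
```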
As shown in fig. 14, in one embodiment, the motion control apparatus 1100 further includes: the map building block 1109.
A map construction module 1109 for selecting an image frame from among image frames collected in time series; judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not; when the characteristics of the selected image frames accord with the characteristics of the node images, acquiring the selected image frames as the node images; determining corresponding nodes of the acquired node images in the map; the features of the acquired node images are stored corresponding to the determined nodes.
In this embodiment, the map can be constructed automatically by collecting image frames and then processing them, avoiding the need for a large number of staff with professional surveying and mapping skills to survey the environment manually. This removes the high demands on staff capability and the heavy labor involved, and improves the efficiency of map construction.
In one embodiment, the map construction module 1109 is further configured to extract features of the acquired node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired features and the extracted features; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
In this embodiment, the transformation relationship between the currently acquired node image and the previous node image is obtained through the change matrix between the features of the node image, so that the position of the current image frame in the map is estimated from the position of the previous image frame in the map, and real-time positioning is realized.
In one embodiment, the map construction module 1109 is further configured to calculate a similarity between a feature of a node image corresponding to an existing node in the map and the feature of the acquired node image; when the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the acquired node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the corresponding nodes of the acquired node images.
In this embodiment, the similarity between the features of the newly added node image and the features of the existing node image is used as a basis to perform closed-loop detection, and when a closed loop is detected, a loop path is generated in the map to perform subsequent closed-loop optimization, so that the accuracy of constructing the map is improved.
In one embodiment, the picking module 1103 is further configured to extract features of the image frames; acquiring characteristics of node images corresponding to nodes included in the map; determining a similarity between features of the image frame and features of the node image; and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
In one embodiment, the motion module 1105 is also configured to extract features of the image frame; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
In this embodiment, the current position in the map is located by matching the characteristics of the node image corresponding to the node included in the map with the current image frame, so that the self-locating result is more accurate.
In one embodiment, the motion module 1105 is further configured to sequentially acquire characteristics of node images corresponding to each node included in the trending target motion path; sequentially determining the space state difference quantity between the acquired characteristics of the node images corresponding to the adjacent nodes; and performing movement according to the space state difference quantities determined in sequence.
In this embodiment, the space state difference between the currently acquired image frame and the node image corresponding to the determined initial node is used to move to the initial node in the map, so that the object moves according to the selected object moving path, and the accuracy of the movement is ensured.
A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, perform the steps of: acquiring an image frame; when face detection on the image frame yields that the image frame includes a face image, determining a corresponding target node of the face image in a map; selecting a start node matched with the image frame from the map, the features of the image frame matching the features of the node image corresponding to the start node; selecting a trending target motion path from paths included in the map according to the start node and the target node; and moving according to the selected trending target motion path.
When the computer readable instructions stored on the computer readable storage medium are executed, after an image frame is acquired, the corresponding target node of a face image can be automatically determined in the map when the image frame is detected to include the face image, locating the position of the target in the map. A start node matched with the image frame can then be selected from the map according to the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, locating the current position of the device in the map, and a trending target motion path can be selected from the paths included in the map according to the current node and the target node. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference suffered by sensing-signal-based positioning, and improving the accuracy of motion control.
In one embodiment, the computer readable instructions cause the processor, after executing the acquiring of the image frame, to further perform the steps of: inputting the image frame into a convolutional neural network model; acquiring the feature maps output by a plurality of network layers included in the convolutional neural network model; sequentially inputting the feature maps into a memory neural network model; and obtaining a result, output by the memory neural network model, of whether the image frame comprises a face image.
In one embodiment, the computer readable instructions cause the processor to, after performing face detection on the image frame to determine that the image frame includes a face image, further perform the steps of: extracting face characteristic data of a face image; inquiring a preset face image matched with the face image according to the face characteristic data; obtaining a target identity recognition result according to a preset face image; a service type associated with the target identification is determined. The computer readable instructions cause the processor, after executing the movement according to the selected trending target movement path, to further perform the steps of: a service trigger entry corresponding to the service type is provided.
In one embodiment, the computer readable instructions cause the processor to, prior to executing the acquiring the image frame, further perform the steps of: selecting an image frame from the image frames acquired according to time sequence; judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not; when the characteristics of the selected image frames accord with the characteristics of the node images, acquiring the selected image frames as the node images; determining corresponding nodes of the acquired node images in the map; the features of the acquired node images are stored corresponding to the determined nodes.
In one embodiment, determining a corresponding node of the acquired node image in the map includes: extracting the characteristics of the obtained node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired features and the extracted features; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
In one embodiment, the computer readable instructions cause the processor to, after executing determining the corresponding node of the acquired node image in the map, further perform the steps of: calculating the similarity between the characteristics of the node images corresponding to the existing nodes in the map and the acquired characteristics of the node images; when the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the acquired node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the corresponding nodes of the acquired node images.
In one embodiment, selecting a starting node from the map that matches the image frame includes: extracting features of the image frames; acquiring characteristics of node images corresponding to nodes included in the map; determining a similarity between features of the image frame and features of the node image; and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
In one embodiment, the computer readable instructions cause the processor to, prior to executing the movement according to the selected trending target movement path, further perform the steps of: extracting features of the image frames; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
In one embodiment, moving according to a selected trending target path of movement includes: sequentially acquiring characteristics of node images corresponding to all nodes included in a trend target motion path; sequentially determining the space state difference quantity between the acquired characteristics of the node images corresponding to the adjacent nodes; and performing movement according to the space state difference quantities determined in sequence.
A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform the steps of: acquiring an image frame; when face detection on the image frame yields that the image frame includes a face image, determining a corresponding target node of the face image in a map; selecting a start node matched with the image frame from the map, the features of the image frame matching the features of the node image corresponding to the start node; selecting a trending target motion path from paths included in the map according to the start node and the target node; and moving according to the selected trending target motion path.
After acquiring an image frame, the computer device can automatically determine the corresponding target node of a face image in the map when the face image is detected in the image frame, locating the target in the map. It then selects the start node matched with the image frame from the map according to the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, locating its own current position in the map, and selects a trending target motion path from the paths included in the map according to the current node and the target node. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference suffered by sensing-signal-based positioning, and improving the accuracy of motion control.
In one embodiment, the computer readable instructions cause the processor, after executing the acquiring of the image frame, to further perform the steps of: inputting the image frame into a convolutional neural network model; acquiring the feature maps output by a plurality of network layers included in the convolutional neural network model; sequentially inputting the feature maps into a memory neural network model; and obtaining a result, output by the memory neural network model, of whether the image frame comprises a face image.
In one embodiment, the computer readable instructions cause the processor to, after performing face detection on the image frame to determine that the image frame includes a face image, further perform the steps of: extracting face characteristic data of a face image; inquiring a preset face image matched with the face image according to the face characteristic data; obtaining a target identity recognition result according to a preset face image; a service type associated with the target identification is determined. The computer readable instructions cause the processor, after executing the movement according to the selected trending target movement path, to further perform the steps of: a service trigger entry corresponding to the service type is provided.
In one embodiment, the computer readable instructions cause the processor to, prior to executing the acquiring the image frame, further perform the steps of: selecting an image frame from the image frames acquired according to time sequence; judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not; when the characteristics of the selected image frames accord with the characteristics of the node images, acquiring the selected image frames as the node images; determining corresponding nodes of the acquired node images in the map; the features of the acquired node images are stored corresponding to the determined nodes.
In one embodiment, determining a corresponding node of the acquired node image in the map includes: extracting the characteristics of the obtained node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired features and the extracted features; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
In one embodiment, the computer readable instructions cause the processor to, after executing determining the corresponding node of the acquired node image in the map, further perform the steps of: calculating the similarity between the characteristics of the node images corresponding to the existing nodes in the map and the acquired characteristics of the node images; when the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the acquired node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the corresponding nodes of the acquired node images.
In one embodiment, selecting a starting node from the map that matches the image frame includes: extracting features of the image frames; acquiring characteristics of node images corresponding to nodes included in the map; determining a similarity between features of the image frame and features of the node image; and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
In one embodiment, the computer readable instructions cause the processor to, prior to executing the movement according to the selected trending target movement path, further perform the steps of: extracting features of the image frames; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
In one embodiment, moving according to a selected trending target path of movement includes: sequentially acquiring characteristics of node images corresponding to all nodes included in a trend target motion path; sequentially determining the space state difference quantity between the acquired characteristics of the node images corresponding to the adjacent nodes; and performing movement according to the space state difference quantities determined in sequence.
A service robot comprising a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the steps of: acquiring an image frame; when face detection on the image frame yields that the image frame includes a face image, determining a corresponding target node of the face image in a map; selecting a start node matched with the image frame from the map, the features of the image frame matching the features of the node image corresponding to the start node; selecting a trending target motion path from paths included in the map according to the start node and the target node; and moving according to the selected trending target motion path.
After acquiring an image frame, the service robot can automatically determine the corresponding target node of a face image in the map when the face image is detected in the image frame, locating the target in the map. It then selects the start node matched with the image frame from the map based on the matching relationship between the features of the image frame and the features of the node images corresponding to the nodes in the map, locating its own current position in the map, and selects a trending target motion path from the paths included in the map according to the current node and the target node. Positioning in the map is thus completed through feature matching between images, avoiding the environmental interference suffered by sensing-signal-based positioning, and improving the accuracy of motion control.
In one embodiment, the computer readable instructions cause the processor, after executing the acquiring of the image frame, to further perform the steps of: inputting the image frame into a convolutional neural network model; acquiring the feature maps output by a plurality of network layers included in the convolutional neural network model; sequentially inputting the feature maps into a memory neural network model; and obtaining a result, output by the memory neural network model, of whether the image frame comprises a face image.
In one embodiment, the computer readable instructions cause the processor to, after performing face detection on the image frame to determine that the image frame includes a face image, further perform the steps of: extracting face characteristic data of a face image; inquiring a preset face image matched with the face image according to the face characteristic data; obtaining a target identity recognition result according to a preset face image; a service type associated with the target identification is determined. The computer readable instructions cause the processor, after executing the movement according to the selected trending target movement path, to further perform the steps of: a service trigger entry corresponding to the service type is provided.
In one embodiment, the computer readable instructions cause the processor to, prior to executing the acquiring the image frame, further perform the steps of: selecting an image frame from the image frames acquired according to time sequence; judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not; when the characteristics of the selected image frames accord with the characteristics of the node images, acquiring the selected image frames as the node images; determining corresponding nodes of the acquired node images in the map; the features of the acquired node images are stored corresponding to the determined nodes.
In one embodiment, determining a corresponding node of the acquired node image in the map includes: extracting the characteristics of the obtained node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired features and the extracted features; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
In one embodiment, the computer readable instructions cause the processor to, after executing determining the corresponding node of the acquired node image in the map, further perform the steps of: calculating the similarity between the characteristics of the node images corresponding to the existing nodes in the map and the acquired characteristics of the node images; when the similarity between the features of the node images corresponding to the existing nodes in the map and the features of the acquired node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the corresponding nodes of the acquired node images.
In one embodiment, selecting a starting node from the map that matches the image frame includes: extracting features of the image frames; acquiring characteristics of node images corresponding to nodes included in the map; determining a similarity between features of the image frame and features of the node image; and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
In one embodiment, the computer readable instructions cause the processor to, prior to executing the movement according to the selected trending target movement path, further perform the steps of: extracting features of the image frames; acquiring characteristics of a node image corresponding to the initial node; determining a spatial state difference amount between the features of the image frame and the features of the node image; and performing movement according to the space state difference quantity.
In one embodiment, moving according to a selected trending target path of movement includes: sequentially acquiring characteristics of node images corresponding to all nodes included in a trend target motion path; sequentially determining the space state difference quantity between the acquired characteristics of the node images corresponding to the adjacent nodes; and performing movement according to the space state difference quantities determined in sequence.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a non-volatile computer readable storage medium, and where the program, when executed, may include processes in the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (16)

1. A method of motion control, the method comprising:
acquiring an image frame;
when the image frame is subjected to face detection to obtain that the image frame comprises a face image, corresponding target nodes of the face image in a map are determined, the map comprises a plurality of nodes, each node is provided with a one-to-one corresponding node image, the nodes are positions obtained by projecting positions of the image frame acquired from a natural space into a map space, the node images are images acquired at positions in the natural space with projection relation with the nodes, and the target nodes are positions obtained by projecting positions of a target in the natural space into the map space;
Selecting a starting node matched with the image frame from the map according to the characteristics of the node image corresponding to each node in the map; the characteristics of the image frames are matched with the characteristics of the node images corresponding to the initial nodes;
selecting a trend target motion path from paths included in the map according to the starting node and the target node;
calculating a change matrix between the characteristics of the image frame and the characteristics of the node image corresponding to the starting node, decomposing the change matrix to obtain a rotation matrix and a displacement matrix, respectively obtaining a space angle difference quantity and a space position difference quantity between the characteristics of the image frame and the characteristics of the node image according to the rotation matrix and the displacement matrix, determining the current movement direction according to the space angle difference quantity, determining the current movement distance according to the space position difference quantity, moving the determined distance according to the determined direction, and moving to the starting node;
and acquiring the characteristics of a node image corresponding to a second node adjacent to the initial node, which is included in the trend target motion path, and moving to a second node in a map according to the spatial angle difference and the spatial position difference between the characteristics of the node image corresponding to the initial node and the characteristics of the node image corresponding to the second node, and continuing to move from the second node according to the selected trend target motion path until reaching the target node.
2. The method of claim 1, wherein after the capturing the image frames, the method further comprises:
inputting the image frame into a convolutional neural network model;
acquiring feature maps output by a plurality of network layers included in the convolutional neural network model;
sequentially inputting each feature map into a memory neural network model;
and obtaining a result of whether the image frame output by the memory neural network model comprises a face image.
3. The method of claim 1, wherein, when face detection on the image frame yields that the image frame includes a face image, after the corresponding target node of the face image in the map is determined, the method further comprises:
extracting face characteristic data of the face image;
inquiring a preset face image matched with the face image according to the face characteristic data;
obtaining a target identity recognition result according to the preset face image;
determining a service type associated with the target identity recognition result;
after the movement according to the selected trend target movement path, the method further comprises:
providing a service trigger entry corresponding to the service type.
4. The method of claim 1, wherein prior to the capturing the image frame, the method further comprises:
selecting an image frame from the image frames acquired according to time sequence;
judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not;
when the characteristics of the selected image frames accord with the characteristics of the node images, acquiring the selected image frames as the node images;
determining corresponding nodes of the acquired node images in a map;
storing the acquired characteristics of the node image corresponding to the determined node.
5. The method of claim 4, wherein said determining the corresponding node in the map for the acquired node image comprises:
extracting the characteristics of the obtained node images;
acquiring characteristics of node images corresponding to existing nodes in the map;
determining a change matrix between the acquired feature and the extracted feature;
and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
6. The method of claim 4, wherein, after the corresponding node of the acquired node image in the map is determined, the method further comprises:
Calculating the similarity between the characteristics of the node images corresponding to the existing nodes in the map and the acquired characteristics of the node images;
when the similarity between the features of the node image corresponding to the existing node in the map and the acquired features of the node image exceeds a preset similarity threshold, generating a circular path comprising the existing node in the map according to the node corresponding to the acquired node image.
7. The method according to claim 1, wherein the selecting a starting node from the map that matches the image frame according to the feature of the node image corresponding to each node in the map comprises:
extracting features of the image frames;
acquiring characteristics of node images corresponding to nodes included in the map;
determining a similarity between features of the image frame and features of the node image;
and selecting a node corresponding to the characteristic of the node image with the highest similarity to obtain a starting node matched with the image frame.
8. A motion control apparatus, the apparatus comprising:
an acquisition module for acquiring an image frame;
the determining module is used for determining corresponding target nodes of the face image in a map when the face detection is carried out on the image frame to obtain the image frame comprising the face image, the map comprises a plurality of nodes, each node is provided with a one-to-one corresponding node image, the nodes are positions obtained by projecting positions of the image frame acquired from a natural space into a map space, the node images are images acquired at positions in the natural space with projection relation with the nodes, and the target nodes are positions obtained by projecting positions of a target in the natural space into the map space;
The picking module is used for selecting a starting node matched with the image frame from the map according to the characteristics of the node image corresponding to each node in the map; the characteristics of the image frame are matched with the characteristics of the node image corresponding to the starting node;
the selecting module is used for selecting a trend target motion path from paths included in the map according to the starting node and the target node;
the motion module is used for calculating a change matrix between the characteristics of the image frame and the characteristics of the node image corresponding to the initial node, decomposing the change matrix to obtain a rotation matrix and a displacement matrix, respectively obtaining a space angle difference quantity and a space position difference quantity between the characteristics of the image frame and the characteristics of the node image according to the rotation matrix and the displacement matrix, determining the current motion direction according to the space angle difference quantity, determining the current motion distance according to the space position difference quantity, moving the determined distance according to the determined direction, and moving to the initial node;
and acquiring the characteristics of a node image corresponding to a second node adjacent to the initial node, which is included in the trend target motion path, and moving to a second node in a map according to the spatial angle difference and the spatial position difference between the characteristics of the node image corresponding to the initial node and the characteristics of the node image corresponding to the second node, and continuing to move from the second node according to the selected trend target motion path until reaching the target node.
9. The apparatus of claim 8, wherein the apparatus further comprises:
the detection module is used for inputting the image frame into a convolutional neural network model; acquiring feature maps output by a plurality of network layers included in the convolutional neural network model; sequentially inputting each feature map into a memory neural network model; and obtaining a result, output by the memory neural network model, of whether the image frame comprises a face image.
10. The apparatus of claim 8, wherein the apparatus further comprises:
the identification module is used for extracting face characteristic data of the face image; inquiring a preset face image matched with the face image according to the face characteristic data; obtaining a target identity recognition result according to the preset face image; determining a service type associated with the target identity recognition result;
and the service module is used for providing a service triggering entry corresponding to the service type.
11. The apparatus of claim 8, wherein the apparatus further comprises:
the map construction module is used for selecting image frames from the image frames acquired according to time sequence; judging whether the characteristics of the selected image frames accord with the characteristics of preset node images or not; when the characteristics of the selected image frames accord with the characteristics of the node images, acquiring the selected image frames as the node images; determining corresponding nodes of the acquired node images in a map; storing the acquired characteristics of the node image corresponding to the determined node.
12. The apparatus of claim 11, wherein the map construction module is further configured to extract features of the acquired node images; acquiring characteristics of node images corresponding to existing nodes in the map; determining a change matrix between the acquired feature and the extracted feature; and determining corresponding nodes of the acquired node images in the map according to the nodes and the change matrix.
13. The apparatus of claim 11, wherein the map construction module is further configured to calculate a similarity between a feature of a node image corresponding to an existing node in the map and the obtained feature of the node image; when the similarity between the features of the node images corresponding to the existing nodes in the map and the obtained features of the node images exceeds a preset similarity threshold, generating a circular path comprising the existing nodes in the map according to the obtained nodes corresponding to the node images.
14. The apparatus of claim 8, wherein the picking module is further configured to extract features of the image frame; acquire the features of the node images corresponding to the nodes included in the map; determine a similarity between the features of the image frame and the features of each node image; and select the node corresponding to the node image features with the highest similarity to obtain a starting node matched with the image frame.
15. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 7.
16. A service robot comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method of any of claims 1 to 7.
CN201710365516.XA 2017-05-22 2017-05-22 Motion control method, motion control device, computer equipment and service robot Active CN107341442B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710365516.XA CN107341442B (en) 2017-05-22 2017-05-22 Motion control method, motion control device, computer equipment and service robot
PCT/CN2018/085065 WO2018214706A1 (en) 2017-05-22 2018-04-28 Movement control method, storage medium, computer apparatus, and service robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710365516.XA CN107341442B (en) 2017-05-22 2017-05-22 Motion control method, motion control device, computer equipment and service robot

Publications (2)

Publication Number Publication Date
CN107341442A CN107341442A (en) 2017-11-10
CN107341442B true CN107341442B (en) 2023-06-06

Family

ID=60221306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710365516.XA Active CN107341442B (en) 2017-05-22 2017-05-22 Motion control method, motion control device, computer equipment and service robot

Country Status (2)

Country Link
CN (1) CN107341442B (en)
WO (1) WO2018214706A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341442B (en) * 2017-05-22 2023-06-06 腾讯科技(上海)有限公司 Motion control method, motion control device, computer equipment and service robot
CN108236777A (en) * 2018-01-08 2018-07-03 深圳市易成自动驾驶技术有限公司 It picks up ball method, pick up ball vehicle and computer readable storage medium
KR20200010640A (en) * 2018-06-27 2020-01-31 삼성전자주식회사 Method and device to estimate ego motion using motion recognition model and method and device to train motion recognition model
CN110794951A (en) * 2018-08-01 2020-02-14 北京京东尚科信息技术有限公司 Method and device for determining shopping instruction based on user action
CN109389156B (en) * 2018-09-11 2022-05-03 深圳大学 Training method and device of image positioning model and image positioning method
CN109579847B (en) * 2018-12-13 2022-08-16 歌尔股份有限公司 Method and device for extracting key frame in synchronous positioning and map construction and intelligent equipment
CN111144275A (en) * 2019-12-24 2020-05-12 中石化第十建设有限公司 Intelligent running test system and method based on face recognition
CN111241943B (en) * 2019-12-31 2022-06-21 浙江大学 Scene recognition and loopback detection method based on background target and triple loss
CN113343739B (en) * 2020-03-02 2022-07-22 杭州萤石软件有限公司 Relocating method of movable equipment and movable equipment
CN111506104B (en) * 2020-04-03 2021-10-01 北京邮电大学 Method and device for planning position of unmanned aerial vehicle
CN111815738B (en) * 2020-06-15 2024-01-12 北京京东乾石科技有限公司 Method and device for constructing map
CN112528728B (en) * 2020-10-16 2024-03-29 深圳银星智能集团股份有限公司 Image processing method and device for visual navigation and mobile robot
CN112464989B (en) * 2020-11-02 2024-02-20 北京科技大学 Closed loop detection method based on target detection network
CN112914601B (en) * 2021-01-19 2024-04-02 深圳市德力凯医疗设备股份有限公司 Obstacle avoidance method and device for mechanical arm, storage medium and ultrasonic equipment
CN114359910A (en) * 2021-12-30 2022-04-15 科大讯飞股份有限公司 Text point-reading method, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006195969A (en) * 2004-12-14 2006-07-27 Honda Motor Co Ltd Apparatus for generating movement path for autonomous mobile robot
JP2015180974A (en) * 2015-07-17 2015-10-15 株式会社ナビタイムジャパン Information processing system including hierarchal map data, information processing program, information processor and information processing method
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN106574975A (en) * 2014-04-25 2017-04-19 三星电子株式会社 Trajectory matching using peripheral signal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9250081B2 (en) * 2005-03-25 2016-02-02 iRobot Corporation Management of resources for SLAM in large environments
CN102411368B (en) * 2011-07-22 2013-10-09 北京大学 Active-vision face tracking method and tracking system for a robot
US9218529B2 (en) * 2012-09-11 2015-12-22 Southwest Research Institute 3-D imaging sensor based location estimation
US10068373B2 (en) * 2014-07-01 2018-09-04 Samsung Electronics Co., Ltd. Electronic device for providing map information
CN104236548B (en) * 2014-09-12 2017-04-05 清华大学 Indoor autonomous navigation method for a micro aerial vehicle (MAV)
CN105911992B (en) * 2016-06-14 2019-02-22 广东技术师范学院 Automatic path planning method for a mobile robot, and mobile robot
CN107341442B (en) * 2017-05-22 2023-06-06 腾讯科技(上海)有限公司 Motion control method, motion control device, computer equipment and service robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006195969A (en) * 2004-12-14 2006-07-27 Honda Motor Co Ltd Apparatus for generating movement path for autonomous mobile robot
CN106574975A (en) * 2014-04-25 2017-04-19 三星电子株式会社 Trajectory matching using peripheral signal
JP2015180974A (en) * 2015-07-17 2015-10-15 株式会社ナビタイムジャパン Information processing system including hierarchical map data, information processing program, information processing device and information processing method
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 Robot navigation map construction method based on rat-brain hippocampal spatial cells

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dynamic window based approach to mobile robot motion control in the presence of moving obstacles; Seder M, et al.; Proceedings of the 2007 IEEE International Conference on Robotics and Automation; pp. 1986-1991 *
Working day movement model; Ekman F, et al.; Proceedings of the 1st ACM SIGMOBILE Workshop on Mobility Models; pp. 33-40 *
Crowdsensing-based wireless indoor localization; Wu Chenshu; Tsinghua University; pp. 1-120 *

Also Published As

Publication number Publication date
WO2018214706A1 (en) 2018-11-29
CN107341442A (en) 2017-11-10

Similar Documents

Publication Publication Date Title
CN107341442B (en) Motion control method, motion control device, computer equipment and service robot
CN109508678B (en) Training method for a face detection model, and method and device for detecting face key points
CN111126304B (en) Augmented reality navigation method based on indoor natural scene image deep learning
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
CN108200334B (en) Image shooting method and device, storage medium and electronic equipment
US8855369B2 (en) Self learning face recognition using depth based tracking for database generation and update
US11238653B2 (en) Information processing device, information processing system, and non-transitory computer-readable storage medium for storing program
CN109934847B (en) Method and device for estimating the pose of a weakly textured three-dimensional object
CN111062263B (en) Method, apparatus, computer apparatus and storage medium for hand gesture estimation
WO2020007483A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
CN111523545B (en) Article searching method combined with depth information
KR20160003066A (en) Monocular visual slam with general and panorama camera movements
CN113632097B (en) Method, device, equipment and storage medium for predicting relevance between objects
US20160210761A1 (en) 3d reconstruction
KR102464271B1 (en) Pose acquisition method, apparatus, electronic device, storage medium and program
KR20140026629A (en) Dynamic gesture recognition process and authoring system
CN113781519A (en) Target tracking method and target tracking device
JP2018120283A (en) Information processing device, information processing method and program
CN114241379A (en) Passenger abnormal behavior identification method, device and equipment and passenger monitoring system
CN115482556A (en) Method for key point detection model training and virtual character driving and corresponding device
US20240161254A1 (en) Information processing apparatus, information processing method, and program
CN115471863A (en) Three-dimensional posture acquisition method, model training method and related equipment
Altuntaş et al. Comparison of 3-dimensional SLAM systems: RTAB-Map vs. Kintinuous
CN116824641B (en) Gesture classification method, device, equipment and computer storage medium
CN111531546B (en) Robot pose estimation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant