CN111932623A - Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof - Google Patents

Info

Publication number
CN111932623A
Authority
CN
China
Prior art keywords
face
robot
mobile robot
camera
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010799010.1A
Other languages
Chinese (zh)
Inventor
何山
么子赢
宋涛
霍向
吴新开
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lobby Technology Co ltd
Original Assignee
Beijing Lobby Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lobby Technology Co ltd filed Critical Beijing Lobby Technology Co ltd
Priority to CN202010799010.1A priority Critical patent/CN111932623A/en
Publication of CN111932623A publication Critical patent/CN111932623A/en

Classifications

    • G06T7/73 - Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06F16/29 - Information retrieval; geographical information databases
    • G06F18/2411 - Pattern recognition; classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V40/161 - Human faces; detection, localisation, normalisation
    • G06V40/168 - Human faces; feature extraction, face representation
    • G06V40/172 - Human faces; classification, e.g. identification
    • G06T2207/30201 - Indexing scheme for image analysis; subject of image: face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for automatically acquiring and labeling face data based on a mobile robot, and electronic equipment thereof. A mobile robot moves along a planned path, shoots face photos at different angles, collects the features of the face at those angles, sets new face number information for the face, and stores the face number information together with the multi-angle feature information in a face feature database. The method is simple to implement, highly reliable, and able to collect and label a large amount of face data efficiently using the mobile robot.

Description

Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof
Technical Field
The invention relates to the technical field of automatic face data acquisition and labeling, in particular to a method and a system for automatically acquiring and labeling face data based on a mobile robot and electronic equipment thereof.
Background
With the development and application of machine learning, artificial intelligence is gradually entering social life, and people's standard of living keeps improving. Machine learning, and deep learning in particular, requires a large amount of calibrated data for model training, but manual data collection and calibration are inefficient and costly. The vslam technique can estimate the motion of the camera and compute the positions of objects in an image, which saves the cost of manual calibration; if automatically generated sample data with high-quality labels are available, the training of a classifier can be greatly accelerated. An autonomous mobile robot carrying a camera can collect and calibrate face data while moving autonomously in a designated area, which greatly improves the efficiency of image data collection and calibration and automatically produces sample data with high-quality labels.
The present method has a certain novelty with respect to current research: for example, patent application CN202010384123.5 provides a face image acquisition method, apparatus and electronic device, and patent application CN202010071240.6 provides a trinocular-camera-based three-dimensional face acquisition method and apparatus, but neither involves acquiring and calibrating face data with an autonomous mobile robot.
Therefore, in order to enrich algorithm research in related fields, acquire and calibrate images more efficiently, and reduce labor costs, a method and system for automatically acquiring and labeling face data based on a mobile robot, together with electronic equipment thereof, are designed.
Disclosure of Invention
In order to solve the technical problems, the embodiment of the application provides a method and a system for automatically acquiring and labeling face data based on a mobile robot and an electronic device thereof. The application is realized by the following technical scheme:
a face data automatic acquisition labeling method based on a mobile robot is applied to a face data automatic acquisition labeling system, and comprises the following steps:
step S1, the automatic face data acquisition and labeling system carries out initialization operation;
step S2, the mobile robot moves on the established path;
step S3, after the camera of the mobile robot collects the face image, the position of the current face in the corresponding image detection frame is detected through a face detection algorithm, the camera is adjusted to collect the currently detected face image, and the face feature is extracted from the detected face image;
step S4, matching the extracted face features with features in a built face feature database to determine whether the currently extracted face features already exist in the face feature database; if they exist, returning to step S2 and the robot continuing to move on the established path; if the currently extracted face features are not in the face feature database, executing step S5;
step S5, the robot determines the pose change of the camera of the mobile robot based on the motion information and the vslam algorithm;
step S6, obtaining the three-dimensional space position of the detected face according to the pose change of the mobile robot camera and a triangulation method;
step S7, after the three-dimensional space position of the detected face is obtained, the mobile robot plans a path to shoot face photos at different angles, collects the features of the face at those angles, sets new face number information for the face, and marks the position of the face relative to the robot at the moment each photo is taken into the photo information of that face photo;
and step S8, storing the new face number information and the characteristic information of the face with different angles into a face characteristic database, returning to execute step S2, and returning the mobile robot to the set path to continue moving.
Further, in step S1, the automatic face data acquisition and labeling system performs an initialization operation, which specifically includes:
step S101, importing initialization parameters and a mark grid, wherein the initialization parameters comprise: the system collects time, the number of collected human face targets, a human face collection similarity threshold value, the maximum moving speed of the mobile robot and the surrounding movement safety distance; the marking grid includes: a plurality of marking points on the predetermined cruising route are determined;
step S102, importing an environment map, and setting the position of the current robot as a positioning zero point;
step S103, converting the environment map into an environment grid map;
and step S104, setting the positions of various obstacles including walls in each grid of the environment grid map as unviable grids, and setting other grids as passable grids.
Furthermore, the grid size in the environment grid map is divided according to the outline size of the robot, and the side length of each unit grid is the maximum outline length of the robot plus a preset obstacle avoidance safety distance.
Further, in step S3, the detecting the position of the current face in the corresponding image detection frame by the face detection algorithm specifically includes:
segmenting the face part in the image by using the HOG + SVM method, extracting features of the face part to form a feature vector, and inferring the position of the complete face according to the face feature vector information and the position information of the features within the image detection frame.
Further, in step S5, the determining the pose change of the mobile robot camera based on the motion information and the vslam algorithm includes:
501, calculating the pose change of the robot between the two consecutive photos it shot, according to the motion information of the mobile robot's localization change combined with the data of the odom (odometry) sensor and the imu sensor;
502, calculating the relative pose change of the camera between the two photos using a vslam algorithm;
and 503, calculating the absolute pose change of the camera by combining the pose change of the robot with the relative pose change of the camera.
Further, in step S6, the obtaining a three-dimensional spatial position of the detected face according to the pose change of the mobile robot camera and a triangulation method specifically includes:
601, calculating the spatial position of the face relative to the cameras by a triangulation method according to the absolute pose changes of the cameras for shooting the two adjacent pictures and the positions of the face in the two pictures;
and step 602, calculating the spatial position of the face relative to the robot positioning zero point by combining the positioning information when the robot takes the two continuous pictures and the spatial position of the face relative to the camera.
Further, in step S7, after the three-dimensional spatial position of the detected face is obtained, the mobile robot plans a path to take face photos from different angles, which specifically includes:
the mobile robot sets the position of the human face in the environment as the centre of a circular motion path, takes the surrounding movement safety distance set at system initialization as the radius, plans the circular path at that position as its moving path, and takes human face photos at different angles in one circular motion.
The application further provides a mobile-robot-based automatic face data acquisition and labeling system, which includes a robot environment detection module, a robot positioning module, a processor, a memory and a background face feature database:
the robot environment detection module comprises a visual sensor, an ultrasonic sensor, an infrared sensor, a laser radar and a millimeter wave radar;
the robot positioning module comprises a vision estimation module, a robot IMU, a robot odometer and a robot laser radar;
the background face feature database is used for storing the acquired and labeled face feature information and the face number information;
the memory is used for storing a program of the automatic face data acquisition and labeling method;
and the processor is used for executing the program stored in the memory so as to realize the automatic face data acquisition and labeling method.
Furthermore, the vision sensor comprises a monocular camera, a binocular camera and a depth camera.
An electronic device comprising a memory unit having a computer program stored thereon and a processor unit implementing the above method when executing the program.
Compared with the prior art, the method is simple to implement, high in reliability and capable of efficiently collecting and labeling a large amount of face data by using the mobile robot.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
FIG. 1 is a schematic flow chart of an automatic face data acquisition and labeling method according to the present application;
fig. 2 is a diagram illustrating a scenario application example of an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Fig. 1 is a schematic flow chart of the automatic face data acquisition and labeling method based on a mobile robot according to the present application; the method includes the following steps:
step S1, initializing the system;
the system initialization comprises the following steps: importing various parameters and marking grids;
the importing various parameters includes: the system collects the time length (8 hours in the implementation of the invention), the number of the human face targets is collected (1000 in the implementation of the invention), the human face collection similarity threshold value (95% in the implementation of the invention), the maximum moving speed of the mobile robot (0.5 m/s in the implementation of the invention) and the safe distance of the surrounding movement (2 m in the implementation of the invention).
The marking grid comprises multiple mark points on the predetermined cruise route.
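As an illustration only, the initialization parameters listed above could be grouped as in the following sketch; the class name, field names and map representation are hypothetical and not part of the application.

```python
# A minimal sketch of the initialization parameters imported in step S1,
# using the example values of this embodiment; all names are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AcquisitionConfig:
    collection_hours: float = 8.0         # system collection duration
    target_face_count: int = 1000         # number of face targets to collect
    similarity_threshold: float = 0.95    # face collection similarity threshold
    max_speed_mps: float = 0.5            # maximum moving speed of the mobile robot
    surround_safety_m: float = 2.0        # surrounding movement safety distance
    # marking grid: mark points (x, y) on the predetermined cruise route
    mark_points: List[Tuple[float, float]] = field(default_factory=list)
```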
And importing an environment map, and setting the current position of the robot as a positioning zero point.
Converting the environment map into an environment grid map;
setting the positions of various barriers including walls in each grid of the environment grid map as unviable grids, and setting other grids as passable grids;
the division of the grid size in the environment grid map depends on: the size of the outline of the robot is the length of the side of each unit grid, which is the length of the maximum outline of the robot plus a preset obstacle avoidance safety distance.
Step S2, the mobile robot moves on the predetermined path;
step S3, after the camera of the mobile robot collects the face image, the position of the current face in the corresponding image detection frame is detected through a face detection algorithm, the camera is adjusted to collect the currently detected face image, and the face feature is extracted;
the face detection algorithm detects the position of the current face in the corresponding image detection frame, and comprises the steps of utilizing an HOG + SVM method to segment the face part in the image, extracting the features of the face part to form a feature vector, and inferring the position of the complete face according to the feature vector information of the face and the position information of the features in the image detection frame.
The HOG + SVM method is as follows:
HOG (Histogram of Oriented Gradients) is a feature descriptor used for object detection: it computes and accumulates histograms of gradient orientations over local regions of an image to form a feature. It mainly uses the gradient information of feature points in the picture as feature values and can be used to detect pedestrians and some other objects.
SVM (Support Vector Machine) is a common discriminative method; in pedestrian detection it can be used as the classifier that distinguishes pedestrians from non-pedestrians.
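The following sketch illustrates one possible HOG + SVM window classifier of the kind described above; it assumes a linear SVM already trained on face/non-face windows, and the window size, stride and function names are illustrative assumptions rather than parameters from the application.

```python
# A minimal HOG + SVM sliding-window sketch, assuming a pre-trained linear SVM
# (face vs. non-face); parameters and names are illustrative only.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_feature(window_gray):
    # 64x64 grayscale window -> HOG feature vector (gradient-orientation histograms)
    return hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def detect_faces(image_bgr, svm: LinearSVC, win=64, stride=16):
    """Slide a window over the image, classify each window with the SVM and
    return the boxes (x, y, w, h) on the face side of the decision surface."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = []
    for y in range(0, gray.shape[0] - win, stride):
        for x in range(0, gray.shape[1] - win, stride):
            feat = hog_feature(gray[y:y + win, x:x + win])
            if svm.decision_function([feat])[0] > 0:
                boxes.append((x, y, win, win))
    return boxes
```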
Step S4, matching the detected face features with features in the built face feature database, and determining whether the current face features are in the face feature database, that is, whether the current detected face is a face with determined face number information. If the currently detected face features already exist in the face feature database, the process returns to step S2, and the robot continues to move on the predetermined path. If the currently detected face features are not in the face feature database, executing step S5;
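A sketch of how this database match could be implemented is given below, using cosine similarity against the 95% similarity threshold from initialization; the database layout (face number mapped to stored feature vectors) and the function name are assumptions, not part of the application.

```python
# A minimal sketch of step S4: matching an extracted face feature against the
# face feature database with the similarity threshold from initialization.
import numpy as np

def match_face(feature, face_db, threshold=0.95):
    """Return the face number whose stored features best match `feature`
    with cosine similarity >= threshold, or None if the face is new."""
    f = feature / np.linalg.norm(feature)
    best_id, best_sim = None, threshold
    for face_id, stored_features in face_db.items():
        for s in stored_features:
            sim = float(np.dot(f, s / np.linalg.norm(s)))   # cosine similarity
            if sim >= best_sim:
                best_id, best_sim = face_id, sim
    return best_id
```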
step S5, the robot determines the pose change of the camera of the mobile robot based on the motion information and the vslam algorithm;
the method for determining the pose change of the mobile robot camera based on the motion information and the vslam algorithm comprises the following steps:
and 501, calculating the pose change between two continuous photos shot by the robot according to the positioning change of the mobile robot and by combining the data of the odom sensor and the imu sensor.
And 502, calculating the relative pose change of the camera for shooting the two pictures by using a vslam algorithm.
And 503, calculating the absolute pose change of the camera by combining the pose change of the robot and the pose change of the camera.
The vslam technique refers to visual simultaneous localization and mapping: from the images taken by a camera, the change of the camera's six-degree-of-freedom pose trajectory can be calculated, and the spatial positions of objects (map points) in the images can be computed at the same time, so that a map is built and used for localization. This document uses the vslam technique to calculate the pose change between two images taken by the camera.
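One possible way to combine the robot motion from odometry/IMU with the up-to-scale vslam result (step 503) is sketched below; the rigid-mounting simplification, in which the camera translation magnitude is taken to equal the robot's, is an assumption for illustration only.

```python
# A minimal sketch of step 503: recovering a metrically scaled camera pose
# change by combining the odom/imu robot motion with the relative pose from
# vslam; names, and the rigid-mounting simplification, are assumptions.
import numpy as np

def absolute_camera_motion(R_vslam, t_vslam, robot_translation_m):
    """R_vslam: 3x3 relative rotation of the camera between the two photos.
    t_vslam: translation direction from monocular vslam (scale unknown).
    robot_translation_m: metric distance moved by the robot (odom + imu).
    Returns (R, t) with t in metres, assuming the camera moves with the robot."""
    t_metric = t_vslam / np.linalg.norm(t_vslam) * robot_translation_m
    return R_vslam, t_metric
```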
Step S6, obtaining the three-dimensional space position of the detected face according to the pose change of the mobile robot camera and a triangulation method;
the method for obtaining the three-dimensional space position of the detected face according to the pose change of the mobile robot camera and a triangulation method comprises the following steps:
601, calculating the spatial position of the face relative to the cameras by a triangulation method according to the absolute pose changes of the cameras for shooting the two adjacent pictures and the positions of the face in the two pictures;
the triangulation method refers to: the depth of a certain object, characteristic point or pixel point in two pictures is estimated by a Triangulation method, and the three-dimensional space position of the object, characteristic point or pixel in the picture can be obtained by combining camera external parameters and picture coordinates
And step 602, calculating the spatial position of the face relative to the robot positioning zero point by combining the positioning information when the robot takes the two continuous pictures and the spatial position of the face relative to the camera.
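The following sketch shows one way to carry out the triangulation of step 601 with OpenCV; the intrinsic matrix, the camera pose change and the pixel coordinates are illustrative assumptions, not values from the application.

```python
# A minimal sketch of step 601: triangulating the face position from two views
# given the absolute camera pose change; all numeric values are assumptions.
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])                 # camera intrinsics (assumed)

R = np.eye(3)                                       # camera pose change between shots
t = np.array([[0.40], [0.00], [0.05]])              # assumed ~0.4 m sideways motion

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first shot: reference frame
P2 = K @ np.hstack([R.T, -R.T @ t])                 # second shot, expressed in frame 1

pt1 = np.array([[330.0], [250.0]])                  # face centre in image 1 (pixels)
pt2 = np.array([[300.0], [248.0]])                  # face centre in image 2 (pixels)

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)       # homogeneous 4x1 result
face_in_cam1 = (X_h[:3] / X_h[3]).ravel()           # 3D face position, camera-1 frame
```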
Step S7, after the three-dimensional space position of the detected face is obtained, the mobile robot plans a path to shoot face photos from different angles, collects the features of the face at those angles, sets new face number information for the face, and marks the position of the face relative to the robot at the moment each photo is taken into the photo information.
Step S8, the new face number information and the feature information of the face at different angles are stored in the face feature database; the process then returns to step S2, and the mobile robot returns to the predetermined path and continues moving.
In step S7, the mobile robot sets the position of the face in the environment as the centre of a circular motion path, takes the surrounding movement safety distance set at system initialization as the radius, plans the circular path at that position as its moving path, and takes face pictures at different angles in one circular motion.
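A sketch of planning such a circular shooting path is given below; the radius defaults to the 2 m surrounding-movement distance of this embodiment, and the waypoint count and function name are illustrative assumptions.

```python
# A minimal sketch of the circular shooting path of step S7: waypoints on a
# circle centred at the detected face, each heading toward the face.
import numpy as np

def circular_waypoints(face_xy, radius=2.0, n_points=12):
    """Return n_points (x, y, heading) waypoints around the face position so
    that the camera keeps the face in view during one circular motion."""
    cx, cy = face_xy
    waypoints = []
    for k in range(n_points):
        a = 2.0 * np.pi * k / n_points
        x, y = cx + radius * np.cos(a), cy + radius * np.sin(a)
        heading = np.arctan2(cy - y, cx - x)        # point the robot at the face
        waypoints.append((x, y, heading))
    return waypoints
```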
The application also provides a mobile-robot-based automatic face data acquisition and labeling system, which includes:
the processor is used for executing the automatic face data acquisition and labeling method program based on the mobile robot;
the robot environment detection module comprises a visual sensor, an ultrasonic sensor, an infrared sensor, a laser radar and a millimeter wave radar;
the robot positioning module comprises a vision estimation module and a robot IMU;
the memory is used for storing a program of the automatic face data acquisition and labeling method based on the mobile robot;
and the background human face characteristic database system is used for storing the collected and labeled human face characteristic information and human face number information.
The vision sensor includes: monocular camera, binocular camera, degree of depth camera.
In some embodiments, part or all of the computer program may be loaded and/or installed onto the device via ROM. When the computer program is loaded and executed, it may carry out one or more steps of the method described above.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A face data automatic acquisition labeling method based on a mobile robot is applied to a face data automatic acquisition labeling system and is characterized by comprising the following steps:
step S1, the automatic face data acquisition and labeling system carries out initialization operation;
step S2, the mobile robot moves on the established path;
step S3, after the camera of the mobile robot collects the face image, the position of the current face in the corresponding image detection frame is detected through a face detection algorithm, the camera is adjusted to collect the currently detected face image, and the face feature is extracted from the detected face image;
step S4, matching the extracted face features with features in a built face feature database to determine whether the currently extracted face features already exist in the face feature database; if they exist, returning to step S2 and the robot continuing to move on the established path; if the currently extracted face features are not in the face feature database, executing step S5;
step S5, the robot determines the pose change of the camera of the mobile robot based on the motion information and the vslam algorithm;
step S6, obtaining the three-dimensional space position of the detected face according to the pose change of the mobile robot camera and a triangulation method;
step S7, after the three-dimensional space position of the detected face is obtained, the mobile robot plans a path to shoot face photos at different angles, collects the features of the face at those angles, sets new face number information for the face, and marks the position of the face relative to the robot at the moment each photo is taken into the photo information of that face photo;
and step S8, storing the new face number information and the characteristic information of the face with different angles into a face characteristic database, returning to execute step S2, and returning the mobile robot to the set path to continue moving.
2. The method for automatically acquiring and labeling face data as claimed in claim 1, wherein in step S1, the initialization operation of the system for automatically acquiring and labeling face data specifically comprises:
step S101, importing initialization parameters and a mark grid, wherein the initialization parameters comprise: the system collects time, the number of collected human face targets, a human face collection similarity threshold value, the maximum moving speed of the mobile robot and the surrounding movement safety distance; the marking grid includes: a plurality of marking points on the predetermined cruising route are determined;
step S102, importing an environment map, and setting the position of the current robot as a positioning zero point;
step S103, converting the environment map into an environment grid map;
and step S104, setting the positions of various obstacles including walls in each grid of the environment grid map as unviable grids, and setting other grids as passable grids.
3. The method for automatically acquiring and labeling face data as claimed in claim 1, wherein the division of the grid size in the environment grid map depends on the outline size of the robot, and the side length of each unit grid is the maximum outline length of the robot plus a preset obstacle avoidance safety distance.
4. The method for automatically acquiring and labeling face data as claimed in claim 1, wherein in step S3, the detecting the position of the current face in the corresponding image detection frame by the face detection algorithm specifically comprises:
segmenting the face part in the image by using the HOG + SVM method, extracting features of the face part to form a feature vector, and inferring the position of the complete face according to the face feature vector information and the position information of the features within the image detection frame.
5. The automatic face data acquisition and labeling method of claim 1, wherein in step S5, the determining the pose change of the mobile robot camera based on the motion information and the vslam algorithm includes:
501, calculating the pose change of the robot between the two consecutive photos it shot, according to the motion information of the mobile robot's localization change combined with the data of the odom (odometry) sensor and the imu sensor;
502, calculating the relative pose change of the camera for shooting the two pictures by using a vslam algorithm;
and 503, calculating the absolute pose change of the camera by combining the pose change of the robot and the pose change of the camera.
6. The method for automatically acquiring and labeling face data as claimed in claim 1, wherein in step S6, the obtaining of the three-dimensional spatial position of the detected face according to the pose change of the mobile robot camera and triangulation specifically comprises:
601, calculating the spatial position of the face relative to the cameras by a triangulation method according to the absolute pose changes of the cameras for shooting the two adjacent pictures and the positions of the face in the two pictures;
and step 602, calculating the spatial position of the face relative to the robot positioning zero point by combining the positioning information when the robot takes the two continuous pictures and the spatial position of the face relative to the camera.
7. The method for automatically acquiring and labeling face data according to claim 1, wherein in step S7, after the three-dimensional spatial position of the detected face is obtained, the mobile robot plans a path to take pictures of the face from different angles, which specifically includes:
the mobile robot sets the position of the human face in the environment as the centre of a circular motion path, takes the surrounding movement safety distance set at system initialization as the radius, plans the circular path at that position as its moving path, and takes human face photos at different angles in one circular motion.
8. A mobile-robot-based automatic face data acquisition and labeling system, characterized in that the system comprises a robot environment detection module, a robot positioning module, a processor, a memory and a background face feature database:
the robot environment detection module comprises a visual sensor, an ultrasonic sensor, an infrared sensor, a laser radar and a millimeter wave radar;
the robot positioning module comprises a vision estimation module, a robot IMU, a robot odometer and a robot laser radar;
the background face feature database is used for storing the acquired and labeled face feature information and the face number information;
the memory is used for storing a program of the automatic face data acquisition and labeling method;
a processor for executing the program stored in the memory to implement the automatic human face data acquisition and labeling method according to one of claims 1 to 7.
9. The automatic human face data acquisition and annotation system of claim 8, wherein the vision sensor comprises a monocular camera, a binocular camera, a depth camera.
10. An electronic device comprising a memory unit and a processor unit, the memory unit having stored thereon a computer program, characterized in that the processor unit, when executing the program, implements the method according to one of claims 1 to 7.
CN202010799010.1A 2020-08-11 2020-08-11 Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof Pending CN111932623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010799010.1A CN111932623A (en) 2020-08-11 2020-08-11 Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010799010.1A CN111932623A (en) 2020-08-11 2020-08-11 Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof

Publications (1)

Publication Number Publication Date
CN111932623A true CN111932623A (en) 2020-11-13

Family

ID=73307347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010799010.1A Pending CN111932623A (en) 2020-08-11 2020-08-11 Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof

Country Status (1)

Country Link
CN (1) CN111932623A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103472828A (en) * 2013-09-13 2013-12-25 桂林电子科技大学 Mobile robot path planning method based on improvement of ant colony algorithm and particle swarm optimization
WO2018107916A1 (en) * 2016-12-14 2018-06-21 南京阿凡达机器人科技有限公司 Robot and ambient map-based security patrolling method employing same
CN109333535A (en) * 2018-10-25 2019-02-15 同济大学 A kind of guidance method of autonomous mobile robot
US20190281209A1 (en) * 2016-12-02 2019-09-12 SZ DJI Technology Co., Ltd. Photographing control method, apparatus, and control device
US20190333239A1 (en) * 2016-12-02 2019-10-31 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Positioning method and device
CN110728715A (en) * 2019-09-06 2020-01-24 南京工程学院 Camera angle self-adaptive adjusting method of intelligent inspection robot
US20200159246A1 (en) * 2018-11-19 2020-05-21 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Methods and systems for mapping, localization, navigation and control and mobile robot

Similar Documents

Publication Publication Date Title
CN110349250B (en) RGBD camera-based three-dimensional reconstruction method for indoor dynamic scene
US10953545B2 (en) System and method for autonomous navigation using visual sparse map
Fraundorfer et al. Visual odometry: Part ii: Matching, robustness, optimization, and applications
CA2950791C (en) Binocular visual navigation system and method based on power robot
Zhou et al. Self‐supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain
JP2022520019A (en) Image processing methods, equipment, mobile platforms, programs
CN112634451A (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN111242994B (en) Semantic map construction method, semantic map construction device, robot and storage medium
CN109871745A (en) Identify method, system and the vehicle of empty parking space
CN103680291A (en) Method for realizing simultaneous locating and mapping based on ceiling vision
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
WO2024087962A1 (en) Truck bed orientation recognition system and method, and electronic device and storage medium
CN113256731A (en) Target detection method and device based on monocular vision
Aryal Object detection, classification, and tracking for autonomous vehicle
CN113189610B (en) Map-enhanced autopilot multi-target tracking method and related equipment
CN112907625B (en) Target following method and system applied to quadruped bionic robot
CN106408593A (en) Video-based vehicle tracking method and device
Shuai et al. Target recognition and range-measuring method based on binocular stereo vision
CN111932623A (en) Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof
CN115953471A (en) Indoor scene multi-scale vector image retrieval and positioning method, system and medium
Huang et al. Image-based localization for indoor environment using mobile phone
Tu et al. Automatic recognition of civil infrastructure objects in mobile mapping imagery using a markov random field model
CN115790568A (en) Map generation method based on semantic information and related equipment
CN115342811A (en) Path planning method, device, equipment and storage medium
Wang et al. Simultaneous clustering classification and tracking on point clouds using Bayesian filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination