CN112132865A - Personnel identification method and system - Google Patents

Personnel identification method and system

Info

Publication number
CN112132865A
CN112132865A (application CN202010996889.9A)
Authority
CN
China
Prior art keywords
person
robot
image
unit
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010996889.9A
Other languages
Chinese (zh)
Inventor
龚迪琛
李学生
牟春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Delu Power Technology Hainan Co ltd
Original Assignee
Delu Power Technology Hainan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Delu Power Technology Hainan Co ltd filed Critical Delu Power Technology Hainan Co ltd
Priority to CN202010996889.9A priority Critical patent/CN112132865A/en
Publication of CN112132865A publication Critical patent/CN112132865A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a person following method used by a robot to follow a single person, which includes the following steps: S1, the robot confirms the number of people in an image captured by the robot and identifies the person the robot needs to follow; S2, the robot obtains real-time feature information and path information of the person; S3, the robot predicts the person's next position; and S4, the robot determines its detection range for the next frame according to the position of the person predicted in step S3 and follows the person. By predicting the person's next position, the robot determines in advance the region that needs to be examined in the next frame, which increases the speed at which the robot detects and recognizes pedestrians and thus enables the robot to follow the person.

Figure 202010996889

Description

Personnel identification method and system

Technical Field

The invention relates to the field of robot identification methods and systems, and in particular to a person identification method and system.

Background Art

With the advance of technology, existing robots can take over more and more tasks from humans, and following a specific person is an essential capability. Existing methods for robots to follow people are divided into generative methods and discriminative methods. A generative method describes the target's appearance with a feature model and then confirms the target by minimizing the reconstruction error between the tracked target and candidate targets; it concentrates on extracting features of the target itself and ignores the target's background information, so the target is prone to drift or be lost when its appearance changes drastically or it is occluded. Because discriminative following methods can clearly distinguish the person from the background, perform well, and resist interference, they have dominated in recent years, and discriminative methods represented by correlation filtering and deep learning have achieved satisfactory following results.

However, existing discriminative following methods require the robot to use deep learning so that it can accurately distinguish the person from the background, and deep learning does not apply smoothly to person following by robots. Because of the particular nature of the target-following task, the robot can only use the image data of the initial frame and lacks the large amount of data a neural network needs to learn; pedestrian following can only be achieved by transferring data into target tracking through a convolutional neural network. As a result, in existing pedestrian tracking methods the robot's inference is slow, and it is difficult to follow the person again after the person's information is lost.

Summary of the Invention

The purpose of the invention is to provide a person identification method that overcomes the defects of existing pedestrian identification methods, improves the robot's inference speed, and allows the robot to follow a person again after the person's information has been lost. Specifically, a person following method, used by a robot to follow a single person, includes the following steps: S1, the robot confirms the number of people in an image captured by the robot and identifies the person the robot needs to follow; S2, the robot obtains real-time feature information and path information of the person; S3, the robot predicts the person's next position; and S4, the robot determines its detection range for the next frame according to the position of the person predicted in step S3 and follows the person.
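
The step sequence above maps naturally onto a per-frame processing loop. The sketch below is a minimal, non-authoritative illustration of that loop in Python, assuming the camera frame is a NumPy image array; the helper callables identify, get_features and predict, and the simple square search window, are placeholders invented for this example and are not defined by the patent.

```python
from typing import Callable, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def follow_person_step(frame,                      # NumPy image from the robot's camera
                       search_region: Optional[Box],
                       identify: Callable,         # S1: find the followed person in a region
                       get_features: Callable,     # S2: real-time features and path so far
                       predict: Callable           # S3: next-position prediction
                       ) -> Optional[Box]:
    """One pass of the S1-S4 loop; returns the detection range for the next frame."""
    # S4, carried over from the previous frame: only the predicted region is examined.
    if search_region is None:
        search_region = (0, 0, frame.shape[1], frame.shape[0])
    x, y, w, h = search_region
    view = frame[y:y + h, x:x + w]

    # S1: confirm how many people are visible and identify the one to follow.
    person = identify(view)
    if person is None:
        return None  # target lost; fall back to a full-image search next frame

    # S2: refresh the person's real-time feature information and path information.
    features, path = get_features(view, person)    # features kept for re-identification

    # S3: predict the person's next position from the path observed so far.
    next_x, next_y = predict(path)

    # S4: the next frame's detection range is a window centred on the prediction.
    half = max(w, h) // 2   # simple square window; a real system would size it
                            # from the person's bounding box and distance
    return (x + int(next_x) - half, y + int(next_y) - half, 2 * half, 2 * half)
```

Calling such a function once per camera frame and feeding the returned region back in reproduces the behaviour described in steps S1 to S4.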

According to an embodiment of the invention, step S1 includes: S101, the robot confirms the number of people in the image; and S102, if the number of people in the robot's image is one, the robot extracts the features of that person; if the number of people in the robot's image is at least two, the robot performs feature matching on the people in the image to determine the person the robot needs to follow.

According to an embodiment of the invention, in step S102, when the robot performs feature matching on the people in the image, the robot performs the matching by combining depth-field information with the people's feature information.
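
One plausible way to combine the two cues is to score each candidate with a weighted sum of appearance similarity and depth consistency, as in the hedged sketch below. The cosine-similarity appearance term, the linear depth penalty and the 0.7/0.3 weights are assumptions made for illustration; the patent only states that depth-field information and feature information are combined.

```python
import numpy as np

def match_score(cand_feat: np.ndarray, cand_depth: float,
                target_feat: np.ndarray, target_depth: float,
                depth_tolerance: float = 1.0) -> float:
    """Score a candidate person against the followed target by combining
    appearance similarity with depth-field consistency."""
    # Appearance term: cosine similarity of the two feature vectors.
    cos = float(np.dot(cand_feat, target_feat) /
                (np.linalg.norm(cand_feat) * np.linalg.norm(target_feat) + 1e-8))
    # Depth term: penalise candidates whose camera distance (metres) differs
    # from the target's last known distance.
    depth_term = max(0.0, 1.0 - abs(cand_depth - target_depth) / depth_tolerance)
    return 0.7 * cos + 0.3 * depth_term  # weights are illustrative only

def pick_target(candidates, target_feat, target_depth) -> int:
    """candidates: list of (feature_vector, depth) pairs; returns the best index, or -1."""
    scores = [match_score(f, d, target_feat, target_depth) for f, d in candidates]
    return int(np.argmax(scores)) if scores else -1
```

In practice the weights and tolerance would be tuned to the depth camera and the particular re-identification features used.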

According to an embodiment of the invention, step S2 includes: S201, the robot reduces the image to a rectangular image that contains all of the person's information; S202, the robot analyzes the content of the rectangular image to obtain the person's feature information; and S203, the robot obtains the person's path information from the person's direction of movement and distance.
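
Step S201 amounts to cropping the frame down to the person's bounding rectangle, plus a small margin, before any further analysis. The snippet below is an illustrative sketch of such a crop over a NumPy image; the 10% margin and the (x, y, w, h) box format are assumptions, not requirements of the patent.

```python
import numpy as np

def crop_to_person(image: np.ndarray, person_box, margin: float = 0.1) -> np.ndarray:
    """S201: shrink the frame to a rectangle that still contains all of the person,
    so later analysis (S202) only has to process this smaller region."""
    x, y, w, h = person_box
    dx, dy = int(w * margin), int(h * margin)          # keep a little context
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1 = min(image.shape[1], x + w + dx)
    y1 = min(image.shape[0], y + h + dy)
    return image[y0:y1, x0:x1]
```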

According to an embodiment of the invention, step S4 includes: S401, the robot predicts the person's position using a Kalman filter algorithm; and S402, the robot crops the image captured by the robot according to the predicted position of the person.
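
For S401, a common minimal choice is a constant-velocity Kalman filter over the person's image position; S402 then crops a window around the prediction. The sketch below assumes that model and a one-frame time step; the state layout, noise values and window scale are illustrative choices, not values given in the patent.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over the person's image position.
    State: [x, y, vx, vy]; measurement: [x, y]."""

    def __init__(self, x: float, y: float, process_var: float = 1.0, meas_var: float = 10.0):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 100.0                       # state covariance
        self.F = np.array([[1, 0, 1, 0],                 # transition, dt = 1 frame
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)         # only position is measured
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def predict(self):
        """S401: project the state one frame ahead and return the predicted (x, y)."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx: float, zy: float) -> None:
        """Correct the state with the person's detected position in the current frame."""
        z = np.array([zx, zy], float)
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

def detection_window(pred_xy, box_w: int, box_h: int, scale: float = 2.0):
    """S402: crop window centred on the predicted position, sized so the person
    keeps roughly the original scale when fed to the deep learning model."""
    px, py = pred_xy
    w, h = int(box_w * scale), int(box_h * scale)
    return int(px - w / 2), int(py - h / 2), w, h
```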

The invention also discloses a person following system that can implement the above person following method. Specifically, the person following system includes: a capture module, arranged on the robot and used to capture images; a first identification module, which identifies the number of people in the images captured by the capture module and confirms the identity of the person the robot follows; a second identification module, which identifies the person's feature information and path information from the images captured by the capture module; a prediction module, which predicts the person's position; and an adjustment module, which determines the robot's detection range for the next frame.

According to an embodiment of the invention, the first identification module includes: a counting unit, which identifies the number of people in the images captured by the capture module and classifies the images according to the number of people they contain into first images containing one person and second images containing at least two people; a primary identification unit, which identifies the features of the person in a first image; and a secondary identification unit, which performs feature matching on the people in a second image to determine the person the robot needs to follow.

According to an embodiment of the invention, the secondary identification unit performs matching by combining depth-field information with the person's feature information.

According to an embodiment of the invention, the second identification module includes: a first cropping unit, which crops the image captured by the capture module to a rectangular figure containing all of the person's information; a second identification unit, which analyzes the content of the rectangular figure to obtain the person's real-time feature information; and a path acquisition unit, which obtains the person's real-time path information from the images captured by the capture module.

According to an embodiment of the invention, the prediction module includes: a prediction unit, which predicts the person's position based on the recognition result of the first identification unit or the second identification unit; and a second cropping unit, which crops the next frame captured by the capture module according to the prediction result of the prediction unit.
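
To show how these modules could fit together, here is a speculative structural sketch in Python. The class and field names mirror the module numbers used later in the figures (capture module 1, identification modules 2 and 3, prediction module 4, adjustment module 5), but the dataclass layout, the injected callables and the doubled-size detection window in the adjustment step are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

@dataclass
class FirstIdentificationModule:
    """Counting unit 21 plus primary unit 22 and secondary unit 23 (module 2)."""
    detect_people: Callable       # returns a list of person boxes in an image
    extract_features: Callable    # primary unit: re-identification features
    match_target: Callable        # secondary unit: depth + feature matching

    def identify(self, image, depth, target) -> Optional[dict]:
        people = self.detect_people(image)            # counting unit 21
        if len(people) == 1:                          # "first image": one person
            box = people[0]
            return {"box": box, "features": self.extract_features(image, box)}
        if len(people) >= 2 and target is not None:   # "second image": match the target
            return self.match_target(people, depth, target)
        return None

@dataclass
class SecondIdentificationModule:
    """First cropping unit 31, second identification unit 32, path acquisition unit 33."""
    crop: Callable
    features: Callable
    path_from_history: Callable

@dataclass
class PredictionModule:
    """Prediction unit 41 and second cropping unit 42."""
    predict_position: Callable
    crop_next_frame: Callable

@dataclass
class PersonFollowingSystem:
    capture: Callable                       # capture module 1: yields (image, depth)
    first_id: FirstIdentificationModule     # module 2
    second_id: SecondIdentificationModule   # module 3
    prediction: PredictionModule            # module 4
    history: List[Box] = field(default_factory=list)

    def adjust_detection_range(self, predicted_xy, last_box: Box) -> Box:
        """Adjustment module 5: detection range for the next frame."""
        x, y = predicted_xy
        _, _, w, h = last_box
        return (int(x - w), int(y - h), 2 * w, 2 * h)
```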

By adopting the above technical solution, the invention mainly has the following technical effects:

1. By predicting the person's next position, the robot determines in advance the region that needs to be examined in the next frame, which increases the speed at which the robot detects pedestrians;

2. The robot matches the person's feature information in combination with depth information, so the robot trains quickly and detects people accurately;

3. The robot extracts the person's identity information and records the person's real-time feature information, so the person can be recovered using the real-time feature information even after the robot has lost the person's image;

4. The person's position is predicted with a Kalman filter, which increases the robot's detection speed; at the same time, narrowing the detection region ensures that the person enters the deep learning model at the original scale, which improves detection accuracy.

Brief Description of the Drawings

Figure 1 is a schematic diagram of a person following method according to an embodiment of the invention;

Figure 2 is a schematic diagram of a person following system according to an embodiment of the invention.

In the figures: 1, capture module; 2, first identification module; 21, counting unit; 22, primary identification unit; 23, secondary identification unit; 3, second identification module; 31, first cropping unit; 32, second identification unit; 33, path acquisition unit; 4, prediction module; 41, prediction unit; 42, second cropping unit; 5, adjustment module.

Detailed Description of the Embodiments

Specific embodiments of the invention are described below with reference to the accompanying drawings.

The invention discloses a person tracking method, used by a robot to follow a single person, which specifically includes the following steps:

S1. The robot confirms the number of people in the image captured by the robot and identifies the person the robot needs to follow;

S2. The robot obtains the person's real-time feature information and path information;

S3. The robot predicts the person's next position; and

S4. The robot determines its detection range for the next frame according to the position of the person predicted in step S3 and follows the person.

Step S1 specifically includes: S101, the robot confirms the number of people in the image; and S102, if the number of people in the image is one, the robot extracts the features of that person; if the number of people in the captured image is at least two, the robot performs feature matching on the people in the image to determine the person it needs to follow. The robot first determines the number of people in the initial image it captures and chooses a recognition strategy accordingly. In this embodiment, to increase recognition speed, the robot uses a person re-identification algorithm to extract the person's feature information when only one person appears in the image, and uses a matching algorithm against the designated target when at least two people appear, so as to determine which of them the robot needs to follow. Moreover, so that the robot can resume following after losing the person's image, the feature information extracted in this embodiment includes the person's identity information, through which the robot can quickly re-acquire the person. In addition, to quickly pick out the person to follow from several people, the robot combines depth-field information with the people's feature information when performing feature matching on the people in its field of view.
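
One aspect worth making concrete is re-acquisition after the person is lost: because the extracted feature information includes an identity record, the robot can compare newly detected people against stored features and resume following. The sketch below is an assumed illustration of that idea; the rolling feature gallery, the cosine threshold of 0.6 and the overall structure are inventions of the example, not details given in the patent.

```python
import numpy as np
from collections import deque

class TargetIdentity:
    """Rolling record of the followed person's re-identification features,
    used to find the person again after the robot has lost them."""

    def __init__(self, max_samples: int = 30, threshold: float = 0.6):
        self.gallery = deque(maxlen=max_samples)   # most recent feature vectors
        self.threshold = threshold

    def record(self, features: np.ndarray) -> None:
        """Store the person's latest real-time feature vector."""
        self.gallery.append(np.asarray(features, float))

    def reacquire(self, candidate_features: list) -> int:
        """Return the index of the candidate that best matches the stored identity,
        or -1 if no candidate is similar enough."""
        if not self.gallery or not candidate_features:
            return -1
        gallery = np.stack(list(self.gallery))
        best_idx, best_score = -1, self.threshold
        for i, feat in enumerate(candidate_features):
            f = np.asarray(feat, float)
            sims = gallery @ f / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(f) + 1e-8)
            score = float(sims.max())              # closest stored sample wins
            if score > best_score:
                best_idx, best_score = i, score
        return best_idx
```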

Specifically, step S2 includes: S201, the robot reduces the image to a rectangular image containing all of the person's information; S202, the robot analyzes the content of this rectangular region to obtain the person's feature information; and S203, the robot obtains the person's path information from the person's direction of movement and distance. By shrinking the image, the robot has less image content to process during recognition, which speeds up its recognition and detection. Because the image is reduced to a rectangle just large enough to contain all of the person's feature information, the robot only needs to process the reduced region rather than the whole image. In addition, the person's path information helps the robot predict the person's subsequent path and position.

In addition, step S4 in this embodiment includes: S401, the robot predicts the person's position using a Kalman filter algorithm; and S402, the robot crops the captured image according to the predicted position of the person. With this scheme, the robot only needs to process the full content of the captured image at the start of following, in order to identify the specific feature information and path information of the person it needs to follow; by predicting the person's position and path, it plans the cropping of subsequent images in advance. Apart from the full recognition of the initial frame, the robot only needs to process the cropped region in subsequent frames, which increases the speed at which it recognizes the person's feature information.

Furthermore, to implement the above person identification method, this embodiment also discloses a pedestrian following system, which includes a capture module 1 arranged on the robot, a first identification module 2, a second identification module 3, a prediction module 4 and an adjustment module 5. The capture module 1 is arranged on the robot to capture images of the person; the first identification module 2 identifies the number of people in the images captured by the capture module 1 and confirms the identity of the person the robot follows; the second identification module 3 identifies the person's feature information and path information from the images captured by the capture module 1; the prediction module 4 predicts the person's position; and the adjustment module 5 determines the robot's detection range for the next frame.

Meanwhile, so that images containing different numbers of people can be handled separately, the first identification module 2 in this embodiment includes a counting unit 21 for identifying the number of people in the images captured by the capture module 1, a primary identification unit 22, and a secondary identification unit 23. After recognizing the number of people in an image, the counting unit 21 classifies it as a first image containing only one person or a second image containing at least two people; the primary identification unit 22 identifies the features of the person in a first image, and the secondary identification unit 23 performs feature matching on the people in a second image to determine the person the robot needs to follow. In this embodiment, the primary identification unit 22 uses a person re-identification algorithm to extract the person's feature information, and the secondary identification unit 23 uses a matching algorithm against the designated target to determine the person the robot needs to follow. In addition, to improve its matching accuracy, the secondary identification unit 23 in this embodiment combines depth-field information with the person's feature information when matching.

The second identification module 3 in this embodiment includes a first cropping unit 31, a second identification unit 32 and a path acquisition unit 33. The first cropping unit 31 crops the image captured by the capture module 1 to a rectangular figure containing all of the person's information; the second identification unit 32 analyzes the content of the rectangular image to obtain the person's real-time feature information; and the path acquisition unit 33 obtains the person's real-time path information from the images captured by the capture module. The path acquisition unit 33 in this embodiment may compute the person's real-time path from images previously captured by the capture module 1, or may obtain the person's path information in other ways.
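
As one concrete but speculative reading of computing the path from previously captured images, the sketch below back-projects the person's bounding-box centre using the depth camera's distance readings and then derives the direction and distance moved between consecutive frames. The pinhole intrinsics fx and cx are assumed example values, and the whole back-projection step is an illustration rather than something specified by the patent.

```python
import numpy as np

def path_from_history(centres, depths, fx: float = 600.0, cx: float = 320.0):
    """Illustrative path acquisition from previously captured frames.

    centres: list of (u, v) pixel centres of the person's box, one per frame;
    depths:  list of the person's camera distance per frame, in metres;
    fx, cx:  assumed pinhole intrinsics of the depth camera.

    Returns one {"direction", "distance"} step per pair of consecutive frames."""
    points = []
    for (u, _v), z in zip(centres, depths):
        lateral = (u - cx) * z / fx          # back-project the pixel column to metres
        points.append((lateral, z))          # (lateral offset, forward distance)

    steps = []
    for (x0, z0), (x1, z1) in zip(points[:-1], points[1:]):
        dx, dz = x1 - x0, z1 - z0
        steps.append({"direction": float(np.arctan2(dz, dx)),   # radians in ground plane
                      "distance": float(np.hypot(dx, dz))})
    return steps
```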

The prediction module 4 in this embodiment includes a prediction unit 41 and a second cropping unit 42. The prediction unit 41 predicts the person's position based on the recognition result of the first identification unit or the second identification unit 32, and the second cropping unit 42 crops the next frame captured by the capture module 1 according to the prediction result of the prediction unit 41. The prediction unit 41 in this embodiment uses a Kalman filter algorithm to predict the person's subsequent position and path.

The above embodiments are only intended to illustrate the invention and not to limit it. Those of ordinary skill in the relevant art may make various changes and modifications without departing from the spirit and scope of the invention; therefore, all equivalent technical solutions also fall within the scope of the invention, and the scope of patent protection of the invention shall be defined by the claims.

Claims (10)

1. A person following method, used by a robot to follow a single person, characterized in that it comprises the following steps:
S1. the robot confirms the number of people in an image captured by the robot and identifies the person the robot needs to follow;
S2. the robot obtains real-time feature information and path information of the person;
S3. the robot predicts the person's next position; and
S4. the robot determines its detection range for the next frame according to the position of the person predicted in step S3 and follows the person.

2. The following method according to claim 1, characterized in that step S1 comprises:
S101. the robot confirms the number of people in the image; and
S102. if the number of people in the robot's image is one, the robot extracts the features of the person in the image; if the number of people in the robot's image is at least two, the robot performs feature matching on the people in the image to determine the person the robot needs to follow.

3. The following method according to claim 2, characterized in that in step S102, when the robot performs feature matching on the people in the image, the robot performs the matching by combining depth-field information with the people's feature information.

4. The following method according to claim 1, characterized in that step S2 comprises:
S201. the robot reduces the image to a rectangular image containing all of the person's information;
S202. the robot analyzes the content of the rectangular image to obtain the person's feature information; and
S203. the robot obtains the person's path information from the person's direction of movement and distance.

5. The following method according to claim 1, characterized in that step S4 comprises:
S401. the robot predicts the person's position using a Kalman filter algorithm; and
S402. the robot crops the image captured by the robot according to the predicted position of the person.

6. A person following system, characterized in that it comprises:
a capture module, arranged on the robot and used to capture images;
a first identification module, which identifies the number of people in the images captured by the capture module and confirms the identity of the person the robot follows;
a second identification module, which identifies the person's feature information and path information from the images captured by the capture module;
a prediction module, which predicts the person's position; and
an adjustment module, which determines the robot's detection range for the next frame.

7. The following system according to claim 6, characterized in that the first identification module comprises:
a counting unit, which identifies the number of people in the images captured by the capture module and classifies the images according to the number of people they contain into first images containing one person and second images containing at least two people;
a primary identification unit, which identifies the features of the person in the first image; and
a secondary identification unit, which performs feature matching on the people in the second image to determine the person the robot needs to follow.

8. The following system according to claim 7, characterized in that the secondary identification unit performs matching by combining depth-field information with the person's feature information.

9. The following system according to claim 6, characterized in that the second identification module comprises:
a first cropping unit, which crops the image captured by the capture module to a rectangular figure containing all of the person's information;
a second identification unit, which analyzes the content of the rectangular figure to obtain the person's real-time feature information; and
a path acquisition unit, which obtains the person's real-time path information from the images captured by the capture module.

10. The following system according to claim 6, characterized in that the prediction module comprises:
a prediction unit, which predicts the person's position based on the recognition result of the first identification unit or the second identification unit; and
a second cropping unit, which crops the next frame captured by the capture module according to the prediction result of the prediction unit.

CN202010996889.9A 2020-09-21 2020-09-21 Personnel identification method and system Pending CN112132865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010996889.9A CN112132865A (en) 2020-09-21 2020-09-21 Personnel identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010996889.9A CN112132865A (en) 2020-09-21 2020-09-21 Personnel identification method and system

Publications (1)

Publication Number Publication Date
CN112132865A true CN112132865A (en) 2020-12-25

Family

ID=73842339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010996889.9A Pending CN112132865A (en) 2020-09-21 2020-09-21 Personnel identification method and system

Country Status (1)

Country Link
CN (1) CN112132865A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109955248A (en) * 2017-12-26 2019-07-02 深圳市优必选科技有限公司 Robot and face following method thereof
CN209859002U (en) * 2019-01-10 2019-12-27 武汉工控仪器仪表有限公司 Outdoor pedestrian following robot control system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈丽君 (Chen Lijun): "Vision-based human target recognition and tracking", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, pages 21-55 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990144A (en) * 2021-04-30 2021-06-18 德鲁动力科技(成都)有限公司 Data enhancement method and system for pedestrian re-identification

Similar Documents

Publication Publication Date Title
CN107330920B (en) Monitoring video multi-target tracking method based on deep learning
CN109145742B (en) Pedestrian identification method and system
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
US8314854B2 (en) Apparatus and method for image recognition of facial areas in photographic images from a digital camera
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
CN109145708B (en) Pedestrian flow statistical method based on RGB and D information fusion
JP4241763B2 (en) Person recognition apparatus and method
CN109635686B (en) A Two-Stage Pedestrian Search Method Combining Face and Appearance
Asif et al. Privacy preserving human fall detection using video data
CN114842397B (en) Real-time old man falling detection method based on anomaly detection
CN107392182B (en) Face acquisition and recognition method and device based on deep learning
CN102609720B (en) Pedestrian detection method based on position correction model
Zhang et al. Detection and tracking of multiple humans with extensive pose articulation
CN111652035B (en) A method and system for pedestrian re-identification based on ST-SSCA-Net
Ji et al. Integrating visual selective attention model with HOG features for traffic light detection and recognition
Li et al. Robust multiperson detection and tracking for mobile service and social robots
CN112733814B (en) Deep learning-based pedestrian loitering retention detection method, system and medium
CN107657232B (en) A kind of pedestrian intelligent recognition method and system
WO2019083509A1 (en) Person segmentations for background replacements
CN111797652B (en) Object tracking method, device and storage medium
KR20220078893A (en) Apparatus and method for recognizing behavior of human in video
CN108196680A (en) Robot vision following method based on human body feature extraction and retrieval
CN110348366B (en) An automatic optimal face search method and device
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN109146913B (en) Face tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201225