CN109523551B - A method and system for obtaining the walking posture of a robot - Google Patents

A method and system for obtaining the walking posture of a robot

Info

Publication number
CN109523551B
Authority
CN
China
Prior art keywords
image
point
robot
sequence
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811221729.6A
Other languages
Chinese (zh)
Other versions
CN109523551A (en)
Inventor
杨灿军
朱元超
魏谦笑
杨巍
武鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201811221729.6A
Publication of CN109523551A
Application granted
Publication of CN109523551B
Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/38 Registration of image sequences
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for obtaining the walking posture of a robot, and belongs to the technical field of image processing. The method comprises the following steps: (1) acquire an image of the background scene, images of the marker points, and an image sequence of the robot walking in the background scene, the marker points being fixed on the robot to mark the walking trajectory at their mounting positions; (2) using the image of the background scene as a reference frame, segment partial images from the image sequence to form a foreground image sequence, each partial image containing the image of the robot; (3) using the marker-point images as templates, match each marker point in the foreground image sequence and obtain its coordinate data; (4) compute the walking posture of the robot during walking from the coordinate data of the marker points. The method effectively reduces both the cost of the equipment needed to obtain the walking posture and the computational load of the subsequent image processing, and can be widely applied in robotics and related fields.

Description

A method and system for obtaining the walking posture of a robot

This application is a divisional application of the invention patent with application number CN201711394246.1, titled "A method and system for obtaining the walking posture of a target object".

Technical Field

The present invention relates to the technical field of image processing, and in particular to a method and a system for obtaining the walking posture of a robot.

Background Art

Wearable exoskeleton robots are used in the rehabilitation of patients recovering from stroke, hemiplegia, and similar conditions. To obtain rehabilitation data, the motion posture at the patient's joints is usually captured to determine the patient's limb posture under the load of the exoskeleton robot, and the patient's walking posture is then reconstructed from the limb-posture change data over at least one gait cycle.

The motion posture at the joints is usually obtained with an optical tracking motion-measurement method. Based on computer-vision technology, this method measures the motion of a target by tracking and computing specific marker points on the target with devices such as cameras. Commonly used marker points are either reflective or light-emitting, and their size and shape can be set as required.

A typical optical tracking motion-measurement system arranges multiple cameras around the capture area, with the overlapping field of view of the cameras forming the workspace. To simplify subsequent image processing, the subject is usually required to wear a black bodysuit with special optical markers attached at key locations such as the major joints. Before measurement, the system must first be calibrated; the cameras then record the subject's movements, and the captured image sequences are stored, analyzed, and processed to identify the optical markers in the images, compute their spatial positions at every instant, and thereby reconstruct the target's motion trajectory. Obtaining an accurate trajectory requires the cameras to shoot at a high frame rate.

Because these commonly used measurement systems require multiple cameras, they are expensive, and the subsequent image processing is computationally heavy. They also place strict demands on the illumination and reflectance of the test site, which limits their application scenarios and hinders the use of wearable exoskeleton robots in the rehabilitation of the patients described above.

Summary of the Invention

The main object of the present invention is to provide a method for obtaining the walking posture of a robot that reduces the cost of the required equipment while reducing the computational load of the subsequent image processing; another object of the present invention is to provide a system for obtaining the walking posture of a robot with the same benefits.

To achieve the main object above, the method provided by the present invention for obtaining the walking posture of a robot comprises an acquisition step, a segmentation step, a matching step, and a calculation step. The acquisition step comprises acquiring an image of the background scene, images of the marker points, and an image sequence of the robot walking in the background scene, the marker points being fixed on the robot to mark the walking trajectory at their mounting positions. The segmentation step comprises, using the image of the background scene as a reference frame, segmenting partial images from the image sequence to form a foreground image sequence, each partial image containing the image of the robot. The matching step comprises, using the marker-point images as templates, matching each marker point in the foreground image sequence and obtaining its coordinate data. The calculation step comprises calculating the walking posture of the robot during walking from the coordinate data of the marker points.

Using the background scene as the reference frame, an image portion containing at least the robot is segmented out of each frame of the image sequence, and this local region, the foreground image, becomes the object of all subsequent processing, so that most of the background image need not be processed and the computational load of the subsequent image processing is effectively reduced. At the same time, using the marker-point images as templates to locate the marker positions within the segmented image portions allows icon-style marker points to be used for marking, and allows the images to be acquired with a single monocular camera, which effectively lowers the required equipment cost.

In a specific solution, a decolorization step is performed after the acquisition step and before the segmentation step; it converts the image of the background scene, the marker-point images, and the image sequence into grayscale images. Performing the subsequent segmentation and matching steps on grayscale images further reduces the computational load.

In a more specific solution, the segmentation step comprises a construction step, a binarization step, and a cropping step. The construction step comprises taking, for each frame of the decolorized image sequence, the absolute difference between its grayscale values and those of the corresponding pixels of the reference frame, thereby constructing a difference-frame sequence. The binarization step comprises binarizing the difference-frame sequence with a predetermined threshold so that the robot and the background scene are represented in black and white respectively. The cropping step comprises, based on the coordinate data of the robot color region, cropping a foreground-region sequence out of the decolorized image sequence with a rectangular boundary that completely contains the robot color region, the robot color region being the color region representing the robot.

In a yet more specific solution, after the binarization step and before the cropping step, the binarized difference frames are dilated to further sharpen the boundary between the robot and the background and to facilitate the subsequent cropping; the robot color region is then the color region after dilation. After the decolorization step and before the segmentation step, the decolorized image sequence is smoothed to reduce the noise introduced during shooting.

In a preferred solution, the marker points comprise joint marker points fixed at the joint positions of the robot's walking mechanism, and the matching step comprises a pre-matching step and a re-matching step. The pre-matching step comprises traversing the foreground region with a template as reference, computing the negative correlation R(x, y) between the template and the local region of the foreground centered on the pixel with coordinates (x, y), and, taking the negative correlation being below a preset threshold as the criterion, collecting the pixel clusters that form preselected marker-point clusters, each representing a local region containing one marker point. The re-matching step comprises, within one preselected marker-point cluster, taking the pixel with the smallest negative correlation to represent the coordinates of the marker point in that local region. The negative correlation R(x, y) is computed as:

R(x, y) = Σx′,y′ [T(x′, y′) − I(x+x′, y+y′)]² / √( Σx′,y′ T(x′, y′)² · Σx′,y′ I(x+x′, y+y′)² )

where T(x′, y′) is the grayscale value of the template pixel with coordinates (x′, y′), template pixel coordinates being expressed in a coordinate system whose origin is the template center, and I(x+x′, y+y′) is the grayscale value of the foreground-region pixel with coordinates (x+x′, y+y′), foreground-region pixel coordinates being the coordinates of the pixel in the image sequence.

In a more preferred solution, the images of a marker point comprise a front-view image, a left-oblique-view image, and a right-oblique-view image of the marker point, to handle images captured by the monocular camera at an off-axis viewing angle. After the re-matching step and before the calculation step, the local-region color at the corresponding point of the color image in the image sequence is obtained from the matched marker-point coordinates, and real marker points are screened out according to whether that local region matches the color of the marker point. Using the color information of the color images to screen the obtained marker points effectively avoids false marker points produced in the matching step.

In another preferred solution, the image sequence is acquired by a monocular camera.

In a further preferred solution, the marker point comprises a central circular portion and an annular portion surrounding it; of the central portion and the annular portion, one surface is white and the other is black. Composing the marker point of two mutually nested parts with a strong color contrast effectively improves its recognition accuracy, while placing no constraint on the exterior color of the robot.

To achieve the other object above, the system provided by the present invention for obtaining the walking posture of a robot comprises a processor and a memory storing a computer program. When executed by the processor, the computer program implements the following receiving, segmentation, matching, and calculation steps. The receiving step comprises receiving the image of the background scene and the marker-point images collected by a camera, and receiving the image sequence of the robot walking in the background scene collected by a monocular camera, the marker points being fixed on the robot to mark the walking trajectory at their mounting positions. The segmentation step comprises, using the image of the background scene as a reference, segmenting partial images from the image sequence to form the image sequence to be processed, each partial image containing the image of the robot. The matching step comprises, using the marker-point images as templates, matching each marker point in the foreground image sequence and obtaining its coordinate data. The calculation step comprises calculating the walking posture of the robot during walking from the coordinate data of the marker points.

In a specific solution, after the acquisition step and before the segmentation step, the image of the background scene, the marker-point images, and the image sequence are converted into grayscale images. The marker points comprise joint marker points fixed at the joint positions of the robot's walking mechanism; each marker point comprises a central circular portion and an annular portion surrounding it, of which one surface is white and the other is black.

Brief Description of the Drawings

Fig. 1 is a workflow diagram of an embodiment of the method of the present invention for obtaining the walking posture of a target object;

Fig. 2 is a schematic diagram of the image segmentation process in the method embodiment, in which Fig. 2(a) is the background scene image used as the segmentation reference frame, Fig. 2(b) is one frame of the image sequence to be segmented, Fig. 2(c) is a schematic diagram of the difference frame, Fig. 2(d) is the image after binarization, Fig. 2(e) is the image after dilation, and Fig. 2(f) is a schematic diagram of the foreground image segmented from that image with a rectangular boundary;

Fig. 3 shows the marker-point templates at different viewing angles used in the recognition step of the method embodiment, in which Fig. 3(a) is the template image at the right-oblique viewing angle, Fig. 3(b) is the template image at the front viewing angle, and Fig. 3(c) is the template image at the left-oblique viewing angle;

Fig. 4 is a schematic diagram of the pre-matching step in the method embodiment;

Fig. 5 is a schematic diagram of the walking-posture calculation performed in the calculation step of the method embodiment;

Fig. 6 is a schematic structural block diagram of an embodiment of the walking-posture detection system of the present invention.

The present invention is further described below with reference to the embodiments and the accompanying drawings.

Detailed Description of the Embodiments

In the following embodiment, the method and system of the present invention for obtaining the walking posture of a target object are illustrated with a person loaded by a wearable exoskeleton robot as the target person, the walking posture of the person's lower limbs being obtained as the example. The application scenarios of the method and system of the present invention are, however, not limited to the one shown below; they can also be used to obtain the walking posture of other targets such as robots and robot dogs.

Method Embodiment

Referring to Fig. 1, the method of the present invention for obtaining the walking posture of a target object comprises an acquisition step S1, a decolorization step S2, a noise-reduction step S3, a segmentation step S4, a matching step S5, a screening step S6, and a calculation step S7.

1. Acquisition step S1: acquire an image of the background scene, images of the marker points, and an image sequence of the target walking in the background scene, the marker points being fixed on the target to mark the walking trajectory at their mounting positions.

As shown in Fig. 2(b), to mark the posture of a person's lower limbs during walking, at least one marker point must be placed at each of the three joints of the lower limb: the ankle, the knee, and the hip. In this embodiment, icon-style marker points are used. As shown in Fig. 4, each marker point consists of a white central circle and a black ring arranged around it; it could equally consist of a black central circle surrounded by a white ring. Composing the marker point of black and white parts with a strong contrast ensures that, after the decolorization of step S2, the original contrast is preserved in the marker-point image for the subsequent recognition. Moreover, composing the marker of a central circle and a surrounding ring facilitates correction when the viewing angle is off-axis, so that the position of the center point can still be obtained. Of course, single-color markers, or markers with three or more colors, may also be used. For a single-color marker, a color that still differs strongly, after decolorization, from the surroundings of the mounting position is preferred. For a composite structure of two or more colors, the parts are not limited to the circular structure above; for example, the marker may consist of four squares arranged so that of any two adjacent squares one is black and the other white, the center of the marker then being obtained as the intersection point of the contrasting blocks. Where the images are to be decolorized later, black-and-white contrast is preferred, though any multi-color composite that retains a strong contrast after decolorization may be used; where no decolorization is needed, colored structures with strong color differences may be combined.

The images collected by the camera are captured in the form of data frames; successive frames form a video stream, and every frame of the stream contains the gait information of the target. In the stored data of the video stream, each frame captured by the camera exists as a matrix array. In this embodiment, the camera used to acquire the background scene image and the image sequence is a monocular camera installed beside the walking path. The walking path is preferably a straight line, the whole walking process under test and the whole background scene lie within the camera's field of view, and the camera can be installed on the central axis of the overall background scene.

Usually, each frame captured by the camera is represented by default in the BGR color space. BGR has three channels, i.e. three matrices in the array, representing the blue, green, and red components of the three primary colors; the color of each pixel can be decomposed into a mixture of these three colors in different proportions, and the proportion of each color is stored at the corresponding position of each channel.

2. Decolorization step S2: decolorize the collected background scene image, marker-point images, and image sequence, converting the color images into grayscale images.

Although image data in BGR format retains the most optical information, not all operations in the search for marker points need all of it. Compressing the three-channel matrix array into a single channel, though at the cost of losing some data, presents the genuinely useful information better and speeds up the search for marker points. In image form, this compresses a color picture into a grayscale picture; like the color image, the grayscale image still reflects the global and local distribution of chromaticity and brightness levels of the whole picture. The matrix conversion follows Equation 1:

Y = 0.114·B + 0.587·G + 0.299·R

where Y is the matrix of the grayscale picture; the larger a value, the whiter the pixel at that position, and conversely the blacker.
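
A minimal sketch of this conversion follows; OpenCV (cv2) and NumPy are assumed as the implementation libraries, an assumption suggested but not mandated by the patent's later use of OpenCV terms such as absdiff and SQDIFF_NORMED:

```python
import cv2
import numpy as np

def to_grayscale(frame_bgr: np.ndarray) -> np.ndarray:
    """Equation 1 applied per pixel: Y = 0.114*B + 0.587*G + 0.299*R."""
    b, g, r = frame_bgr[:, :, 0], frame_bgr[:, :, 1], frame_bgr[:, :, 2]
    y = 0.114 * b + 0.587 * g + 0.299 * r  # NumPy promotes to float automatically
    return y.astype(np.uint8)

# cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY) applies the same weights.
```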

3. Noise-reduction step S3: smooth the decolorized image sequence.

Owing to factors such as natural vibration, illumination changes, or the hardware itself, every captured frame contains noise. Smoothing the data frames effectively removes the noise and prevents it from interfering with the subsequent detection.

In this embodiment, the smoothing applies a Gaussian blur to every pixel of the image: for any pixel, the weighted average of its surrounding pixels is taken, with the weights assigned according to a normal distribution, so that the nearer a point the larger its weight and the farther a point the smaller its weight. In practice it was found that taking a 21×21 matrix centered on the target pixel and computing the weighted average of the surrounding pixels within it removes the noise best.
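
A one-call sketch of this step, again assuming OpenCV; passing sigma 0 lets the library derive the standard deviation of the normal distribution from the kernel size:

```python
import cv2

def smooth(gray_frame):
    # 21x21 Gaussian window, found empirically in the text to remove noise best
    return cv2.GaussianBlur(gray_frame, (21, 21), 0)
```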

4. Segmentation step S4: using the image of the background scene as the reference frame, segment partial images from the image sequence to form the foreground image sequence, each partial image containing the image of the target.

In the actual shooting environment, the target person walks completely through the field of view of the monocular camera from left to right, exposing the marker points fixed at the three joints of the lower limb to the camera. In capturing optical information, the monocular camera maps the three-dimensional environment onto a two-dimensional plane. From the viewpoint of the resulting two-dimensional video, since the camera is stationary, each frame of the whole stream can be roughly divided into a foreground and a background: the foreground is the local region occupied by the moving target person, and the background is the region filling the rest of the scene, i.e. everything outside the foreground region. As the target person enters the frame at one end of the field of view and leaves it at the other, the foreground region slides continuously across the video relative to the background region, while the background region remains static throughout, different parts of it being occluded in turn by the moving foreground. In terms of area, the foreground region occupies a small fraction of the frame and the background the larger part. Since the target marker points to be located lie in the foreground region, scanning the background region not only wastes computing power and time; when the matching threshold is set low it may even produce wrong results, further interfering with the extraction of the marker-point data.

If the foreground region can be segmented out of the whole data frame, the marker-point search can then be restricted to that region, ignoring the background region that occupies most of the area, which greatly improves the search efficiency.

Therefore, in this embodiment, segmenting the foreground region out of the data frame as the object of subsequent image processing effectively reduces the computational load while improving the positioning accuracy of the target marker points. Reflecting the actual situation (the subject has not yet entered the frame at the start of the video), it is stipulated that the first frame of the video stream belongs entirely to the background region and constitutes the background scene image; this frame serves as the reference background frame, and in the processing of subsequent data frames the foreground and background are divided with respect to it.

In this embodiment, the foreground region is segmented by threshold binarization, specifically comprising a construction step S41, a binarization step S42, a dilation step S43, and a cropping step S44.

(1) Construction step S41: for each frame of the decolorized image sequence, take the absolute difference between its grayscale values and those of the corresponding pixels of the reference frame, constructing a difference-frame sequence.

For any data frame M, compute according to Equation 2:

absdiff(I) = |M(I) − Mo(I)|

where Mo is the reference background frame, M(I) denotes the frame being processed, I denotes a specific position in the data frame, and absdiff is a matrix array whose elements are the absolute values of the differences between the grayscale values of the pixels at position I.

In the grayscale images obtained by decolorization, each element takes a value in the range 0-255, so each element of absdiff also lies in 0-255, and absdiff can itself be viewed as a grayscale image; the result is shown in Fig. 2(c).

(2) Binarization step S42: binarize the difference-frame sequence with a predetermined threshold, representing the target and the background scene in black and white respectively.

By setting a threshold, absdiff is binarized into a pure black-and-white image in which the white and black regions roughly represent the distribution of foreground and background; the result is shown in Fig. 2(d).
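
Steps S41 and S42 can be sketched together; the OpenCV calls and the threshold value of 30 are assumptions, since the patent only specifies a "predetermined threshold":

```python
import cv2

def foreground_mask(gray_frame, background, thresh=30):
    """Construction step S41 followed by binarization step S42."""
    diff = cv2.absdiff(gray_frame, background)  # absdiff(I) = |M(I) - Mo(I)|
    # Pixels whose difference exceeds the threshold become white (foreground)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return binary
```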

(3) Dilation step S43: dilate the binarized difference frames.

The binarized image inevitably contains much noise, because the data points in the region at the foreground-background boundary are always scattered around the threshold, making it hard to draw an ideal black-white dividing line. Dilation can trim the burrs and remove the noise.

First, a structuring matrix {Eij, i, j = 1, 2, 3, 4, 5} of size 5×5 is defined; its generation rule is given by Equation 3, which was rendered as an image in the original and is not recoverable here. The size of the structuring matrix can be adjusted to the actual situation; correspondingly, the constant appearing in the generation rule, which is an empirical value, must also be changed to a suitable number.

Next, after the structuring element has been generated, it is used to traverse the whole image, and the dilated binary image is obtained according to the following rule:

dilate(x, y) = max absdiff(x+x′, y+y′), E(x′, y′) ≠ 0

where (x, y) are the coordinates of the pixel being processed and E(x′, y′) is an element of the matrix {Eij}.

In the resulting dilate image, the white part has expanded into a fairly continuous region with a clearer boundary; the result is shown in Fig. 2(e).
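
A sketch of the dilation; since Equation 3 is not recoverable, the 5×5 elliptical structuring element used here is an assumption standing in for the patent's empirically tuned matrix:

```python
import cv2

def dilate_mask(binary):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # assumed shape
    return cv2.dilate(binary, kernel)  # max over the nonzero-kernel neighborhood
```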

(4) Cropping step S44: based on the coordinate data of the target color region, crop a foreground-region sequence out of the decolorized image sequence with a rectangular boundary that completely contains the target color region, the target color region being the color region representing the target.

According to the distribution of the white part, a complete region is framed with a rectangular boundary on each data frame of the image sequence and taken as the foreground region, the remainder being the background region; the result is shown in Fig. 2(f).
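
This cropping can be sketched with a bounding rectangle over the white pixels; the helper name is illustrative, and the mask is assumed to contain at least one white pixel:

```python
import cv2

def crop_foreground(gray_frame, dilated_mask):
    """Cropping step S44: frame the white region and cut it out of the frame."""
    points = cv2.findNonZero(dilated_mask)   # coordinates of all white pixels
    x, y, w, h = cv2.boundingRect(points)    # smallest enclosing rectangle
    return gray_frame[y:y + h, x:x + w], (x, y)  # keep the offset for later use
```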

5. Matching step S5: using the marker-point images as templates, match each marker point in the foreground image sequence and obtain its coordinate data.

In this step, the three marker-point images collected in advance serve as matching templates, and the foreground region segmented in the preceding steps is traversed with these three templates as reference. The step specifically comprises a pre-matching step S51 and a re-matching step S52.

(1) Pre-matching step S51: with a template as reference, traverse the foreground region, compute the negative correlation R(x, y) between the template and the local region of the foreground centered on the pixel with coordinates (x, y), and, taking the negative correlation being below a preset threshold as the criterion, collect the pixel clusters that form preselected marker-point clusters, each representing a local region containing one marker point.

When the template reaches a local region of the foreground centered on (x, y), their normalized sum of squared differences (SQDIFF_NORMED) is computed with Equation 4:

R(x, y) = Σx′,y′ [T(x′, y′) − I(x+x′, y+y′)]² / √( Σx′,y′ T(x′, y′)² · Σx′,y′ I(x+x′, y+y′)² )

where R is a variable measuring how the local region relates to the template, expressed as a negative correlation: the smaller the value of R, the better the pixel and its surrounding region match the template. T(x′, y′) is the grayscale value of the template pixel with coordinates (x′, y′), template pixel coordinates being expressed in a coordinate system whose origin is the template center; I(x+x′, y+y′) is the grayscale value of the foreground-region pixel with coordinates (x+x′, y+y′), foreground-region pixel coordinates being the coordinates of the pixel in the image sequence, usually with the top-left or bottom-left corner of the image as origin.

Since, in real detection, the marker points on the target person cannot always face the camera squarely, the marker-point images captured by the camera are not always the geometrically clean concentric circles of Fig. 3(b); irregular elliptical shapes like those of Figs. 3(a) and 3(c) can also appear. Three templates are therefore designed for each marker point, covering the front, left-oblique, and right-oblique viewing angles, as shown in Figs. 3(b), 3(a), and 3(c) respectively.

During traversal, the matching degree between each of the three templates and the local foreground region is computed as in Equation 4, and only the smallest of the three results is kept (Equation 5):

R(x, y) = min{ R_front(x, y), R_left(x, y), R_right(x, y) }

After the traversal, the R values of most pixels are large, meaning their regions cannot match the template; for a few pixels R drops into a very small range, meaning the region is very close to the template. In this embodiment the threshold is 0.1: pixel regions whose R exceeds this value are discarded, while the others are taken as regions containing a marker point and the coordinates of their center pixels are recorded.

(2) Re-matching step S52: within one preselected marker-point cluster, take the pixel with the smallest negative correlation R to represent the coordinates of the marker point in that local region.

Inspecting the coordinate data produced by the template-matching layer reveals considerable overlap: well-matching points tend to appear in clusters, all with sums of squared differences below the threshold, so all are recorded; but their matching windows overlap markedly, and the center of the overlap is generally a single marker point. Since the whole cluster of coordinate points represents the same marker point, the most representative one can be selected from the cluster to stand for the marker point, the remaining coordinates being discarded as noise. The marker point is selected with Equation 6:

(x, y) = min R(x′, y′), (x′, y′) ∈ Range

where Range is the region covered by a cluster of coordinate points; the coordinate point with the lowest R value is selected as the best match within the cluster and represents the marker point there.
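
Steps S51 and S52 can be sketched together with OpenCV's TM_SQDIFF_NORMED mode, which implements Equation 4 directly; the 21×21 window used to pick each cluster's minimum is an assumption, since the patent does not prescribe how a cluster is delimited:

```python
import cv2
import numpy as np

def match_markers(foreground, templates, r_max=0.1):
    """Pre-matching: keep pixels with R below 0.1 under the best of the three
    view templates. Re-matching: keep only each cluster's minimum-R pixel."""
    best = np.full(foreground.shape, np.inf, dtype=np.float32)
    for tpl in templates:  # front, left-oblique, and right-oblique views
        r = cv2.matchTemplate(foreground, tpl, cv2.TM_SQDIFF_NORMED)
        th, tw = tpl.shape
        # matchTemplate indexes results by the window's top-left corner;
        # shift by half the template size to index by the window center.
        region = best[th // 2:th // 2 + r.shape[0], tw // 2:tw // 2 + r.shape[1]]
        np.minimum(region, r, out=region)  # Equation 5: keep the smallest R
    markers = []
    for y, x in zip(*np.nonzero(best < r_max)):  # preselected clusters
        window = best[max(0, y - 10):y + 11, max(0, x - 10):x + 11]
        if best[y, x] <= window.min():           # Equation 6: cluster minimum
            markers.append((int(x), int(y)))
    return markers
```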

6. Screening step S6: from the matched marker-point coordinates, obtain the local-region color at the corresponding point of the color image in the image sequence, and screen out the real marker points according to whether that local region matches the color of the marker point.

The coordinate points obtained in matching step S5 are in theory exactly the positions of the marker points, but in rare cases misjudgments caused by changes elsewhere in the foreground region cannot be ruled out.

Because the foreground region changes continuously while a misjudged region does not interfere persistently, such misjudgments occur randomly and intermittently. To eliminate this interference, color-gamut screening is applied to every target region as a simple additional discrimination rule.

Color-gamut screening exploits the color images that have played no part in the marker-point recognition so far; in this step that information is put to use by comparing the colors at the coordinate points, further refining the recognition results.

The color-gamut screening proceeds as follows:

First, the BGR image is converted into an HSV image through Equations 7 to 9, with R, G, B normalized to [0, 1]:

V = max(R, G, B)

S = (V − min(R, G, B)) / V if V ≠ 0, and S = 0 otherwise

H = 60·(G − B) / (V − min(R, G, B)) if V = R; H = 120 + 60·(B − R) / (V − min(R, G, B)) if V = G; H = 240 + 60·(R − G) / (V − min(R, G, B)) if V = B

In the HSV color space the three parameters H, S, and V denote hue, saturation, and value respectively; compared with BGR space it is better suited to comparing how close two colors are. It is stipulated that a candidate coordinate point passes the test if its core color is white and fails otherwise, a failing point being discarded. Whether a color lies in the white range is decided from the S and V parameters: a color is white if it satisfies Equations 10 and 11:

0 ≤ S ≤ 0.5

0.5 ≤ V ≤ 1

This method serves as a quick auxiliary check of whether a located marker point meets the requirements; since the data it faces have already been screened fairly reliably in the preceding layers, its accuracy requirements need not be high.
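
A sketch of the white-range test; note that OpenCV scales S and V to 0-255 for 8-bit images, so the bounds of Equations 10 and 11 are rescaled accordingly:

```python
import cv2

def passes_gamut_screening(frame_bgr, x, y):
    """Screening step S6: accept the point only if its core color is white."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    _h, s, v = hsv[y, x]
    return s <= 0.5 * 255 and v >= 0.5 * 255  # 0 <= S <= 0.5 and 0.5 <= V <= 1
```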

7. Calculation step S7: compute the walking posture of the target during walking from the obtained coordinate data of the marker points.

As shown in Fig. 5, from the coordinates of the three marker points at a given moment, namely the coordinates (x1, y1), (x2, y2), and (x3, y3) of the target person's hip joint 21, knee joint 22, and ankle joint 23, the angle θ between the thigh axis 24 and the vertical, and the angle α between the thigh axis 24 and the shank axis 25, can be computed; these two angle values characterize the walking posture of the target person's lower limb at that moment. The angles are computed with Equations 12 and 13:

θ = arccos( (r1 · r2) / (|r1| · |r2|) )

α = arccos( (r2 · r3) / (|r2| · |r3|) )

where the vectors are r1 = (0, 1), r2 = (x2 − x1, y2 − y1), and r3 = (x3 − x2, y3 − y2).

Of course, other combinations of angles can be used to characterize the walking posture of the walker's lower limbs, and physical quantities other than angles can also be used to characterize the target person's walking posture.
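
A sketch of Equations 12 and 13 with NumPy, taking the three matched joint coordinates as input:

```python
import numpy as np

def leg_angles(hip, knee, ankle):
    """theta: angle between the thigh axis and the vertical r1 = (0, 1);
    alpha: angle between the thigh axis and the shank axis."""
    r1 = np.array([0.0, 1.0])
    r2 = np.array(knee, dtype=float) - np.array(hip, dtype=float)    # thigh axis 24
    r3 = np.array(ankle, dtype=float) - np.array(knee, dtype=float)  # shank axis 25
    theta = np.arccos(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
    alpha = np.arccos(r2 @ r3 / (np.linalg.norm(r2) * np.linalg.norm(r3)))
    return np.degrees(theta), np.degrees(alpha)

# e.g. theta, alpha = leg_angles((x1, y1), (x2, y2), (x3, y3))
```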

System Embodiment

Referring to Fig. 6, the system 1 of the present invention for obtaining the walking posture of a target comprises a monocular camera, a processor, and a memory. The memory stores a computer program which, when executed by the processor, implements a receiving step S1, a decolorization step S2, a noise-reduction step S3, a segmentation step S4, a matching step S5, a screening step S6, and a calculation step S7.

The receiving step S1 receives the images collected by the camera, i.e. the background scene image, the marker-point images, and the image sequence. The specific acquisition process of the receiving step S1 and the details of the decolorization step S2 through the calculation step S7 have been set out in the method embodiment above and are not repeated here.

In the embodiments above, the decolorization step S2, the noise-reduction step S3, and the screening step S6 are not mandatory but optional optimization steps: the decolorization step S2 further reduces the computational load of the subsequent image processing, the noise-reduction step S3 further improves the matching accuracy, and the screening step S6 eliminates misjudged marker points.

Claims (6)

1.一种获取机器人行走姿态的方法,其特征在于,包括:1. a method for obtaining robot walking posture, is characterized in that, comprises: 采集步骤,采集背景场景的图像、标记点的图像及所述机器人在所述背景场景中行走过程的图像序列,所述标记点固设在所述机器人上用于标记所固定位置处的行走轨迹;所述图像序列由单目摄像机采集;The collection step is to collect the image of the background scene, the image of the mark point and the image sequence of the walking process of the robot in the background scene, and the mark point is fixed on the robot to mark the walking trajectory at the fixed position ; the image sequence is collected by a monocular camera; 去色步骤,将所述背景场景的图像、所述标记点的图像及所述图像序列转换成灰度图;Decoloring step, converting the image of the background scene, the image of the marker point and the image sequence into a grayscale image; 分割步骤,以所述背景场景的图像为参考帧,从所述图像序列中分割出局部图像构成前景图像序列,所述局部图像包括所述机器人的图像;所述分割步骤包括构建步骤、二值化步骤及裁选步骤;所述构建步骤包括对经去色处理后的图像序列中每帧图像与所述参考帧上对应像素点的灰度值进行求差取绝对值处理,构建出差值帧序列;所述二值化步骤包括基于预定阈值,对所述差值帧序列进行二值化处理,而用黑白两色分别表征所述机器人与所述背景场景;所述裁选步骤包括基于机器人颜色区域的坐标数据,利用矩形边界从经去色处理后的图像序列上裁出前景区域序列,所述矩形边界完全包容所述机器人颜色区域,所述机器人颜色区域为表征所述机器人的颜色区域;The segmentation step, taking the image of the background scene as a reference frame, and segmenting a partial image from the image sequence to form a foreground image sequence, the partial image including the image of the robot; the segmentation step includes a construction step, a binary image The step of decolorization and the step of cropping; the step of constructing includes performing the difference and absolute value processing on the gray value of each frame of image in the decolorized image sequence and the gray value of the corresponding pixel on the reference frame, and constructing the difference value frame sequence; the binarization step includes performing a binarization process on the difference frame sequence based on a predetermined threshold, and using black and white to represent the robot and the background scene respectively; the cropping step includes based on The coordinate data of the robot color area, the foreground area sequence is cut out from the decolorized image sequence by using a rectangular boundary, the rectangular boundary completely contains the robot color area, and the robot color area is the color that characterizes the robot area; 匹配步骤,以所述标记点的图像为模板,从所述前景图像序列中匹配出各标记点并获取其坐标数据;所述标记点包括固设在所述机器人的行走机构的各关节位置处关节标记点,所述匹配步骤包括预匹配步骤与再匹配步骤;In the matching step, using the image of the marked point as a template, each marked point is matched from the foreground image sequence and its coordinate data is obtained; Joint marking points, the matching step includes a pre-matching step and a re-matching step; 所述预匹配步骤包括以所述模板为基准,遍历所述前景区域序列的前景区域,计算所述前景区域中以坐标(x,y)是的像素点为中心的局部区域与所述模板的负相关度R(x,y),并依据所述负相关度小于预设阈值为基准,获取像素点簇构成用于表征所述局部区域具有一个标记点的预选标记点簇;The pre-matching step includes taking the template as a reference, traversing the foreground areas of the foreground area sequence, and calculating the difference between the local area centered on the pixel point with coordinates (x, y) in the foreground area and the template. 
Negative correlation degree R(x, y), and based on the negative correlation degree being less than a preset threshold as a reference, obtain a pixel cluster to form a pre-selected marker cluster for representing that the local area has a marker; 所述再匹配步骤包括在一个所述预选标记点簇中,以所述负相关度最小的像素点表征所述局部区域内标记点的坐标;The re-matching step includes in one of the pre-selected marking point clusters, representing the coordinates of the marking points in the local area with the pixel point with the least negative correlation; 所述负相关度R(x,y)的计算公式为:The calculation formula of the negative correlation degree R(x, y) is:
Figure FDA0002598708660000021
Figure FDA0002598708660000021
其中,T(x′,y′)为所述模板中坐标为(x′,y′)像素点的灰度值,所述模板上像素点坐标以其中心点为原点所构建坐标系中的坐标,I(x+x′,y+y′)为所述前景区域中坐标为(x+x′,y+y′)的像素点的灰度值,所述前景区域上像素点坐标为该像素点在所述图像序列中的坐标;Wherein, T(x', y') is the gray value of the pixel point whose coordinates are (x', y') in the template, and the coordinates of the pixel point on the template take the center point as the origin in the coordinate system constructed Coordinates, I(x+x', y+y') is the gray value of the pixel whose coordinates are (x+x', y+y') in the foreground area, and the coordinates of the pixel on the foreground area are the coordinates of the pixel in the image sequence; 计算步骤,依据所述各标记点的坐标数据计算所述机器人在行走过程中的行走姿态。The calculation step is to calculate the walking posture of the robot during the walking process according to the coordinate data of the marked points.
2. The method according to claim 1, wherein:

after the binarization step and before the cropping step, the binarized difference frames are subjected to dilation;

the robot color region is the color region after dilation;

after the decolorization step and before the segmentation step, the decolorized image sequence is smoothed;

the smoothing applies a Gaussian blur to every pixel of the image.

3. The method according to claim 1, wherein:

the images of the marker point include a front-view image, a left-oblique-view image and a right-oblique-view image of the marker point;

after the re-matching step and before the calculation step, the color of the local region at the corresponding point of the color image in the image sequence is read from the coordinates of each matched marker point, and the real marker points are screened out according to whether the color of that local region matches the color of the marker point.

4. The method according to any one of claims 1 to 3, wherein:

the marker point comprises a circular center portion and an annular portion surrounding the center portion, one of the center portion and the annular portion having a white surface and the other a black surface.

5. A system for obtaining the walking posture of a robot, comprising a processor and a memory, the memory storing a computer program, wherein, when executed by the processor, the computer program realizes the following steps:

a receiving step of receiving the image of the background scene and the images of the marker points collected by the camera, and receiving the image sequence, collected by a monocular camera, of the robot walking in the background scene, the marker points being fixed on the robot to mark the walking trajectories at the fixed positions;

a decolorization step of converting the image of the background scene, the images of the marker points and the image sequence into grayscale images;

a segmentation step of segmenting, with the image of the background scene as a reference frame, partial images out of the image sequence to form a foreground image sequence, the partial images including the image of the robot; the segmentation step comprises a construction step, a binarization step and a cropping step; the construction step comprises taking, for each frame of the decolorized image sequence, the absolute value of the difference between its gray values and those of the corresponding pixels of the reference frame, thereby constructing a sequence of difference frames; the binarization step comprises binarizing the difference frame sequence against a predetermined threshold so that black and white respectively represent the robot and the background scene; the cropping step comprises cutting, based on the coordinate data of the robot color region, a sequence of foreground regions out of the decolorized image sequence with a rectangular boundary that fully contains the robot color region, the robot color region being the color region representing the robot;

a matching step of matching, with the images of the marker points as templates, each marker point out of the foreground image sequence and obtaining its coordinate data; the marker points include joint marker points fixed at the joint positions of the walking mechanism of the robot; the matching step comprises a pre-matching step and a re-matching step;

the pre-matching step comprises traversing, with the template as reference, the foreground regions of the foreground region sequence, computing the negative correlation R(x, y) between the template and the local region centered on the pixel at coordinates (x, y) of the foreground region, and collecting, with the negative correlation falling below a preset threshold as the criterion, pixel clusters as pre-selected marker-point clusters, each representing one marker point in its local region;

the re-matching step comprises taking, within one pre-selected marker-point cluster, the pixel with the smallest negative correlation as the coordinates of the marker point in the local region;

the negative correlation R(x, y) is computed as:
$$R(x,y)=\frac{\sum_{x',y'}\bigl(T(x',y')-I(x+x',y+y')\bigr)^{2}}{\sqrt{\sum_{x',y'}T(x',y')^{2}\cdot\sum_{x',y'}I(x+x',y+y')^{2}}}$$
wherein T(x′, y′) is the gray value of the pixel at coordinates (x′, y′) in the template, the coordinates of template pixels being expressed in a coordinate system whose origin is the template's center point, and I(x+x′, y+y′) is the gray value of the pixel at coordinates (x+x′, y+y′) in the foreground region, the coordinates of foreground-region pixels being the coordinates of those pixels in the image sequence;

a calculation step of calculating the walking posture of the robot during the walking process from the coordinate data of each marker point.
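Read end to end, claim 5's segmentation chain maps onto a handful of standard image operations. The sketch below renders one frame of that chain, with claim 2's Gaussian smoothing and dilation folded in; OpenCV is an assumed implementation choice, as are the 5×5 kernels, the threshold of 30 and the two dilation iterations.

```python
import cv2
import numpy as np

def crop_foreground(background_bgr, frame_bgr, thresh=30):
    """Cut the rectangle containing the robot out of one decolorized frame."""
    ref = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)   # decolorization
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    ref = cv2.GaussianBlur(ref, (5, 5), 0)                   # smoothing (claim 2)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    diff = cv2.absdiff(gray, ref)                            # difference frame
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Dilation (claim 2) closes small holes in the robot color region.
    binary = cv2.dilate(binary, np.ones((5, 5), np.uint8), iterations=2)
    ys, xs = np.nonzero(binary)                              # robot color region
    if xs.size == 0:
        return None                                          # no foreground found
    # Rectangular boundary that fully contains the robot color region.
    return gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

Cropping to this rectangle is what keeps the later template matching cheap: the per-pixel correlation only has to sweep the foreground region rather than every full frame of the sequence.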
6. The system according to claim 5, wherein:

the marker point comprises a circular center portion and an annular portion surrounding the center portion, one of the center portion and the annular portion having a white surface and the other a black surface.
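The concentric black-and-white marker of claims 4 and 6 can be rendered synthetically for use as the matching template; the sketch below is one hypothetical rendering (the 32-pixel size and the radius ratios are illustrative choices, not taken from the patent).

```python
import cv2
import numpy as np

def make_marker_template(size=32, white_center=True):
    """Render a claim-6 style marker: a center disc inside a contrasting ring."""
    c = size // 2
    template = np.zeros((size, size), np.uint8)        # black field
    if white_center:
        cv2.circle(template, (c, c), size // 4, 255, -1)   # white center disc
    else:
        cv2.circle(template, (c, c), c - 1, 255, -1)       # white annular part
        cv2.circle(template, (c, c), size // 4, 0, -1)     # black center disc
    return template
```

A high-contrast disc inside a ring keeps its identity after grayscale conversion and blurring, which suits the gray-value template matching the claims describe; `make_marker_template(32, True)` could serve directly as the template argument in the matching sketch above.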
CN201811221729.6A 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of a robot Expired - Fee Related CN109523551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811221729.6A CN109523551B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of a robot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811221729.6A CN109523551B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of a robot
CN201711394246.1A CN107967687B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of an object

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201711394246.1A Division CN107967687B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of an object

Publications (2)

Publication Number Publication Date
CN109523551A (en) 2019-03-26
CN109523551B (en) 2020-11-10

Family

ID=61995662

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201711394246.1A Expired - Fee Related CN107967687B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of an object
CN201811221729.6A Expired - Fee Related CN109523551B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of a robot

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201711394246.1A Expired - Fee Related CN107967687B (en) 2017-12-21 2017-12-21 A method and system for obtaining the walking posture of an object

Country Status (1)

Country Link
CN (2) CN107967687B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110405777B (en) * 2018-04-28 2023-03-31 深圳果力智能科技有限公司 Interactive control method of robot
CN109102527B (en) * 2018-08-01 2022-07-08 甘肃未来云数据科技有限公司 Method and device for acquiring video action based on identification point
CN110334595B (en) * 2019-05-29 2021-11-19 北京迈格威科技有限公司 Dog tail movement identification method, device, system and storage medium
CN114820689A (en) * 2019-10-21 2022-07-29 深圳市瑞立视多媒体科技有限公司 Identification method, device, equipment and storage medium of marking point
CN110969747A (en) * 2019-12-11 2020-04-07 盛视科技股份有限公司 Anti-following access control system and anti-following method
CN111491089A (en) * 2020-04-24 2020-08-04 厦门大学 A method for monitoring a target on a background using an image acquisition device
CN113916445A (en) * 2021-09-08 2022-01-11 广州航新航空科技股份有限公司 Method, system and device for measuring rotor wing common taper and storage medium
CN115530813B (en) * 2022-10-20 2024-05-10 吉林大学 Marking system for testing and analyzing multi-joint three-dimensional movement of upper body of human body
CN115880783B (en) * 2023-02-21 2023-05-05 山东泰合心康医疗科技有限公司 Child motion gesture recognition method for pediatric healthcare

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100475140C (en) * 2006-11-29 2009-04-08 华中科技大学 A computer-aided gait analysis method based on monocular video
CN101853333B (en) * 2010-05-26 2012-11-07 中国科学院遥感应用研究所 Method for picking marks in medical robot navigation positioning images
US9975248B2 (en) * 2012-03-21 2018-05-22 Kenneth Dean Stephens, Jr. Replicating the remote environment of a proxy robot
CN103577795A (en) * 2012-07-30 2014-02-12 索尼公司 Detection equipment and method, detector generation equipment and method and monitoring system
CN103198492A (en) * 2013-03-28 2013-07-10 沈阳航空航天大学 Human motion capture method
CN103473539B (en) * 2013-09-23 2015-07-15 智慧城市系统服务(中国)有限公司 Gait recognition method and device
CN104408718B * 2014-11-24 2017-06-30 中国科学院自动化研究所 A gait data processing method based on binocular vision photogrammetry
CN105468896B (en) * 2015-11-13 2017-06-16 上海逸动医学科技有限公司 Joint motions detecting system and method
TW201727418A (en) * 2016-01-26 2017-08-01 鴻海精密工業股份有限公司 Analysis of the ground texture combined data recording system and method for analysing
CN106373140B (en) * 2016-08-31 2020-03-27 杭州沃朴物联科技有限公司 Transparent and semitransparent liquid impurity detection method based on monocular vision
CN107273611B (en) * 2017-06-14 2020-11-10 北京航空航天大学 A gait planning method for a lower limb rehabilitation robot based on the walking characteristics of the lower limbs

Also Published As

Publication number Publication date
CN107967687A (en) 2018-04-27
CN109523551A (en) 2019-03-26
CN107967687B (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN109523551B (en) A method and system for obtaining the walking posture of a robot
CN104715238B (en) A pedestrian detection method based on multi-feature fusion
JP5873442B2 (en) Object detection apparatus and object detection method
CN106874884B (en) Human body re-identification method based on part segmentation
CN110210276A (en) A motion track acquisition method and its device, storage medium and terminal
CN106023151B (en) A tongue detection method in an open environment
CN104091324A (en) Quick checkerboard image feature matching algorithm based on connected domain segmentation
CN107239748A (en) Robot target identification and localization method based on checkerboard calibration technique
JP4373840B2 (en) Moving object tracking method, moving object tracking program and recording medium thereof, and moving object tracking apparatus
CN111160291B (en) Human eye detection method based on depth information and CNN
CN102034247B (en) Motion capture method for binocular vision image based on background modeling
CN110599522B (en) Method for detecting and removing dynamic target in video sequence
CN108921881A (en) A cross-camera target tracking method based on homography constraint
CN107705254A (en) An urban environment assessment method based on street-view images
CN107944437B (en) A face detection method based on neural network and integral image
CN109285183A (en) A Multimodal Video Image Registration Method Based on Motion Region Image Sharpness
CN103164843B (en) A medical image colorization method
Hua et al. Background extraction using random walk image fusion
CN107145820B (en) Binocular positioning method based on HOG features and FAST algorithm
CN110111368B (en) Human body posture recognition-based similar moving target detection and tracking method
Yang et al. Upper limb movement analysis via marker tracking with a single-camera system
CN104881669A (en) Method and system for extracting local area detector based on color contrast
WO2023193763A1 (en) Data processing method and apparatus, and tracking mark, electronic device and storage medium
JP2011150594A (en) Image processor and image processing method, and program
CN116071323A (en) Rain intensity measuring method based on camera parameter normalization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201110