CN109523551B - Method and system for acquiring walking posture of robot

Info

Publication number
CN109523551B
CN109523551B (application CN201811221729.6A)
Authority
CN
China
Prior art keywords: image, robot, point, sequence, foreground
Prior art date
Legal status: Active
Application number
CN201811221729.6A
Other languages
Chinese (zh)
Other versions
CN109523551A (en)
Inventor
杨灿军
朱元超
魏谦笑
杨巍
武鑫
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN201811221729.6A
Publication of CN109523551A
Application granted
Publication of CN109523551B

Classifications

    • G06T 7/10 Image analysis: Segmentation; Edge detection
    • G06T 7/194 Image analysis: Segmentation involving foreground-background segmentation
    • G06T 7/38 Image analysis: Registration of image sequences
    • G06T 2207/10016 Image acquisition modality: Video; Image sequence
    • G06T 2207/30196 Subject of image: Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for acquiring the walking posture of a robot, and belongs to the technical field of image processing. The method comprises the following steps: (1) acquiring an image of a background scene, an image of a mark point, and an image sequence of a robot walking through the background scene, where the mark points are fixed to the robot to mark the trajectories of fixed positions; (2) taking the image of the background scene as a reference frame, segmenting from the image sequence the local images that contain the robot, thereby forming a foreground image sequence; (3) using the image of the mark point as a template, matching each mark point within the foreground image sequence and acquiring its coordinate data; (4) calculating the walking posture of the robot during walking from the coordinate data of the mark points. The method effectively reduces both the cost of the equipment needed to acquire the walking posture and the amount of subsequent image-processing computation, and can be widely applied in robotics and related fields.

Description

Method and system for acquiring walking posture of robot
This application is a divisional application of the invention patent with application number CN201711394246.1, entitled "Method and system for acquiring the walking posture of a target".
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for acquiring a walking posture of a robot.
Background
Wearable exoskeleton robots are suited to the rehabilitation of patients with stroke, hemiplegia, and similar conditions. To obtain rehabilitation data, the motion of the wearer's joints is usually captured so that the limb posture under the loaded exoskeleton can be judged, and the wearer's walking posture is reconstructed from the limb-posture changes over at least one gait cycle.
A common approach is based on computer vision: a camera or similar device tracks specific mark points on the target object and computes their positions to measure its motion. Typical mark points are retro-reflective or self-luminous, and their size and shape can be chosen as required.
A typical optical motion-capture system uses multiple cameras arranged around the arena, with the overlap of their fields of view forming the workspace. To simplify the subsequent image processing, the subject usually wears a black tight-fitting garment with special optical markers attached to key parts such as the main joints. The system must first be calibrated; the cameras then record the subject's motion, and the captured image sequences are stored and analyzed to identify the optical markers and compute their spatial positions at each moment, from which the motion trajectory of the target object is reconstructed. Obtaining an accurate trajectory requires the cameras to shoot at a high frame rate.
Such existing measurement systems require multiple cameras, which makes them costly, and the subsequent image processing is computationally heavy. They also place strict demands on the lighting and reflectance of the test site, which limits the application scenarios and hinders the use of wearable exoskeleton robots in patient rehabilitation.
Disclosure of Invention
The main object of the invention is to provide a method for acquiring the walking posture of a robot that reduces the cost of the required equipment and the amount of subsequent image-processing computation; a further object is to provide a system for acquiring the walking posture of a robot with the same advantages.
To achieve the main object, the method for acquiring the walking posture of a robot comprises an acquisition step, a segmentation step, a matching step and a calculation step. The acquisition step comprises acquiring an image of the background scene, an image of the mark point, and an image sequence of the robot walking through the background scene, where the mark points are fixed to the robot to mark the trajectories of fixed positions. The segmentation step comprises taking the image of the background scene as a reference frame and segmenting from the image sequence the local images that contain the robot, thereby forming a foreground image sequence. The matching step comprises using the image of the mark point as a template to match each mark point in the foreground image sequence and acquire its coordinate data. The calculation step comprises calculating the walking posture of the robot during walking from the coordinate data of the mark points.
By taking the background scene as a reference frame, segmenting from each frame of the image sequence a local region that at least contains the robot, and treating this region as the foreground image on which subsequent processing operates, most of the background need not be processed, which effectively reduces the amount of subsequent image-processing computation. At the same time, because the mark-point image is used as a template and the mark-point positions are matched within the segmented local regions, icon-type mark points can be used and the images can be collected with a monocular camera, which effectively reduces the cost of the required equipment.
In a specific scheme, a decolorizing step is carried out after the acquisition step and before the segmentation step; it converts the image of the background scene, the image of the mark point and the image sequence into gray-scale images. Using gray-scale images as the objects of the subsequent segmentation and matching steps further reduces the amount of computation.
In a more specific scheme, the segmentation step comprises a construction step, a binarization step and a clipping step. The construction step computes, for each frame of the decolorized image sequence, the absolute difference between its gray values and those of the corresponding pixels of the reference frame, constructing a difference-frame sequence. The binarization step binarizes the difference-frame sequence against a predetermined threshold, representing the robot and the background scene in black and white respectively. The clipping step clips a foreground-region sequence from the decolorized image sequence using a rectangular boundary, based on the coordinate data of the robot color region; the rectangular boundary completely contains the robot color region, which is the color region representing the robot.
In a further scheme, after the binarization step and before the clipping step, the binarized difference frames are dilated to sharpen the boundary between the robot and the background and facilitate the subsequent clipping; the robot color region is then the dilated color region. After the decolorizing step and before the segmentation step, the decolorized image sequence is smoothed to reduce noise introduced during shooting.
In a preferred scheme, the mark points comprise joint mark points fixed at the joints of the robot's walking mechanism, and the matching step comprises a pre-matching step and a re-matching step. The pre-matching step traverses the foreground region with the template as reference, computes the negative correlation R(x, y) between the template and the local region centered on the pixel with coordinates (x, y), and, taking negative correlations below a predetermined threshold as indicating that the local region contains a mark point, collects the qualifying pixels into preselected mark-point clusters. The re-matching step represents the coordinates of the mark point in each local area by the pixel with the minimum negative correlation within its preselected cluster. The negative correlation R(x, y) is calculated as:
R(x, y) = Σ_{x′,y′} [T(x′, y′) - I(x + x′, y + y′)]² / √( Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)² )
where T(x′, y′) is the gray value of the template pixel with coordinates (x′, y′), template coordinates being taken in a coordinate system whose origin is the template's center point, and I(x + x′, y + y′) is the gray value of the foreground-region pixel with coordinates (x + x′, y + y′), foreground-region coordinates being those of the pixel within the image sequence.
In a more preferred scheme, the images of the mark points comprise a front-view image, a left oblique-view image and a right oblique-view image of the mark points, to handle images shot by the monocular camera when the marker is viewed obliquely. After the re-matching step and before the calculation step, the local-area color of the corresponding point on the color image in the image sequence is obtained from the matched mark-point coordinates, and a point is screened as a real mark point only if its local area matches the color of the mark point. Screening the matched points against the color information in the color image effectively avoids matching false mark points in the matching step.
Another preferred solution is that the sequence of images is acquired by a monocular camera.
In still another preferred embodiment, the mark point comprises a circular center portion and an annular portion surrounding it, one of them having a white surface and the other a black surface. Composing the mark point of two mutually nested parts with a sharp color difference effectively improves the recognition accuracy of the mark point, while placing no restriction on the robot's outward color.
To achieve the other object, the system for acquiring the walking posture of a robot according to the invention comprises a processor and a memory storing a computer program which, when executed by the processor, implements the following receiving step, segmentation step, matching step and calculation step. The receiving step receives the image of the background scene and the image of the mark point acquired by a camera, and the image sequence of the robot walking through the background scene acquired by a monocular camera, where the mark points are fixed to the robot to mark the trajectories of fixed positions. The segmentation step takes the image of the background scene as a reference and segments from the image sequence the local images that contain the robot, forming a foreground image sequence. The matching step uses the image of the mark point as a template to match each mark point in the foreground image sequence and acquire its coordinate data. The calculation step calculates the walking posture of the robot during walking from the coordinate data of the mark points.
In a specific scheme, after the acquisition step and before the segmentation step, the image of the background scene, the image of the mark point and the image sequence are converted into gray-scale images; the mark points comprise joint mark points fixed at the joints of the robot's walking mechanism; and the mark point comprises a circular center portion and an annular portion surrounding it, one having a white surface and the other a black surface.
Drawings
FIG. 1 is a flowchart illustrating an embodiment of a method for obtaining a walking posture of a target according to the present invention;
fig. 2 is a schematic diagram of a process of segmenting an image in an embodiment of a method for obtaining a walking posture of an object according to the present invention, where fig. 2(a) is a background scene image as a segmentation reference frame, fig. 2(b) is a frame in an image sequence to be segmented, fig. 2(c) is a schematic diagram of a difference frame, fig. 2(d) is an image after binarization processing, fig. 2(e) is an image after dilation processing, and fig. 2(f) is a schematic diagram of a foreground image segmented from the image by a rectangular boundary;
fig. 3 shows the mark-point templates under different viewing angles used in the matching step of the method for obtaining the walking posture of the target object according to the present invention, wherein fig. 3(a) is the template image under a left oblique viewing angle, fig. 3(b) is the template image under a front viewing angle, and fig. 3(c) is the template image under a right oblique viewing angle;
FIG. 4 is a schematic process diagram of a pre-matching step in an embodiment of a method for obtaining a walking posture of a target according to the present invention;
FIG. 5 is a schematic view of a walking posture calculation process performed by the calculation step in the embodiment of the method for obtaining the walking posture of the target object according to the present invention;
fig. 6 is a schematic structural block diagram of an embodiment of the walking posture detection system of the present invention.
The invention is further illustrated by the following examples and figures.
Detailed Description
In the following embodiments, the method and system for acquiring the walking posture of a target object are described by example, taking as the target a person wearing the exoskeleton robot and acquiring the walking posture of the person's lower limbs. The application scenarios of the method and system are not limited to those shown below; they can also be used to acquire the walking posture of other target objects such as robots and robot dogs.
Method embodiment
Referring to fig. 1, the method for acquiring the walking posture of the target object includes an acquisition step S1, a decoloring step S2, a denoising step S3, a segmentation step S4, a matching step S5, a screening step S6, and a calculation step S7.
Firstly, an acquisition step S1, acquiring an image of a background scene, an image of a mark point and an image sequence of a walking process of an object in the background scene, wherein the mark point is fixedly arranged on the object and used for marking a walking track at a fixed position.
As shown in fig. 2(b), to mark the lower-limb posture during walking, at least one mark point is arranged at each of the ankle, knee and hip joints of the person's lower limbs. In this embodiment the mark points are icon-type mark points which, as shown in fig. 4, consist of a white circular center portion and a black annular portion arranged around it; the colors may of course be swapped. Because the mark point is composed of parts with a strong black-white contrast, the original contrast is preserved in the mark-point image after the decolorizing processing of step S2, which facilitates the subsequent recognition; moreover, composing the mark point of a center portion and a surrounding ring makes it convenient to recover the position of the center point by correcting the imaging perspective when the marker is viewed obliquely. Mark points of a single color, or of three or more colors, can also be used; a monochrome mark point is preferably of a color that, after decolorizing, differs strongly from the colors surrounding its fixed position. For a multicolor split structure of two or more colors, the parts need not be circular: for example, four squares can be tiled so that adjacent squares alternate black and white, and the center of the mark point is then taken at the intersection of the colored blocks. When the images are to be decolorized, black-white contrast is preferred, though other multicolor split structures with a large post-decolorization contrast may also be used; if the images need not be decolorized, color combinations with a large chromatic difference can be adopted.
The images collected by the camera are captured as data frames; many frames in rapid succession form a video stream, and each frame contains gait information of the target object. In the stored video stream, each frame captured by the camera exists as a matrix array. In this embodiment the camera that acquires the background-scene image and the image sequence is a monocular camera installed beside the person's walking path; the walking path is preferably a straight line, the whole walking process and the whole background scene lie within the camera's field of view, and the camera may be installed on the central axis of the background scene.
By default, each frame captured by the camera is represented in the BGR color space. BGR has three channels, i.e. three matrices in the array, representing the blue, green and red primary components; the color of each pixel can be decomposed into these three colors mixed in different proportions, and the proportion of each color is stored at the corresponding position of its channel.
A color removal step S2, performing color removal processing on the collected background scene image, mark point image and image sequence to convert the color image into a gray scale image.
Although image data in BGR format retains the most optical information, not every operation in finding the mark points needs all of it. Compressing the three-channel array into a single channel loses some data but presents the really useful information better and speeds up the search for the mark points. The color picture is therefore compressed into a gray-scale picture which, like the color picture, still reflects the global and local distribution of chromaticity and brightness levels of the whole image. The matrix transformation is performed according to the following formula 1:
Y=0.114·B+0.587·G+0.299·R
where Y is the matrix of the gray-scale picture; the larger a value in the matrix, the whiter the pixel at that position, and the smaller the value, the blacker the pixel.
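As a non-limiting illustration, formula 1 uses the standard BT.601 luma weights, which OpenCV also applies; a minimal Python sketch (the file name is an assumption):

```python
import cv2
import numpy as np

frame = cv2.imread("frame_0001.png")                # BGR by default; path assumed

# Formula 1 applied per pixel: Y = 0.114*B + 0.587*G + 0.299*R
b, g, r = cv2.split(frame.astype(np.float32))
gray_manual = (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

# cv2.cvtColor applies the same BT.601 weights and is the idiomatic route.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```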
And a noise reduction step S3, wherein the image sequence after the color removal processing is subjected to smoothing processing.
Due to factors such as vibration, illumination changes or hardware problems, noise exists in every acquired frame. Smoothing the data frame effectively removes this noise and prevents it from interfering with the subsequent detection.
In this embodiment the smoothing uses Gaussian blur: each pixel is replaced by a weighted average of its surrounding pixels, with weights following a normal distribution, so that nearer points weigh more and farther points weigh less. In practice, taking a 21 × 21 window centered on the target pixel and averaging the surrounding pixels within it was found to remove noise best.
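A minimal sketch of this smoothing with OpenCV (the file name is an assumption):

```python
import cv2

gray = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)   # path assumed

# 21 x 21 neighborhood weighted by a normal distribution, as described;
# sigma=0 lets OpenCV derive the standard deviation from the kernel size.
smoothed = cv2.GaussianBlur(gray, (21, 21), 0)
```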
And a segmentation step S4, taking the image of the background scene as a reference frame, segmenting partial images from the image sequence to form a foreground image sequence, wherein the partial images comprise the image of the target object.
In the actual shooting environment, the target person walks completely across the field of view of the monocular camera from left to right, exposing to the camera the mark points fixed at the three joints of the lower limbs. As the monocular camera captures the optical information, the three-dimensional scene is mapped onto a two-dimensional plane. In the resulting video, because the camera is static, each frame can be roughly divided into a foreground part and a background part: the foreground is the local region occupied by the moving target person, and the background is everything else in the environmental scene. As the target person walks into the frame at one end of the field of view and out at the other, the foreground region slides continuously across the background region, while the background itself stays still throughout the video with different parts of it being occluded by the moving foreground. In terms of area, the foreground occupies a small part of each frame and the background a large part. Since the mark points to be located lie in the foreground region, scanning the background wastes computation and time, and when the matching threshold is set low it may even yield erroneous matches that interfere with the extraction of the mark-point data.
If the foreground region can be separated from the whole data frame, the mark points need only be searched for within it, ignoring the background region that occupies most of the area, which greatly improves the search efficiency.
Therefore, in this embodiment, segmenting the foreground region from the data frame and using it as the object of subsequent image processing effectively reduces the amount of computation and improves the positioning accuracy of the target mark points. In practice, at the beginning of the video the subject has not yet entered the shot, so the first frame of the video stream is taken to belong entirely to the background, i.e. it forms the background-scene image; this frame serves as the reference background frame, and in the processing of subsequent data frames the foreground and background are divided against it.
In this embodiment, the segmentation of the foreground region is based on threshold binarization, and specifically includes a construction step S41, a binarization step S42, an expansion step S43, and a clipping step S44.
(1) And a construction step S41, wherein the gray values of each frame of image in the image sequence after the decoloring processing and the corresponding pixel points on the reference frame are subjected to difference value and absolute value calculation processing, so as to construct a difference value frame sequence.
For any data frame M, it is calculated according to the following formula 2:
absdiff(I) = |M(I) - M_o(I)|

where M_o is the reference background frame, M(I) is the frame to be processed, I denotes a position in the data frame, and absdiff is a matrix array whose element at position I is the absolute value of the difference between the corresponding gray values.
In the gray-scale image obtained by the decolorizing processing, each element takes values from 0 to 255; it follows that each element of absdiff also lies in the range 0 to 255, so absdiff can itself be regarded as a gray-scale image. The result is shown in fig. 2(c).
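A minimal sketch of formula 2 with OpenCV's absdiff (file names are assumptions):

```python
import cv2

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # reference frame M_o
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)       # frame M to process

# Formula 2, element-wise |M(I) - M_o(I)|; the result stays in 0..255
# and can itself be treated as a gray-scale image.
diff = cv2.absdiff(frame, background)
```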
(2) A binarization step S42, performing binarization processing on the sequence of difference frames based on a predetermined threshold, and respectively representing the object and the background scene by black and white.
Setting a threshold binarizes absdiff into a pure black-and-white image whose white and black regions roughly represent the distribution of foreground and background; the result is shown in fig. 2(d).
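A minimal sketch of this binarization; the numeric threshold is an assumed placeholder, since the text does not fix one:

```python
import cv2

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
diff = cv2.absdiff(frame, background)

# Pixels above the threshold become white (foreground candidates), the rest
# black. The value 30 is an assumed placeholder, not from the patent text.
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
```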
(3) In the expansion step S43, the difference frame after the binarization processing is subjected to expansion processing.
The binarized image inevitably contains much noise, because the data points near the foreground-background boundary fall on either side of the threshold, making an ideal black-white boundary hard to draw. Dilation trims these burrs and eliminates the noise points.
First, a 5 × 5 structuring matrix {E_ij}, i, j = 1, 2, 3, 4, 5, is defined; its generation rule is given as formula 3 (not reproduced here). The size of the structuring matrix can be adjusted to the actual situation, with the corresponding parameter in the generation rule changed to a suitable number; the 5 × 5 size used here is an empirical value.
Next, after the structuring element has been generated, the whole image is traversed with it, and the dilated binary image is obtained according to the following rule:

dilate(x, y) = max absdiff(x + x′, y + y′) over all (x′, y′) with E(x′, y′) ≠ 0

where (x, y) are the coordinates of the pixel being processed and E(x′, y′) ranges over the elements of the matrix {E_ij}.
In the resulting dilate image the white portion expands into a relatively continuous region and the boundary becomes clearer, as shown in fig. 2(e).
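A minimal sketch of the dilation; because the generation rule of {E_ij} is not reproduced, the 5 × 5 elliptical element below is an assumption:

```python
import cv2

# binary is the thresholded difference frame from the previous sketch.
# A 5x5 elliptical element stands in for {E_ij}; the exact generation rule
# (formula 3) is not reproduced in the text, so this shape is an assumption.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# cv2.dilate realizes dilate(x, y) = max over the neighborhood where E != 0,
# merging the white foreground pixels into a more continuous region.
dilated = cv2.dilate(binary, kernel, iterations=1)
```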
(4) And a clipping step S44, based on the coordinate data of the target object color area, clipping a foreground area sequence from the image sequence after the color removal processing by using a rectangular boundary, wherein the rectangular boundary completely contains the target object color area, and the target object color area is a color area representing the target object.
According to the distribution of the white part, a rectangular boundary frames the complete region on the data frame of the image sequence; this region is taken as the foreground and the remainder as the background. The result is shown in fig. 2(f).
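A minimal sketch of the clipping, assuming `dilated` and `gray` from the sketches above; the rectangle is the tightest axis-aligned box around the white pixels:

```python
import numpy as np

# Coordinates of all white pixels in the dilated mask.
ys, xs = np.nonzero(dilated)
y0, y1 = ys.min(), ys.max()
x0, x1 = xs.min(), xs.max()

# Crop the same rectangle out of the decolorized frame as the foreground.
foreground = gray[y0:y1 + 1, x0:x1 + 1]
```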
And a matching step S5, matching each marking point from the foreground image sequence by taking the image of the marking point as a template and acquiring coordinate data of the marking point.
In this step, the three marker-point images acquired in advance serve as matching templates, and the foreground region segmented in the previous step is traversed against them. The step comprises a pre-matching step S51 and a re-matching step S52.
(1) Pre-matching step S51: traverse the foreground region with the template as reference, compute the negative correlation R(x, y) between the template and the local region centered on the pixel at (x, y), and, taking negative correlations below a predetermined threshold as indicating that the local region contains a mark point, collect the qualifying pixels into preselected mark-point clusters.
As the template traverses to a local region of the foreground centered at (x, y), their normalized sum of squared differences (SQDIFF_NORMED) is calculated using the following formula 4:
R(x, y) = Σ_{x′,y′} [T(x′, y′) - I(x + x′, y + y′)]² / √( Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)² )
where R measures the (negative) correlation with the template: the smaller the R value, the better the pixel and its surrounding region match the template. T(x′, y′) is the gray value of the template pixel at (x′, y′), template coordinates being taken in a coordinate system with the template's center point as origin; I(x + x′, y + y′) is the gray value of the foreground-region pixel at (x + x′, y + y′), foreground-region coordinates being those of the pixel within the image sequence, with the upper-left or lower-left corner of the image usually taken as the origin.
Because the mark point on the target person is not always directly facing the camera during actual measurement, the captured mark-point image is not always the geometrically clean concentric circles of fig. 3(b); irregular oval shapes like those in figs. 3(a) and 3(c) also appear. Therefore, for each mark point, three templates are designed for the front-view, left oblique-view and right oblique-view conditions, as shown in figs. 3(b), 3(a) and 3(c) respectively.
During traversal, the matching degrees between the three templates and the local foreground region are calculated separately and the smallest result is retained, per the following formula 5:
R(x, y) = min{ R_front(x, y), R_left(x, y), R_right(x, y) }
after traversing is completed, R values corresponding to most pixel points are larger, which indicates that the region cannot be matched with the template; the R value of a few pixels is reduced to a small range, which shows that the region is very close to the template. In this embodiment, 0.1 is taken as the threshold, the pixel area with the R value greater than the threshold is discarded, otherwise, the area is considered as the area where the mark point is located, and the central pixel coordinate data is recorded.
(2) Re-matching step S52: within each preselected mark-point cluster, represent the coordinates of the mark point in the local area by the pixel with the minimum negative correlation R.
Inspecting the coordinate data produced by the template-matching layer reveals substantial overlap: well-matching points usually come in piles, each with a sum of squared differences below the threshold and therefore recorded. Since the coordinate points in one cluster all represent the same mark point, the most representative point of the cluster can be chosen to stand for the mark point and the remaining coordinates discarded as noise. The mark points are selected using the following formula 6:
(x, y) = argmin R(x′, y′), (x′, y′) ∈ Range
and selecting the coordinate point with the lowest R value as the mark point with the best matching degree in the cluster of coordinate points.
And a screening step S6, acquiring the local area color of the corresponding point on the color image in the image sequence according to the matched mark point coordinates, and screening out the real mark point according to whether the local area is matched with the color on the mark point.
The coordinate points obtained in the matching step S5 are in theory the positions of the mark points, but in rare cases misjudgments caused by changes elsewhere in the foreground region cannot be ruled out.
Because the foreground region changes constantly, a misjudged region does not interfere persistently: misjudgments occur randomly and discontinuously. To eliminate this interference, color-gamut screening is applied to each target region as a simple additional discriminant rule.
The screening step uses this color information by comparing the colors at the coordinate points, further refining the recognition result.
The color gamut screening process comprises the following specific steps:
First, the image in BGR format is converted into HSV format by the following formulas 7 to 9 (the standard BGR-to-HSV conversion, with R, G, B normalized to [0, 1]):

V = max(R, G, B)

S = (V - min(R, G, B)) / V if V ≠ 0, and S = 0 otherwise

H = 60(G - B)/(V - min(R, G, B)) if V = R;
H = 120 + 60(B - R)/(V - min(R, G, B)) if V = G;
H = 240 + 60(R - G)/(V - min(R, G, B)) if V = B
(with 360 added if H comes out negative)
In the HSV color space the three parameters H, S, V represent hue, saturation and value (brightness), and this space is better suited than BGR for comparing how close two colors are. If the core color of a coordinate point under test is white, the point passes the check; otherwise it fails and is discarded. Whether a color falls within the white range is judged from the S and V parameters; a color is defined as white when the following formulas 10 and 11 are satisfied:
0≤S≤0.5
0.5≤V≤1
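A minimal sketch of the color-gamut screening, assuming `marker_points` from the matching sketches has already been mapped back to full-frame coordinates; note OpenCV stores S and V as 0..255, so the bounds of formulas 10 and 11 are rescaled:

```python
import cv2

frame_bgr = cv2.imread("frame_0001.png")            # color frame; path assumed
hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

def core_is_white(hsv_img, y, x, win=3):
    """Apply formulas 10 and 11 to the small core patch around (y, x).
    The 7x7 patch size is an assumed value."""
    patch = hsv_img[max(y - win, 0):y + win + 1, max(x - win, 0):x + win + 1]
    s = patch[..., 1].mean() / 255.0                # rescale S to 0..1
    v = patch[..., 2].mean() / 255.0                # rescale V to 0..1
    return s <= 0.5 and 0.5 <= v <= 1.0

real_markers = [(y, x) for y, x in marker_points if core_is_white(hsv, y, x)]
```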
the method can be used as an auxiliary means for rapidly screening whether the positioned mark points meet the requirements, and the accuracy requirement is not high because the data faced by the method is screened relatively reliably in previous layers.
Finally, the calculation step S7 calculates the walking posture of the target object during walking from the coordinate data of the acquired mark points.
As shown in fig. 5, from the coordinates of the three mark points at a given moment, i.e. the coordinates (x1, y1), (x2, y2) and (x3, y3) of the target person's hip joint 21, knee joint 22 and ankle joint 23, the angle θ between the thigh central axis 24 and the vertical and the angle α between the thigh central axis 24 and the calf central axis 25 can be calculated; these two angles characterize the walking posture of the target person's lower limb at that moment. They are calculated by the following formulas 12 and 13:
θ = arccos( (r1 · r2) / (|r1| · |r2|) )

α = arccos( (r2 · r3) / (|r2| · |r3|) )
where r1 = (0, 1), r2 = (x2 - x1, y2 - y1), and r3 = (x3 - x2, y3 - y2).
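A minimal sketch of formulas 12 and 13 with NumPy; the sample coordinates are illustrative only:

```python
import numpy as np

def lower_limb_angles(hip, knee, ankle):
    """Formulas 12 and 13: theta is the angle between the thigh axis and
    the vertical r1 = (0, 1); alpha is the angle between thigh and calf."""
    r1 = np.array([0.0, 1.0])
    r2 = np.array([knee[0] - hip[0], knee[1] - hip[1]])
    r3 = np.array([ankle[0] - knee[0], ankle[1] - knee[1]])
    theta = np.arccos(r1 @ r2 / np.linalg.norm(r2))                      # |r1| = 1
    alpha = np.arccos(r2 @ r3 / (np.linalg.norm(r2) * np.linalg.norm(r3)))
    return np.degrees(theta), np.degrees(alpha)

# Illustrative pixel coordinates for (x1, y1), (x2, y2), (x3, y3).
theta, alpha = lower_limb_angles((120, 80), (125, 140), (118, 200))
```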
Of course, other combinations of angles can be used to characterize the walking posture of the lower limbs, and physical quantities other than angles can also characterize the walking posture of the target person.
System embodiment
Referring to fig. 6, the system 1 for obtaining the walking posture of a target object according to the invention comprises a monocular camera, a processor and a memory storing a computer program which, when executed by the processor, implements the receiving step S1, the decoloring step S2, the denoising step S3, the segmentation step S4, the matching step S5, the screening step S6 and the calculation step S7.
The receiving step S1 receives the data captured by the camera. The specific acquisition process of the receiving step S1 and the contents of the decoloring step S2 through the calculation step S7 are described in detail in the method embodiment above and are not repeated here.
In the above embodiments, the decoloring step S2, the denoising step S3 and the screening step S6 are not required steps but optional optimization steps: the decoloring step S2 further reduces the amount of computation in subsequent image processing, the denoising step S3 further improves the accuracy of the subsequent matching, and the screening step S6 excludes misjudged mark points.

Claims (6)

1. A method for acquiring a walking posture of a robot is characterized by comprising the following steps:
the method comprises the following steps of collecting an image of a background scene, an image of a mark point and an image sequence of the robot in the walking process of the background scene, wherein the mark point is fixedly arranged on the robot and used for marking a walking track at a fixed position; the image sequence is acquired by a monocular camera;
a step of decolorizing, namely converting the image of the background scene, the image of the mark point and the image sequence into a gray image;
a segmentation step, taking the image of the background scene as a reference frame, and segmenting partial images from the image sequence to form a foreground image sequence, wherein the partial images comprise the image of the robot; the segmentation step comprises a construction step, a binarization step and a cutting step; the construction step comprises the steps of carrying out difference value and absolute value calculation processing on gray values of each frame of image in the image sequence after the decoloring processing and corresponding pixel points on the reference frame to construct a difference value frame sequence; the binarization step comprises the step of carrying out binarization processing on the difference frame sequence based on a preset threshold value, and respectively representing the robot and the background scene by black and white; the step of cutting comprises the steps of cutting a foreground region sequence from an image sequence subjected to color removal processing by utilizing a rectangular boundary based on coordinate data of a robot color region, wherein the rectangular boundary completely contains the robot color region, and the robot color region is a color region representing the robot;
matching, namely matching each marking point from the foreground image sequence by taking the image of the marking point as a template and acquiring coordinate data of the marking point; the marking points comprise joint marking points fixedly arranged at the positions of all joints of a walking mechanism of the robot, and the matching step comprises a pre-matching step and a re-matching step;
the pre-matching step comprises traversing a foreground area of the foreground area sequence by taking the template as a reference, calculating the negative correlation degree R (x, y) between a local area taking a pixel point with a coordinate (x, y) as a center in the foreground area and the template, and acquiring a pixel point cluster to form a pre-selected mark point cluster for representing that the local area has a mark point according to the condition that the negative correlation degree is smaller than a preset threshold value as the reference;
the step of re-matching comprises representing the coordinates of the mark points in the local area by the pixel point with the minimum negative correlation degree in one pre-selected mark point cluster;
the calculation formula of the negative correlation degree R (x, y) is as follows:
R(x, y) = Σ_{x′,y′} [T(x′, y′) - I(x + x′, y + y′)]² / √( Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)² )
wherein, T (x ', y') is a gray value of a pixel point with coordinates (x ', y') in the template, coordinates of the pixel point on the template in a coordinate system constructed with a central point thereof as an origin, I (x + x ', y + y') is a gray value of a pixel point with coordinates (x + x ', y + y') in the foreground region, and coordinates of the pixel point on the foreground region are coordinates of the pixel point in the image sequence;
and a calculating step, calculating the walking posture of the robot in the walking process according to the coordinate data of each mark point.
2. The method of claim 1, wherein:
after the binarization step and before the clipping step, performing expansion processing on the difference value frame after the binarization processing;
the color area of the robot is an expanded color area;
after the decolorizing step and before the segmenting step, carrying out smoothing treatment on the image sequence after the decolorizing treatment;
the smoothing process is to process each pixel of the image using gaussian blur.
3. The method of claim 1, wherein:
the images of the mark points comprise a front-view image, a left oblique-view image and a right oblique-view image of the mark points;
after the re-matching step and before the calculating step, obtaining the local area color of the corresponding point on the color image in the image sequence according to the matched mark point coordinates, and screening out the real mark point if the local area is matched with the color on the mark point.
4. A method according to any one of claims 1 to 3, characterized in that:
the mark point comprises a circular center part and an annular part surrounding the circular center part, wherein one surface of the circular center part and the annular part is white, and the other surface of the circular center part and the annular part is black.
5. A system for acquiring a walking posture of a robot, comprising a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, implements the following steps:
a receiving step, namely receiving an image of a background scene and an image of a mark point, which are acquired by a camera, and receiving an image sequence of a walking process of the robot in the background scene, which is acquired by a monocular camera, wherein the mark point is fixed on the robot and is used for marking a walking track at a fixed position;
a step of decolorizing, namely converting the image of the background scene, the image of the mark point and the image sequence into a gray image;
a segmentation step, taking the image of the background scene as a reference frame, and segmenting partial images from the image sequence to form a foreground image sequence, wherein the partial images comprise the image of the robot; the segmentation step comprises a construction step, a binarization step and a cutting step; the construction step comprises the steps of carrying out difference value and absolute value calculation processing on gray values of each frame of image in the image sequence after the decoloring processing and corresponding pixel points on the reference frame to construct a difference value frame sequence; the binarization step comprises the step of carrying out binarization processing on the difference frame sequence based on a preset threshold value, and respectively representing the robot and the background scene by black and white; the step of cutting comprises the steps of cutting a foreground region sequence from an image sequence subjected to color removal processing by utilizing a rectangular boundary based on coordinate data of a robot color region, wherein the rectangular boundary completely contains the robot color region, and the robot color region is a color region representing the robot;
matching, namely matching each marking point from the foreground image sequence by taking the image of the marking point as a template and acquiring coordinate data of the marking point; the marking points comprise joint marking points fixedly arranged at the positions of all joints of a walking mechanism of the robot, and the matching step comprises a pre-matching step and a re-matching step;
the pre-matching step comprises traversing a foreground area of the foreground area sequence by taking the template as a reference, calculating the negative correlation degree R (x, y) between a local area taking a pixel point with a coordinate (x, y) as a center in the foreground area and the template, and acquiring a pixel point cluster to form a pre-selected mark point cluster for representing that the local area has a mark point according to the condition that the negative correlation degree is smaller than a preset threshold value as the reference;
the step of re-matching comprises representing the coordinates of the mark points in the local area by the pixel point with the minimum negative correlation degree in one pre-selected mark point cluster;
the calculation formula of the negative correlation degree R (x, y) is as follows:
R(x, y) = Σ_{x′,y′} [T(x′, y′) - I(x + x′, y + y′)]² / √( Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)² )
wherein, T (x ', y') is a gray value of a pixel point with coordinates (x ', y') in the template, coordinates of the pixel point on the template in a coordinate system constructed with a central point thereof as an origin, I (x + x ', y + y') is a gray value of a pixel point with coordinates (x + x ', y + y') in the foreground region, and coordinates of the pixel point on the foreground region are coordinates of the pixel point in the image sequence;
and a calculating step, calculating the walking posture of the robot in the walking process according to the coordinate data of each mark point.
6. The system of claim 5, wherein:
the mark point comprises a circular center part and an annular part surrounding the circular center part, wherein one surface of the circular center part and the annular part is white, and the other surface of the circular center part and the annular part is black.
CN201811221729.6A 2017-12-21 2017-12-21 Method and system for acquiring walking posture of robot Active CN109523551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811221729.6A CN109523551B (en) 2017-12-21 2017-12-21 Method and system for acquiring walking posture of robot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711394246.1A CN107967687B (en) 2017-12-21 2017-12-21 A kind of method and system obtaining object walking posture
CN201811221729.6A CN109523551B (en) 2017-12-21 2017-12-21 Method and system for acquiring walking posture of robot

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201711394246.1A Division CN107967687B (en) 2017-12-21 2017-12-21 A kind of method and system obtaining object walking posture

Publications (2)

Publication Number Publication Date
CN109523551A CN109523551A (en) 2019-03-26
CN109523551B true CN109523551B (en) 2020-11-10

Family

ID=61995662

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201711394246.1A Active CN107967687B (en) 2017-12-21 2017-12-21 A kind of method and system obtaining object walking posture
CN201811221729.6A Active CN109523551B (en) 2017-12-21 2017-12-21 Method and system for acquiring walking posture of robot

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201711394246.1A Active CN107967687B (en) 2017-12-21 2017-12-21 A kind of method and system obtaining object walking posture

Country Status (1)

Country Link
CN (2) CN107967687B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110405777B (en) * 2018-04-28 2023-03-31 深圳果力智能科技有限公司 Interactive control method of robot
CN109102527B (en) * 2018-08-01 2022-07-08 甘肃未来云数据科技有限公司 Method and device for acquiring video action based on identification point
CN110334595B (en) * 2019-05-29 2021-11-19 北京迈格威科技有限公司 Dog tail movement identification method, device, system and storage medium
CN110969747A (en) * 2019-12-11 2020-04-07 盛视科技股份有限公司 Anti-following access control system and anti-following method
CN111491089A (en) * 2020-04-24 2020-08-04 厦门大学 Method for monitoring target object on background object by using image acquisition device
CN113916445A (en) * 2021-09-08 2022-01-11 广州航新航空科技股份有限公司 Method, system and device for measuring rotor wing common taper and storage medium
CN115530813B (en) * 2022-10-20 2024-05-10 吉林大学 Marking system for testing and analyzing multi-joint three-dimensional movement of upper body of human body
CN115880783B (en) * 2023-02-21 2023-05-05 山东泰合心康医疗科技有限公司 Child motion gesture recognition method for pediatric healthcare

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100475140C (en) * 2006-11-29 2009-04-08 华中科技大学 Computer aided gait analysis method based on monocular video
CN101853333B (en) * 2010-05-26 2012-11-07 中国科学院遥感应用研究所 Method for picking marks in medical robot navigation positioning images
US9975248B2 (en) * 2012-03-21 2018-05-22 Kenneth Dean Stephens, Jr. Replicating the remote environment of a proxy robot
CN103577795A (en) * 2012-07-30 2014-02-12 索尼公司 Detection equipment and method, detector generation equipment and method and monitoring system
CN103198492A (en) * 2013-03-28 2013-07-10 沈阳航空航天大学 Human motion capture method
CN103473539B (en) * 2013-09-23 2015-07-15 智慧城市系统服务(中国)有限公司 Gait recognition method and device
CN104408718B (en) * 2014-11-24 2017-06-30 中国科学院自动化研究所 A kind of gait data processing method based on Binocular vision photogrammetry
CN105468896B (en) * 2015-11-13 2017-06-16 上海逸动医学科技有限公司 Joint motions detecting system and method
TW201727418A (en) * 2016-01-26 2017-08-01 鴻海精密工業股份有限公司 Analysis of the ground texture combined data recording system and method for analysing
CN106373140B (en) * 2016-08-31 2020-03-27 杭州沃朴物联科技有限公司 Transparent and semitransparent liquid impurity detection method based on monocular vision
CN107273611B (en) * 2017-06-14 2020-11-10 北京航空航天大学 Gait planning method of lower limb rehabilitation robot based on lower limb walking characteristics

Also Published As

Publication number Publication date
CN107967687A (en) 2018-04-27
CN107967687B (en) 2018-11-23
CN109523551A (en) 2019-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant