CN111199198B - Image target positioning method, image target positioning device and mobile robot

Image target positioning method, image target positioning device and mobile robot

Info

Publication number
CN111199198B
CN111199198B (granted from application CN201911376182.1A)
Authority
CN
China
Prior art keywords
target
image
color
limb
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911376182.1A
Other languages
Chinese (zh)
Other versions
CN111199198A
Inventor
罗志平
程骏
李清凤
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youbixuan Intelligent Robot Co ltd
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201911376182.1A
Publication of CN111199198A
Application granted
Publication of CN111199198B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes

Abstract

The application is applicable to the technical field of image processing, and provides an image target positioning method, an image target positioning device, a mobile robot and a computer readable storage medium, wherein the method comprises the following steps: acquiring an image of a person to be acquired and a shooting distance of the image, wherein the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image; determining a target distance interval to which the shooting distance belongs from more than two preset distance intervals; and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. By the method, the time and labor cost for collecting the data set can be saved.

Description

Image target positioning method, image target positioning device and mobile robot
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image target positioning method, an image target positioning device, a mobile robot and a computer readable storage medium.
Background
Gesture recognition plays an increasingly important role in the field of robotics. By recognizing the gesture, the robot can execute corresponding instruction operation to realize contactless interaction. Gesture recognition of robots is mainly implemented based on deep learning techniques, and therefore, a large number of data sets are required for training a deep learning model in the development of gesture recognition robots.
However, existing data set acquisition methods cannot have a mobile robot capture images of the collected person at different distances in real time while it moves, nor automatically label the hand regions in those images. As a result, existing data set acquisition is time-consuming and labor-intensive.
Disclosure of Invention
In view of this, the present application provides an image object positioning method, an image object positioning device, a mobile robot, and a computer readable storage medium, which can save time and labor cost for data set acquisition.
In a first aspect, the present application provides an image object positioning method, applied to a mobile robot, including:
acquiring an image of a person to be acquired and a shooting distance of the image, wherein the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image;
Determining a target distance interval to which the shooting distance belongs from more than two preset distance intervals;
and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In a second aspect, the present application provides an image object positioning apparatus comprising:
an image acquisition unit configured to acquire an image of a person to be acquired and a shooting distance of the image, wherein the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image;
a section determining unit configured to determine, from among the preset two or more distance sections, a target distance section to which the shooting distance belongs;
and the limb part positioning unit is used for positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In a third aspect, the present application provides a mobile robot comprising a camera, a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method as provided in the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as provided in the first aspect.
In a fifth aspect, the present application provides a computer program product for causing a mobile robot to carry out the method provided in the first aspect above, when the computer program product is run on the mobile robot.
From the above, in the present application, firstly, an image of a person to be collected and a shooting distance of the image are obtained, where the shooting distance is a distance between the person to be collected and the mobile robot when the mobile robot shoots the image; then, determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the images of the acquired person under different distances can be acquired in real time through the mobile robot in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for acquiring the data set are greatly saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of an image target positioning method provided in an embodiment of the present application;
FIG. 2 is an exemplary diagram of a third positioning method provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image target positioning device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1 shows a flowchart of an image object positioning method according to an embodiment of the present application, where the image object positioning method is applied to a mobile robot, and is described in detail as follows:
Step 101, acquiring an image of a person to be acquired and a shooting distance of the image;
in this embodiment of the present application, the photographing distance is a distance between the person to be collected and the mobile robot when the mobile robot photographs the image. The depth camera is installed on the mobile robot, and the color map, the depth map and the infrared map of the collected person can be obtained under different shooting distances by shooting the collected person through the depth camera in real time in the moving process of the mobile robot. And in a certain shooting distance range, the depth camera can also track the human skeleton of the collected person to obtain a plurality of joints of the collected person, such as a head joint, a hand joint, a wrist joint and the like. For example, the depth camera may be a Kinect v2, where Kinect v2 may acquire a depth map with a resolution of 512x424, an infrared map with a resolution of 512x424, and a color map with a resolution of 1920x 1080, and Kinect v2 may automatically align pixels of the depth map, the infrared map, and the color map. The depth camera may be PMD CARMERA, softKinect, and associative Phab, etc., and is not limited thereto. The mobile robot may measure a photographing distance by a distance sensor while photographing each frame of image of the person to be collected by the depth camera.
Step 102, determining a target distance interval to which the shooting distance belongs from more than two preset distance intervals;
in this embodiment of the present application, at least two distance intervals are divided in advance according to the performance of the depth camera. The depth camera is described as Kinect v2, and when the shooting distance of Kinect v2 is within the range of 0.8-4.5 meters, the joint points of the collected person can be accurately tracked; when the shooting distance of Kinect v2 is less than 0.5 meter, the joint points of the acquired person cannot be tracked, and the shot depth map and infrared map are not accurate enough; kinect v2 cannot track the joint points of the acquired person when the shooting distance is 0.5-0.8 meter or more than 4.5 meters, but the shot depth map and infrared map are accurate. Four distance intervals are preset for the three conditions: 0.5 m or less, 0.5 to 0.8 m, 0.8 to 4.5 m and more than 4.5 m. And comparing the shooting distance with each distance section, and determining the distance section to which the shooting distance belongs as a target distance section.
Step 103, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In the embodiment of the application, for the same depth camera the quality of the images of the collected person differs at different shooting distances, so images of different quality require different limb part positioning methods to locate the limb part (such as the hand) of the collected person in the image. Therefore, in this embodiment a limb part positioning method is configured for each distance interval. Continuing with the Kinect v2 example of step 102, the limb part positioning method corresponding to the 0.5-0.8 meter interval and to the interval greater than 4.5 meters is referred to as the first positioning method; the limb part positioning method corresponding to the 0.8-4.5 meter interval is the second positioning method; and the limb part positioning method corresponding to the interval of less than 0.5 meters is the third positioning method. Positioning the limb part of the collected person in the image with the limb part positioning method corresponding to the target distance interval means determining the pixel coordinates of the limb part in the image. In addition, during mobile shooting the robot may capture an image that does not contain the collected person, in which case positioning of the limb part fails and the image may be discarded. As a possible implementation, after the pixel coordinates of the limb part in the image are obtained, a minimum bounding rectangular frame of the limb part may be drawn in the image, and the coordinates of the upper left corner of this frame and its side lengths may be saved.
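For instance, once the limb pixel coordinates are known, the minimum bounding rectangle and its top-left corner can be computed as in the sketch below; this is a hypothetical helper, not part of the patent, and it assumes the coordinates are given as an (N, 2) array of (u, v) pixels:

```python
import numpy as np

def limb_bounding_box(limb_pixels: np.ndarray):
    """limb_pixels: (N, 2) array of (u, v) pixel coordinates of the located limb part.
    Returns the top-left corner and the side lengths of the minimal axis-aligned box."""
    u_min, v_min = limb_pixels.min(axis=0)
    u_max, v_max = limb_pixels.max(axis=0)
    top_left = (int(u_min), int(v_min))
    side_lengths = (int(u_max - u_min), int(v_max - v_min))
    return top_left, side_lengths
```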
Optionally, the limb portion is any one of a hand, a shoulder, an arm, a leg, or a foot.
Specifically, the limb portion may be any one of a hand, a shoulder, an arm, a leg, or a foot of the body of the person to be collected.
Optionally, if the limb part positioning method corresponding to the target distance interval is the first positioning method, positioning the limb part of the acquired person in the image in step 103 includes the following steps:
a1, obtaining a center point of a point cloud of a first target and a point cloud of a second target according to the color map and the depth map;
a2, constructing a covariance matrix according to the center point and the point cloud of the second target;
a3, carrying out principal component analysis on the covariance matrix to obtain at least one principal component vector;
a4, determining the maximum principal component vector as the maximum principal component vector in the at least one principal component vector;
a5, under the indication of the maximum principal component vector, obtaining the point cloud of any limb in the limb part of the acquired person;
a6, obtaining pixel coordinates of the limbs in the color map according to the acquired point cloud of any limb, and completing positioning of the limbs.
Wherein the second target is divided into at least two parts by the first target, the color of the second target is skin color, the color of the first target is a first color, the first color is different from the colors of other areas and is not black, and the other areas are the areas in the color map other than the first target. The image includes a color map and a depth map, and the pixels of the color map and the depth map are aligned.
Specifically, the second target includes the limb part of the person to be collected, and the first target divides the second target into two or more parts: the limb part, and the other parts apart from the limb part. For example, the second target may be the skin-colored portion of the collected person in the color map, and the first target may be a wrist band worn on the collected person's wrist. Since the pixels of the color map and the depth map are aligned, the center point of the point cloud of the first target and the point cloud of the second target can be obtained from the color map and the depth map. The center point of the first target's point cloud and the points of the second target's point cloud are three-dimensional coordinates, so a covariance matrix can be constructed from the center point and the second-target points located within a certain range around it. Principal component analysis of this covariance matrix yields a number of principal component vectors, and the largest of these is taken as the maximum principal component vector. Guided by the maximum principal component vector, the point cloud of any limb (such as a hand) in the limb part of the collected person can be obtained. According to preset camera parameters (such as the focal length of the depth camera), the obtained point cloud of the limb can be converted into the pixel coordinates and depth values of the limb in the depth map; because the pixels of the depth map and the color map are aligned, the pixel coordinates of the limb in the color map are equal to its pixel coordinates in the depth map.
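A minimal sketch of steps A2-A4 is given below, assuming the first target's center point and the second target's point cloud are NumPy arrays of 3-D coordinates; the neighborhood radius is an assumed value, not taken from the patent:

```python
import numpy as np

def maximum_principal_vector(center: np.ndarray, second_cloud: np.ndarray,
                             radius: float = 0.3) -> np.ndarray:
    """center: (3,) center point of the first target's point cloud.
    second_cloud: (N, 3) point cloud of the second target.
    Returns the principal component vector with the largest eigenvalue."""
    # Second-target points within `radius` metres of the center point (assumed range)
    nearby = second_cloud[np.linalg.norm(second_cloud - center, axis=1) < radius]
    diffs = nearby - center
    cov = diffs.T @ diffs / len(diffs)        # 3x3 covariance about the center point
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
```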
Optionally, the step A1 specifically includes:
b1, obtaining pixel coordinates of a first target in the color map through color detection;
b2, acquiring a depth value of the first target from the depth map according to the pixel coordinates of the first target;
b3, obtaining pixel coordinates of a second target in the color chart through a skin color model;
b4, acquiring a depth value of the second target from the depth map according to the pixel coordinates of the second target;
b5, obtaining a center point of the point cloud of the first target according to the pixel coordinates of the first target and the depth value of the first target;
and B6, obtaining the point cloud of the second target according to the pixel coordinates of the second target and the depth value of the second target.
Specifically, since the first color is different from the color of the other region, the pixel having the first color in the color map is extracted through color detection (such as OpenCV-based color detection), so as to obtain the pixel coordinate of the first object in the color map. Since the pixels of the color map and the depth map are aligned, the pixel coordinates of the first object in the depth map are equal to the pixel coordinates of the first object in the color map. According to the pixel coordinates of the first object in the depth map, a depth value of the first object can be obtained from the depth map. Based on the pixel coordinates of the first object in the depth map and the depth value of the first object, a center point of the point cloud of the first object may be calculated.
Similarly, detecting pixels in the color map having skin tones through the skin tone model may obtain pixel coordinates of the second target in the color map. The pixel coordinates of the second object in the depth map are equal to the pixel coordinates of the second object in the color map. According to the pixel coordinates of the second object in the depth map, a depth value of the second object can be obtained from the depth map. And calculating a point cloud of the second target based on the pixel coordinates of the second target in the depth map and the depth value of the second target.
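The following sketch illustrates steps B1-B6, assuming OpenCV color detection for the first color and a pinhole camera model; the HSV range and the camera intrinsics (fx, fy, cx, cy) are placeholders the caller must supply, not values from the patent:

```python
import cv2
import numpy as np

def first_target_center(color_bgr, depth_mm, lower_hsv, upper_hsv, fx, fy, cx, cy):
    """Detect the first target (e.g. a colored wrist band) in the color map, read its
    depth values from the aligned depth map, and return the center of its point cloud."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))  # first-color pixels
    v, u = np.nonzero(mask)                             # pixel coordinates of the first target
    z = depth_mm[v, u].astype(np.float32) / 1000.0      # depth values in metres
    valid = z > 0                                       # discard invalid depth readings
    u, v, z = u[valid], v[valid], z[valid]
    # Pinhole back-projection: (pixel, depth) -> 3-D point in the camera frame
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    cloud = np.stack([x, y, z], axis=1)
    return cloud.mean(axis=0)                           # center point of the point cloud
```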
Optionally, the step B3 specifically includes:
c1, converting the color map into an HSV color space image and a YCbCr color space image respectively;
c2, segmenting a second target from the HSV color space image through a skin color model to obtain a first skin color map;
c3, segmenting a second target from the YCbCr color space image through a skin color model to obtain a second skin color map;
c4, carrying out logical OR operation on pixel values of the first skin color image and the second skin color image pixel by pixel to obtain a final skin color image;
and C5, obtaining the pixel coordinates of the second target in the color map according to the final skin color map.
Specifically, the above color map is converted from an original color space (such as BGR color space) to an HSV color space to obtain the above HSV color space image. Meanwhile, the color image is converted from an original color space to a YCbCr color space to obtain the YCbCr color space image. And extracting pixels with colors being skin colors from the HSV color space image through a skin color model so as to realize the segmentation of the second target and obtain the first skin color map. Similarly, pixels with colors being skin colors are extracted from the YCbCr color space image through a skin color model, so that the second target is segmented, and the second skin color map is obtained. And carrying out logical OR operation on pixel values of the pixels in the first skin color chart and the pixels in the second skin color chart, and obtaining the final skin color chart. For example, if a pixel value of a certain pixel of the first skin tone map is 1 and a pixel value of a pixel of the second skin tone map corresponding to the certain pixel is 0, the value obtained by logical or operation is 1. And the pixel coordinate of the second target in the final skin color chart is the pixel coordinate of the second target in the color chart.
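A minimal sketch of steps C1-C5 using OpenCV follows; the skin-color threshold ranges are commonly used illustrative values, since the patent does not specify its skin color model (note that OpenCV stores the second color space in Y, Cr, Cb order):

```python
import cv2
import numpy as np

def final_skin_mask(color_bgr: np.ndarray) -> np.ndarray:
    """Segment skin-colored pixels in HSV and YCbCr separately, then OR the two masks."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    mask_hsv = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))        # assumed range
    mask_ycbcr = cv2.inRange(ycrcb, np.array((0, 135, 85)), np.array((255, 180, 135)))  # assumed range
    return cv2.bitwise_or(mask_hsv, mask_ycbcr)          # pixel-wise logical OR

# Pixel coordinates of the second target in the color map:
# rows, cols = np.nonzero(final_skin_mask(color_bgr))
```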
Optionally, the step A5 specifically includes:
And searching the point cloud of the second target along the direction of the maximum principal component vector by taking the central point as a starting point to obtain the point cloud of any limb in the limb part of the acquired person.
Specifically, the center point of the point cloud of the first target is used as a starting point, and the point cloud separated by the point cloud of the first target in the point cloud of the second target is searched along the direction of the maximum principal component vector, so that the point cloud of any limb in the limb part of the person to be collected can be obtained.
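One way to realize this search is sketched below, under the assumption that the first target's extent along the direction is a few centimetres; the offset threshold is an assumed value:

```python
import numpy as np

def limb_cloud_along_direction(center, max_pc_vector, second_cloud, min_offset=0.03):
    """Starting from the center point, keep the second-target points whose offset
    projects onto the maximum principal component vector by more than `min_offset`
    metres, i.e. the points separated from the rest by the first target."""
    direction = max_pc_vector / np.linalg.norm(max_pc_vector)
    projections = (second_cloud - center) @ direction
    return second_cloud[projections > min_offset]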
Optionally, if the limb part positioning method corresponding to the target distance interval is the second positioning method, positioning the limb part of the acquired person in the image in step 103 includes the following steps:
d1, obtaining the point cloud of the acquired person based on the depth map;
d2, acquiring a limb part joint point and a wrist joint point in the point cloud of the acquired person;
d3, determining a three-dimensional space range based on preset conditions according to the limb position articulation points and the wrist articulation points;
d4, determining the point cloud in the three-dimensional space range as the point cloud of the limb part of the person to be acquired in the point cloud of the person to be acquired;
And D5, obtaining pixel coordinates of the limb part in the color map according to the point cloud of the limb part, and completing positioning of the limb part.
Specifically, the limb part is a hand, and the image includes a color map and a depth map whose pixels are aligned. Because the depth camera has a human skeleton tracking function, the point cloud of the collected person can be calculated from the pixel coordinates and depth values in the depth map, and the hand joint point and wrist joint point detected by the depth camera can be acquired. A three-dimensional space range is then determined from the hand joint point and the wrist joint point based on preset conditions, where the preset conditions are determined from prior knowledge of the human body structure. The points within this three-dimensional range are determined as the point cloud of the limb part (the hand) of the collected person. According to the preset camera parameters of the depth camera, the hand's point cloud is converted into the hand's pixel coordinates and depth values in the depth map. Since the pixels of the color map and the depth map are aligned, the pixel coordinates of the hand in the color map are the same as those in the depth map.
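A sketch of steps D1-D5 follows, assuming the hand and wrist joint points are available as 3-D coordinates from the depth camera's skeleton tracking; the box size derived from the wrist-to-hand distance is an illustrative stand-in for the patent's unspecified preset conditions:

```python
import numpy as np

def hand_point_cloud(person_cloud, hand_joint, wrist_joint, scale=1.5):
    """person_cloud: (N, 3) point cloud of the collected person.
    Keep the points inside an axis-aligned cube centered on the hand joint whose
    half-width is `scale` times the wrist-to-hand distance (assumed prior knowledge)."""
    half_width = scale * np.linalg.norm(hand_joint - wrist_joint)
    inside = np.all(np.abs(person_cloud - hand_joint) < half_width, axis=1)
    return person_cloud[inside]
```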
Optionally, if the limb portion positioning method corresponding to the target distance interval is a third positioning method, the method for positioning the limb portion of the person to be acquired in the image in step 103 includes the following steps:
e1, binarizing the depth map to obtain a depth binary map;
e2, binarizing the infrared image to obtain an infrared binary image;
e3, carrying out logical AND operation on the depth binary image and the infrared binary image to obtain a contour image of the limb part of the acquired person;
and E4, obtaining pixel coordinates of the limb part in the color map according to the contour map, and finishing positioning of the limb part.
Specifically, the image includes a color map, a depth map, and an infrared map, with pixels of the color map, the depth map, and the infrared map aligned. As shown in fig. 2, the depth map is binarized to obtain a depth binary map, and the infrared map is binarized to obtain an infrared binary map. And performing logical AND operation on the depth binary image and the infrared binary image to obtain a contour image of the limb part of the acquired person. The pixel coordinates of the limb part in the outline map are the pixel coordinates of the limb part in the color map.
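The third method can be sketched as below; the binarization thresholds are assumed values for a subject closer than 0.5 m, since the patent does not give them:

```python
import cv2
import numpy as np

def close_range_limb_mask(depth_mm, ir, depth_max_mm=500, ir_threshold=128):
    """Binarize the aligned depth and infrared maps and AND them to obtain the
    contour mask of the close-range limb part."""
    # Foreground: valid depth closer than depth_max_mm millimetres (assumed threshold)
    depth_bin = ((depth_mm > 0) & (depth_mm < depth_max_mm)).astype(np.uint8) * 255
    # Foreground: strongly reflecting near pixels in the infrared image (assumed threshold)
    ir_bin = (ir > ir_threshold).astype(np.uint8) * 255
    return cv2.bitwise_and(depth_bin, ir_bin)   # pixel-wise logical AND

# The limb's pixel coordinates in the color map (aligned with the depth/IR maps):
# rows, cols = np.nonzero(close_range_limb_mask(depth_mm, ir))
```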
From the above, in the present application, firstly, an image of a person to be collected and a shooting distance of the image are obtained, where the shooting distance is a distance between the person to be collected and the mobile robot when the mobile robot shoots the image; then, determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the images of the acquired person under different distances can be acquired in real time through the mobile robot in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for acquiring the data set are greatly saved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 3 shows a schematic structural diagram of an image object positioning apparatus provided in an embodiment of the present application, which is applicable to a mobile robot, and only a portion related to the embodiment of the present application is shown for convenience of explanation.
The image object positioning apparatus 300 includes:
an image acquisition unit 301 configured to acquire an image of a person to be acquired and a shooting distance of the image, where the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image;
a section determining unit 302, configured to determine, from among the preset two or more distance sections, a target distance section to which the shooting distance belongs;
and a limb portion positioning unit 303, configured to position the limb portion of the person to be acquired in the image by using a limb portion positioning method corresponding to the target distance zone.
Optionally, the image includes a color map and a depth map, pixels of the color map and pixels of the depth map are aligned, and a limb portion positioning method corresponding to the target distance interval is a first positioning method, and the limb portion positioning unit 303 further includes:
a target point cloud obtaining subunit, configured to obtain, according to the color map and the depth map, a center point of a point cloud of a first target and a point cloud of a second target, where the second target is divided into at least two parts by the first target, a color of the second target is skin color, the first target is a first color, the first color is different from a color of another region, and the other region is a region in the color map other than the first target;
A matrix construction subunit, configured to construct a covariance matrix according to the center point and the point cloud of the second target;
a principal component analysis subunit, configured to perform principal component analysis on the covariance matrix to obtain at least one principal component vector;
a maximum principal component determination subunit configured to determine, from among the at least one principal component vector, a maximum principal component vector as a maximum principal component vector;
a limb point cloud obtaining subunit, configured to obtain a point cloud of any limb in the limb part of the person to be collected under the instruction of the maximum principal component vector;
and the first coordinate positioning subunit is used for obtaining pixel coordinates of the limb in the color map according to the acquired point cloud of any limb and completing the positioning of the limb.
Optionally, the target point cloud acquiring subunit further includes:
a color detection subunit, configured to obtain, by color detection, a pixel coordinate of a first target in the color map;
a first depth obtaining subunit, configured to obtain a depth value of the first target from the depth map according to a pixel coordinate of the first target;
the skin color detection subunit is used for obtaining the pixel coordinates of the second target in the color chart through a skin color model;
A second depth obtaining subunit, configured to obtain a depth value of the second target from the depth map according to a pixel coordinate of the second target;
a central point obtaining subunit, configured to obtain a central point of a point cloud of the first target according to the pixel coordinate of the first target and the depth value of the first target;
and the second target point cloud subunit is used for obtaining the point cloud of the second target according to the pixel coordinates of the second target and the depth value of the second target.
Optionally, the skin tone detection subunit further includes:
a color space conversion subunit, configured to convert the color map into an HSV color space image and a YCbCr color space image, respectively;
the HSV segmentation subunit is used for segmenting a second target from the HSV color space image through a skin color model to obtain a first skin color image;
the YCbCr segmentation subunit is used for segmenting a second target from the YCbCr color space image through a skin color model to obtain a second skin color image;
a logic OR subunit, configured to perform a logical OR operation on pixel values of the first skin color map and the second skin color map on a pixel-by-pixel basis, to obtain a final skin color map;
and the final skin color subunit is used for obtaining the pixel coordinates of the second target in the color chart according to the final skin color chart.
Optionally, the limb point cloud obtaining subunit further includes:
and the point cloud searching subunit is used for searching the point cloud of any limb in the limb part of the acquired person from the point cloud of the second target along the direction of the maximum principal component vector by taking the central point as a starting point.
Optionally, the image includes a color chart and a depth chart, pixels of the color chart and the depth chart are aligned, the limb portion is a hand, and a limb portion positioning method corresponding to the target distance interval is a second positioning method, and the limb portion positioning unit 303 further includes:
the acquired person point cloud acquisition subunit is used for acquiring the point cloud of the acquired person based on the depth map;
the joint acquisition subunit is used for acquiring limb part joint points and wrist joint points in the point cloud of the acquired person;
the three-dimensional space determining subunit is used for determining a three-dimensional space range based on preset conditions according to the limb position articulation point and the wrist articulation point;
a limb portion site cloud determining subunit configured to determine, as a point cloud of a limb portion of the person to be collected, a point cloud within the three-dimensional space range from among the point clouds of the person to be collected;
And the second coordinate positioning subunit is used for obtaining pixel coordinates of the limb part in the color map according to the point cloud of the limb part to finish positioning the limb part.
Optionally, the image includes a color chart, a depth chart, and an infrared chart, pixels of the color chart, the depth chart, and the infrared chart are aligned, and if the limb portion positioning method corresponding to the target distance interval is a third positioning method, the limb portion positioning unit 303 further includes:
a depth binarization subunit, configured to binarize the depth map to obtain a depth binary map;
the infrared binarization subunit is used for binarizing the infrared image to obtain an infrared binary image;
the logic AND subunit is used for carrying out logic AND operation on the depth binary image and the infrared binary image to obtain a contour image of the limb part of the acquired person;
and the third coordinate positioning subunit is used for obtaining the pixel coordinates of the limb part in the color map according to the contour map and completing the positioning of the limb part.
From the above, in the present application, firstly, an image of a person to be collected and a shooting distance of the image are obtained, where the shooting distance is a distance between the person to be collected and the mobile robot when the mobile robot shoots the image; then, determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the images of the acquired person under different distances can be acquired in real time through the mobile robot in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for acquiring the data set are greatly saved.
Fig. 4 is a schematic structural diagram of a mobile robot according to an embodiment of the present application. As shown in Fig. 4, the mobile robot 4 of this embodiment includes: at least one processor 40 (only one is shown in Fig. 4), a memory 41, a computer program 42 stored in the memory 41 and executable on the at least one processor 40, and a camera 43; when executing the computer program 42, the processor 40 performs the following steps:
acquiring an image of a person to be acquired and a shooting distance of the image, wherein the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image;
determining a target distance interval to which the shooting distance belongs from more than two preset distance intervals;
and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
Assuming the foregoing is a first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the image includes a color map and a depth map, pixels of the color map and the depth map are aligned, the limb part positioning method corresponding to the target distance interval is the first positioning method, and the method for positioning the limb part of the person to be acquired in the image includes the following steps:
Obtaining a center point of a point cloud of a first target and a point cloud of a second target according to the color map and the depth map, wherein the second target is divided into at least two parts by the first target, the color of the second target is skin color, the first target is a first color, the first color is different from the colors of other areas, and the other areas are areas except the first target in the color map;
constructing a covariance matrix according to the center point and the point cloud of the second target;
performing principal component analysis on the covariance matrix to obtain at least one principal component vector;
determining the largest principal component vector among the at least one principal component vector as the maximum principal component vector;
obtaining the point cloud of any limb in the limb part of the acquired person under the indication of the maximum principal component vector;
and obtaining pixel coordinates of the limbs in the color map according to the acquired point cloud of any limb, and completing positioning of the limbs.
In a third possible implementation manner provided by the second possible implementation manner, the obtaining, according to the color map and the depth map, a center point of a point cloud of a first target and a point cloud of a second target includes:
Obtaining pixel coordinates of a first target in the color map through color detection;
acquiring a depth value of the first target from the depth map according to the pixel coordinates of the first target;
obtaining pixel coordinates of a second target in the color chart through a skin color model;
acquiring a depth value of the second target from the depth map according to the pixel coordinates of the second target;
obtaining a center point of a point cloud of the first target according to the pixel coordinates of the first target and the depth value of the first target;
and obtaining the point cloud of the second target according to the pixel coordinates of the second target and the depth value of the second target.
In a fourth possible implementation manner provided by the third possible implementation manner, the obtaining, by the skin color model, the pixel coordinates of the second object in the color map includes:
converting the color map into an HSV color space image and a YCbCr color space image respectively;
a second target is segmented from the HSV color space image through a skin color model to obtain a first skin color image;
a second target is segmented from the YCbCr color space image through a skin color model to obtain a second skin color map;
Performing logical OR operation on pixel values of the first skin color image and the second skin color image pixel by pixel to obtain a final skin color image;
and obtaining the pixel coordinates of the second target in the color map according to the final skin color map.
In a fifth possible embodiment provided by the second possible embodiment, the obtaining a point cloud of any limb in the limb portion of the person under acquisition under the instruction of the maximum principal component vector includes:
and searching the point cloud of the second target along the direction of the maximum principal component vector by taking the central point as a starting point to obtain the point cloud of any limb in the limb part of the acquired person.
In a sixth possible embodiment provided by the first possible embodiment, the image includes a color map and a depth map, pixels of the color map and the depth map are aligned, the limb portion is a hand, the limb portion positioning method corresponding to the target distance zone is a second positioning method, and the method for positioning the limb portion of the person to be collected in the image includes:
obtaining the point cloud of the acquired person based on the depth map;
Acquiring limb position articulation points and wrist articulation points in the point cloud of the acquired person;
determining a three-dimensional space range based on preset conditions according to the limb position articulation point and the wrist articulation point;
determining the point cloud in the three-dimensional space range from the point cloud of the acquired person as the point cloud of the limb part of the acquired person;
and obtaining pixel coordinates of the limb part in the color map according to the point cloud of the limb part, and completing the positioning of the limb part.
In a seventh possible embodiment provided by the first possible embodiment, the image includes a color map, a depth map, and an infrared map, pixels of the color map, the depth map, and the infrared map are aligned, and if the limb portion positioning method corresponding to the target distance zone is a third positioning method, the method for positioning the limb portion of the person to be collected in the image includes the following steps:
binarizing the depth map to obtain a depth binary map;
binarizing the infrared image to obtain an infrared binary image;
performing logical AND operation on the depth binary image and the infrared binary image to obtain a contour image of the limb part of the person to be acquired;
And obtaining pixel coordinates of the limb part in the color map according to the contour map, and completing positioning of the limb part.
It will be appreciated by those skilled in the art that Fig. 4 is merely an example of the mobile robot 4 and does not constitute a limitation on the mobile robot 4; the mobile robot may include more or fewer components than shown, combine certain components, or include different components, such as input/output devices, network access devices, and the like.
The processor 40 may be a central processing unit (Central Processing Unit, CPU); the processor 40 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may in some embodiments be an internal storage unit of the mobile robot 4, such as a hard disk or a memory of the mobile robot 4. The memory 41 may also be an external storage device of the mobile robot 4 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the mobile robot 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the mobile robot 4. The memory 41 is used for storing an operating system, an application program, a boot loader (BootLoader), data, other programs, and the like, such as program codes of the computer programs. The above-described memory 41 may also be used to temporarily store data that has been output or is to be output.
From the above, in the present application, firstly, an image of a person to be collected and a shooting distance of the image are obtained, where the shooting distance is a distance between the person to be collected and the mobile robot when the mobile robot shoots the image; then, determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the images of the acquired person under different distances can be acquired in real time through the mobile robot in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for acquiring the data set are greatly saved.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program is executed by a processor to implement steps in each of the method embodiments described above.
Embodiments of the present application provide a computer program product enabling a mobile robot to carry out the steps of the various method embodiments described above when the computer program product is run on the mobile robot.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods of the above embodiments by means of a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the mobile robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunication signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of modules or elements described above is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. An image target positioning method, which is applied to a mobile robot, comprising:
acquiring an image of a person to be acquired and a shooting distance of the image, wherein the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image, the image comprises a color image and a depth image, and pixels of the color image and the depth image are aligned;
Determining a target distance interval to which the shooting distance belongs from more than two preset distance intervals;
positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval;
the limb part positioning method corresponding to the target distance interval is a first positioning method, and the method for positioning the limb part of the acquired person in the image comprises the following steps:
obtaining a center point of a point cloud of a first target and a point cloud of a second target according to the color map and the depth map, wherein the second target is divided into at least two parts by the first target, the color of the second target is skin color, the first target is a first color, the first color is different from the colors of other areas, and the other areas are areas except the first target in the color map;
constructing a covariance matrix according to the center point and the point cloud of the second target;
performing principal component analysis on the covariance matrix to obtain at least one principal component vector;
determining a maximum principal component vector among the at least one principal component vector as a maximum principal component vector;
Obtaining the point cloud of any limb in the limb part of the acquired person under the indication of the maximum principal component vector;
and obtaining pixel coordinates of the limbs in the color map according to the acquired point cloud of any limb, and completing positioning of the limbs.
2. The method for positioning an image object according to claim 1, wherein the obtaining a center point of a point cloud of a first object and a point cloud of a second object according to the color map and the depth map includes:
obtaining pixel coordinates of a first target in the color map through color detection;
acquiring a depth value of the first target from the depth map according to the pixel coordinates of the first target;
obtaining pixel coordinates of a second target in the color map through a skin color model;
acquiring a depth value of the second target from the depth map according to the pixel coordinates of the second target;
obtaining a center point of a point cloud of the first target according to the pixel coordinates of the first target and the depth value of the first target;
and obtaining the point cloud of the second target according to the pixel coordinates of the second target and the depth value of the second target.
3. The image target positioning method according to claim 2, wherein the obtaining the pixel coordinates of the second target in the color map through the skin color model comprises:
converting the color map into an HSV color space image and a YCbCr color space image respectively;
segmenting the second target from the HSV color space image through the skin color model to obtain a first skin color map;
segmenting the second target from the YCbCr color space image through the skin color model to obtain a second skin color map;
performing a logical OR operation, pixel by pixel, on pixel values of the first skin color map and the second skin color map to obtain a final skin color map;
and obtaining the pixel coordinates of the second target in the color map according to the final skin color map.
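A minimal sketch of claim 3's two-colour-space segmentation, assuming OpenCV is available; the threshold ranges below are common skin-colour heuristics, not thresholds disclosed in the patent:

```python
import cv2

def final_skin_color_map(color_map_bgr):
    hsv = cv2.cvtColor(color_map_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(color_map_bgr, cv2.COLOR_BGR2YCrCb)         # OpenCV orders the channels Y, Cr, Cb
    first_map = cv2.inRange(hsv, (0, 40, 60), (25, 150, 255))        # first skin color map from HSV
    second_map = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))   # second skin color map from YCbCr
    return cv2.bitwise_or(first_map, second_map)                     # pixel-by-pixel logical OR

# Pixel coordinates of the second target are the non-zero positions of the result,
# e.g. cv2.findNonZero(final_skin_color_map(color_map)).
```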
4. The image target positioning method according to claim 1, wherein the obtaining a point cloud of any limb in the limb part of the acquired person under the guidance of the maximum principal component vector comprises:
searching the point cloud of the second target along the direction of the maximum principal component vector, with the center point as a starting point, to obtain the point cloud of any limb in the limb part of the acquired person.
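Claim 4 leaves the search rule open; one plausible reading, offered only as an assumption, keeps the points of the second target whose offsets from the center point project positively onto the maximum principal component vector:

```python
import numpy as np

def search_along_direction(center, second_target_cloud, direction):
    offsets = second_target_cloud - center
    projections = offsets @ direction                # signed projection onto the search direction
    return second_target_cloud[projections > 0.0]    # candidate point cloud of the limb on that side
```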
5. The image target positioning method according to claim 1, wherein the limb part is a hand, the limb part positioning method corresponding to the target distance interval is a second positioning method, and positioning the limb part of the acquired person in the image further comprises the following steps:
obtaining a point cloud of the acquired person based on the depth map;
acquiring limb part joint points and wrist joint points in the point cloud of the acquired person;
determining a three-dimensional space range based on preset conditions according to the limb part joint points and the wrist joint points;
determining the points within the three-dimensional space range in the point cloud of the acquired person as the point cloud of the limb part of the acquired person;
and obtaining pixel coordinates of the limb part in the color map according to the point cloud of the limb part, to complete the positioning of the limb part.
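The "preset conditions" in claim 5 are not spelled out; a simple assumption, used only for illustration, is an axis-aligned box spanned by the limb part joint point and the wrist joint point with a small margin:

```python
import numpy as np

def limb_point_cloud(person_cloud, limb_joint, wrist_joint, margin=0.05):
    lower = np.minimum(limb_joint, wrist_joint) - margin   # box corners; the 0.05 m margin is an assumed value
    upper = np.maximum(limb_joint, wrist_joint) + margin
    inside = np.all((person_cloud >= lower) & (person_cloud <= upper), axis=1)
    return person_cloud[inside]                             # points within the three-dimensional space range
```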
6. The image target positioning method according to claim 1, wherein the image further comprises an infrared image, pixels of the color map, the depth map and the infrared image are aligned, the limb part positioning method corresponding to the target distance interval is a third positioning method, and positioning the limb part of the acquired person in the image further comprises the following steps:
binarizing the depth map to obtain a depth binary map;
binarizing the infrared image to obtain an infrared binary map;
performing a logical AND operation on the depth binary map and the infrared binary map to obtain a contour map of the limb part of the acquired person;
and obtaining pixel coordinates of the limb part in the color map according to the contour map, to complete the positioning of the limb part.
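A minimal sketch of claim 6's binarization and logical AND, assuming an aligned depth map in millimetres and an 8-bit infrared image; the thresholds are illustrative values only, not values from the patent:

```python
import cv2
import numpy as np

def limb_contour_map(depth_map, infrared_image, max_depth_mm=1200, ir_threshold=80):
    depth_binary = ((depth_map > 0) & (depth_map < max_depth_mm)).astype(np.uint8) * 255   # keep valid, near pixels
    infrared_binary = (infrared_image > ir_threshold).astype(np.uint8) * 255               # keep bright (reflective) pixels
    return cv2.bitwise_and(depth_binary, infrared_binary)                                  # contour map of the limb part
```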
7. The image target positioning method according to any one of claims 1 to 6, wherein the limb part is any one of a hand, a shoulder, an arm, a leg, or a foot.
8. An image target positioning apparatus, applied to a mobile robot, the apparatus comprising:
an image acquisition unit, configured to acquire an image of a person to be acquired and a shooting distance of the image, where the shooting distance is a distance between the person to be acquired and the mobile robot when the mobile robot shoots the image, and the image includes a color map and a depth map, and pixels of the color map and the depth map are aligned;
an interval determining unit, configured to determine, from more than two preset distance intervals, a target distance interval to which the shooting distance belongs;
a limb part positioning unit, configured to position the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval;
wherein the limb part positioning method corresponding to the target distance interval is a first positioning method, and the limb part positioning unit comprises:
a target point cloud obtaining subunit, configured to obtain, according to the color map and the depth map, a center point of a point cloud of a first target and a point cloud of a second target, wherein the second target is divided into at least two parts by the first target, the color of the second target is a skin color, the first target has a first color, the first color is different from the colors of other regions, and the other regions are regions of the color map other than the first target;
a matrix construction subunit, configured to construct a covariance matrix according to the center point and the point cloud of the second target;
a principal component analysis subunit, configured to perform principal component analysis on the covariance matrix to obtain at least one principal component vector;
a maximum principal component determining subunit, configured to determine, among the at least one principal component vector, the largest principal component vector as a maximum principal component vector;
a limb point cloud acquiring subunit, configured to obtain a point cloud of any limb in the limb part of the acquired person under the guidance of the maximum principal component vector;
and a first coordinate positioning subunit, configured to obtain pixel coordinates of the limb in the color map according to the acquired point cloud of the limb, to complete the positioning of the limb.
9. A mobile robot comprising a camera, a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
CN201911376182.1A 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot Active CN111199198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911376182.1A CN111199198B (en) 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911376182.1A CN111199198B (en) 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot

Publications (2)

Publication Number Publication Date
CN111199198A CN111199198A (en) 2020-05-26
CN111199198B true CN111199198B (en) 2023-08-04

Family

ID=70744391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911376182.1A Active CN111199198B (en) 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot

Country Status (1)

Country Link
CN (1) CN111199198B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085771B (en) * 2020-08-06 2023-12-05 深圳市优必选科技股份有限公司 Image registration method, device, terminal equipment and computer readable storage medium
CN113763333B (en) * 2021-08-18 2024-02-13 安徽帝晶光电科技有限公司 Sub-pixel positioning method, positioning system and storage medium
CN114155557B (en) * 2021-12-07 2022-12-23 美的集团(上海)有限公司 Positioning method, positioning device, robot and computer-readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447466A (en) * 2015-12-01 2016-03-30 深圳市图灵机器人有限公司 Kinect sensor based identity comprehensive identification method
WO2018120038A1 (en) * 2016-12-30 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for target detection
CN108063859A (en) * 2017-10-30 2018-05-22 努比亚技术有限公司 A kind of automatic camera control method, terminal and computer storage media
CN109961406A (en) * 2017-12-25 2019-07-02 深圳市优必选科技有限公司 A kind of method, apparatus and terminal device of image procossing
WO2019198446A1 (en) * 2018-04-10 2019-10-17 株式会社ニコン Detection device, detection method, information processing device, and information processing program
CN110324521A (en) * 2018-04-28 2019-10-11 Oppo广东移动通信有限公司 Control method, apparatus, electronic equipment and the storage medium of camera
CN109544606A (en) * 2018-11-02 2019-03-29 山东大学 Fast automatic method for registering and system based on multiple Kinect
CN110427917A (en) * 2019-08-14 2019-11-08 北京百度网讯科技有限公司 Method and apparatus for detecting key point

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Chaomei; Yang Maying. Mobile robot target recognition and localization based on information fusion. Computer Measurement & Control, 2016, No. 11, pp. 192-195. *

Also Published As

Publication number Publication date
CN111199198A (en) 2020-05-26

Similar Documents

Publication Publication Date Title
Liu et al. A detection and recognition system of pointer meters in substations based on computer vision
CN109978755B (en) Panoramic image synthesis method, device, equipment and storage medium
CN111199198B (en) Image target positioning method, image target positioning device and mobile robot
CN111815754B (en) Three-dimensional information determining method, three-dimensional information determining device and terminal equipment
WO2015043363A1 (en) Infrared image recognition device for ground moving object of aircraft
US11205276B2 (en) Object tracking method, object tracking device, electronic device and storage medium
CN113340334B (en) Sensor calibration method and device for unmanned vehicle and electronic equipment
CN110400338B (en) Depth map processing method and device and electronic equipment
CN112336342B (en) Hand key point detection method and device and terminal equipment
CN107527368B (en) Three-dimensional space attitude positioning method and device based on two-dimensional code
CN112308916A (en) Target pose identification method based on image target
CN105260750A (en) Dairy cow identification method and system
Chen et al. Automatic building extraction via adaptive iterative segmentation with LiDAR data and high spatial resolution imagery fusion
CN110926330A (en) Image processing apparatus, image processing method, and program
CN111191557B (en) Mark identification positioning method, mark identification positioning device and intelligent equipment
CN114155557B (en) Positioning method, positioning device, robot and computer-readable storage medium
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN109993715A (en) A kind of robot vision image preprocessing system and image processing method
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium
CN112418089A (en) Gesture recognition method and device and terminal
CN104200460A (en) Image registration method based on images characteristics and mutual information
KR101528757B1 (en) Texture-less object recognition using contour fragment-based features with bisected local regions
KR101202215B1 (en) Weighted-based target decision system for strapdown dual mode imaging seeker using location weighted or region growing moment modeling
WO2022205841A1 (en) Robot navigation method and apparatus, and terminal device and computer-readable storage medium
Karagiannis et al. Automated photogrammetric image matching with SIFT algorithm and Delaunay triangulation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231211

Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176

Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.