WO2023103285A1 - Positioning method and apparatus, robot, and computer-readable storage medium - Google Patents

Positioning method and apparatus, robot, and computer-readable storage medium

Info

Publication number
WO2023103285A1
WO2023103285A1 (PCT/CN2022/093037, CN2022093037W)
Authority
WO
WIPO (PCT)
Prior art keywords
human body
point cloud
robot
cloud information
information
Prior art date
Application number
PCT/CN2022/093037
Other languages
English (en)
Chinese (zh)
Inventor
夹磊
吴泽霖
奉飞飞
唐剑
Original Assignee
美的集团(上海)有限公司
美的集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 美的集团(上海)有限公司 and 美的集团股份有限公司
Publication of WO2023103285A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques

Definitions

  • the present application relates to the technical field of computer vision and motion control, and specifically relates to a positioning method, device, robot and computer-readable storage medium.
  • the position of the human body is mainly determined by analyzing the images taken by the camera installed on the robot body.
  • the existing methods therefore have the problem that the position information determined for the human body is inaccurate.
  • the embodiments of the present application provide a positioning method, a device, a robot, and a computer-readable storage medium, which can solve the problem in the prior art that it is difficult to accurately position a human body through a camera.
  • an embodiment of the present application provides a positioning method applied to a robot, where the robot is provided with a camera and a radar; the positioning method includes:
  • an embodiment of the present application provides a positioning device applied to a robot, where the robot is provided with a camera and a radar; the positioning device includes:
  • An image acquisition module configured to acquire at least one image captured by the camera
  • a first human body feature detection module configured to detect whether the image includes a first human body feature
  • the first position information determination module is used to calculate the first position information of the human body according to the first human body feature if the first human body feature exists;
  • a point cloud information acquisition module configured to acquire at least one point cloud information obtained by the radar scan
  • a target point cloud information determination module configured to determine target point cloud information in the point cloud information according to the first position information
  • the second human body feature detection module is used to detect whether the target point cloud information includes a second human body feature
  • the second position information determining module is configured to determine the second position information of the human body according to the target point cloud information including the second human body features if the second human body features exist.
  • the embodiment of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the method described in the first aspect is implemented when the processor executes the computer program.
  • the embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the method described in the first aspect is implemented.
  • the embodiment of the present application provides a computer program product, which enables the robot to execute the method described in the above first aspect when the computer program product runs on the robot.
  • the embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and operable on the processor, when the processor executes the computer program Implement the method as described in the first aspect.
  • whether the image includes a human body is determined by analyzing whether there is a first human body feature in the image; if the image includes a human body, the first position information of the human body is calculated according to the first human body feature (the first position information obtained at this stage is not accurate enough). Then, according to the obtained first position information, the target point cloud information is determined from the point cloud information obtained by radar scanning. If the target point cloud information includes the second human body feature, the second position information is determined according to the target point cloud information that includes the second human body feature (the second position information obtained at this stage is accurate).
  • Since judging whether an image contains the first human body feature is highly reliable, and the position information determined from point cloud information is also highly accurate, the image is recognized first and, once the presence of a human body is recognized, the position information is then determined from the point cloud information. This improves the accuracy of the determined position information (i.e., the second position information) of the human body.
  • FIG. 1 is a schematic flowchart of a positioning method provided by an embodiment of the present application
  • Fig. 2 is a schematic structural diagram of a positioning device provided by an embodiment of the present application.
  • Fig. 3 is a schematic structural diagram of a robot provided by an embodiment of the present application.
  • references to "one embodiment" or "some embodiments" or the like in the specification of the present application mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the embodiment of the present application provides a positioning method.
  • the human body is identified through images, and then the position information of the human body is determined through radar data. Since the image data can accurately reflect whether an object is a human body, and accurate position information can be obtained from radar data, the accuracy of the obtained position information of the human body can be improved through the above method.
  • Fig. 1 shows a schematic flowchart of a positioning method provided by an embodiment of the present application. The positioning method is applied to a robot provided with a camera and a radar, and includes the following steps:
  • Step S11 acquiring at least one image captured by the aforementioned camera.
  • a camera (a monocular camera) is installed on the upper part of the robot; the monocular camera photographs the area in front of the robot to obtain at least one image (or video frame) of the objects in front of the robot, and sends the resulting image(s) (or video frame(s)) to the robot.
  • Step S12 detecting whether the above-mentioned image includes the first human body feature.
  • the first human body feature is a feature that can represent an object as a human body.
  • the first human body feature can be a feature corresponding to a human face (in this case, the first human body feature is called a face feature), or it can be a feature corresponding to the shoulders of the human body (in this case, the first human body feature is called a shoulder feature).
  • the first human body feature is a face feature
  • whether there is a face feature in the image can be detected through a face detection algorithm.
  • a first neural network model is trained in advance using samples containing the first human body feature to obtain a second neural network model. After the image is captured by the camera, the image can be input into the second neural network model, which identifies whether the first human body feature is present in the image.
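  • The patent does not name a specific detector; as a minimal, illustrative sketch of step S12, a pretrained face detector such as OpenCV's bundled Haar cascade can report whether a face (one possible first human body feature) appears in the image:

```python
import cv2

# Sketch of step S12 using OpenCV's bundled Haar cascade as the face detector.
# The embodiment only requires *some* model that reports whether the first human
# body feature (here, a face) is present; this particular detector is an assumption.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_first_human_feature(image_bgr):
    """Return a list of face detection frames (x, y, w, h); empty if no face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(faces)
```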
  • Step S13 if the above-mentioned first human body feature exists, then calculate the first position information of the human body according to the above-mentioned first human body feature.
  • the size of the first human body feature in the image is calculated, combined with the camera imaging principle, the actual size of the first human body feature is estimated, and finally the distance between the human body and the robot corresponding to the first human body feature is estimated.
  • the distance between the human body and the robot is calculated according to the imaging principle of the camera. Since the position information of the robot in the world coordinate system is known, the position information of the human body in the world coordinate system can be calculated according to the calculated distance between the human body and the robot to obtain the above-mentioned first position information.
  • the distance estimated according to the above method needs to be combined with the tilt angle to estimate the distance between the human body and the robot. In practice the human body is usually not on the main optical axis of the camera, so estimating the distance between the human body and the robot in combination with the aforementioned tilt angle can improve the accuracy of the estimated distance.
  • the size of the detection frame used to detect the face feature is determined and used as the size of the face feature in the image. If the human body is on the main optical axis of the camera, the distance estimated from the camera imaging principle is used directly as the estimated distance between the human body and the robot. If the human body is not on the main optical axis of the camera, the distance estimated from the camera imaging principle must additionally be corrected by the tilt angle between the main optical axis and the line connecting the optical center to the human body (this tilt angle can be determined from the detection frame and the center of the image), and the corrected value is used as the estimated distance between the human body and the robot.
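  • A minimal sketch of this estimation under the pinhole camera model is shown below; the focal length in pixels and the assumed real face width are illustrative values, not figures from the patent:

```python
import math

def estimate_first_position_distance(bbox_w_px, bbox_center_px, image_center_px,
                                     focal_px=600.0, real_face_width_m=0.16):
    """Rough monocular range estimate for step S13 (illustrative sketch).

    bbox_w_px       -- width of the face detection frame in pixels
    bbox_center_px  -- (u, v) centre of the detection frame
    image_center_px -- (u0, v0) principal point, taken as the image centre
    """
    # Pinhole imaging principle: depth along the main optical axis from the
    # apparent size of the face in the image.
    depth = focal_px * real_face_width_m / bbox_w_px

    # If the face is not on the main optical axis, correct by the tilt angle
    # between the optical axis and the ray through the detection frame centre
    # (the angle is derived from the detection frame and the image centre).
    du = bbox_center_px[0] - image_center_px[0]
    dv = bbox_center_px[1] - image_center_px[1]
    tilt = math.atan2(math.hypot(du, dv), focal_px)
    return depth / math.cos(tilt)
```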
  • Step S14 acquiring at least one piece of point cloud information obtained by radar scanning.
  • At least one point cloud information is obtained by scanning the front of the robot through radar.
  • the radar is mounted on the lower half of the robot, such as on the chassis of the robot.
  • the radar scans the front of the robot with a preset scanning frequency, obtains point cloud information corresponding to objects in front of the robot, and sends the obtained point cloud information to the robot.
  • Step S15 determining target point cloud information in the above point cloud information according to the above first position information.
  • since information such as the position information and distance information of an object can be obtained from the point cloud information, after the first position information is predicted from the image, the point cloud information corresponding to the first position information can be searched for among the point cloud information obtained by radar scanning, and the found point cloud information is used as the above-mentioned target point cloud information.
  • Step S16 detecting whether the above target point cloud information includes the second human body feature.
  • the second human body feature is a feature that can characterize the object as a human body, and the second human body feature is a feature different from the first human body feature.
  • it is then detected whether the target point cloud information includes the second human body feature.
  • Step S17 if the second human body feature exists, determine the second position information of the human body according to the target point cloud information including the second human body feature.
  • accurate position information can be determined from the point cloud information (the first position information calculated from the image captured by the monocular camera is not accurate enough). Therefore, if it is judged that the target point cloud information includes the second human body feature, the corresponding accurate position information can be determined from the target point cloud information that includes the second human body feature. It should be pointed out that any target point cloud information that does not include the second human body feature needs to be discarded; that is, the position information is determined only from the target point cloud information that includes the second human body feature, so as to improve the accuracy of the obtained position information.
  • whether the image includes a human body is determined by analyzing whether there is a first human body feature in the image; if the image includes a human body, the first position information of the human body is calculated according to the first human body feature (the first position information obtained at this stage is not accurate enough). Then, according to the obtained first position information, the target point cloud information is determined from the point cloud information obtained by radar scanning. If the target point cloud information includes the second human body feature, the second position information is determined according to the target point cloud information that includes the second human body feature (the second position information obtained at this stage is accurate).
  • Since judging whether an image contains the first human body feature is highly reliable, and the position information determined from point cloud information is also highly accurate, the image is recognized first and, once the presence of a human body is recognized, the position information is then determined from the point cloud information. This improves the accuracy of the determined position information (i.e., the second position information) of the human body.
  • the positioning method also includes:
  • when the radar frequency and the image frame rate are different (that is, when the same number of images and pieces of point cloud information are acquired, the acquisition times of the images and of the point cloud information differ), the lower of the two rates is used for alignment. For example, assuming that the radar frequency is 10 Hz and the image frame rate is 20 Hz, the images are downsampled so that the frame rate of the sampled images matches the radar frequency, that is, 10 Hz.
  • in this way, each piece of point cloud data can find its corresponding image, which ensures the integrity of the data as much as possible and helps improve the accuracy of the subsequently obtained second position information.
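  • A minimal sketch of this alignment step, assuming each image and each piece of point cloud information carries a timestamp (the data layout below is an assumption, not from the patent):

```python
def align_to_lower_rate(images, clouds):
    """Pair each sample of the lower-rate stream with the nearest-in-time sample
    of the higher-rate stream, so that every point cloud keeps a matching image
    (e.g. 20 Hz images downsampled against 10 Hz radar). Both arguments are
    lists of (timestamp, data) tuples; returns (cloud, image) pairs.
    """
    low, high = (clouds, images) if len(clouds) <= len(images) else (images, clouds)
    pairs = []
    for t_low, d_low in low:
        # nearest-in-time sample from the higher-rate stream
        _, d_high = min(high, key=lambda s: abs(s[0] - t_low))
        pairs.append((d_low, d_high) if low is clouds else (d_high, d_low))
    return pairs
```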
  • in some embodiments, before step S11, the positioning method also includes:
  • the coordinate system of the camera is mapped to the coordinate system of the robot body, and the point cloud coordinate system of the radar is mapped to the coordinate system of the robot body.
  • in this way, the coordinates of objects in images subsequently captured by the camera and the coordinates of objects in the point cloud information can both be expressed in the coordinate system of the robot body, thereby ensuring the accuracy of the second position information obtained subsequently.
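  • A minimal sketch of this spatial alignment, assuming the extrinsic transforms from the camera frame and the radar frame to the robot body frame are known from calibration (the identity placeholders below are assumptions):

```python
import numpy as np

# Extrinsic transforms obtained from calibration of the specific robot;
# identity matrices are placeholders, not real calibration values.
T_BODY_FROM_CAMERA = np.eye(4)   # camera frame -> robot body frame
T_BODY_FROM_RADAR = np.eye(4)    # radar frame  -> robot body frame

def to_body_frame(points_xyz, T_body_from_sensor):
    """Transform an (N, 3) array of sensor-frame points into the body frame."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous
    return (T_body_from_sensor @ pts_h.T).T[:, :3]
```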
  • the above-mentioned first human body features include human face features and/or shoulder features. And/or, the above-mentioned second human body features include leg features.
  • since only whether there is a face feature in the image and/or whether there is a shoulder feature in the image is detected, and the face feature and/or shoulder feature are only part of the human body, detecting only the face feature and/or the shoulder feature can greatly improve the detection speed compared with detecting all the features of the human body.
  • since the camera only needs to capture the features of the upper body of the human body and does not need to detect and identify the whole body, the camera can be set on the head of the robot, which reduces the required opening angle of the camera's field of view (FOV) and the installation requirements.
  • the positioning method in the embodiment of the present application further includes:
  • the orientation of the human face is determined according to the above-mentioned facial features.
  • the position information of the two eyes can be extracted from the face features, and the orientation of the face can be determined according to the relationship between the two eye points and the central axis of the face. For example, if both eye points are on the left side of the central axis of the face, the orientation of the face is determined to be left.
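  • A minimal sketch of this orientation rule, assuming the eye points and the face detection frame are given in pixel coordinates (the names and the "front" fallback are assumptions):

```python
def face_orientation(left_eye, right_eye, face_box):
    """Compare the two eye points with the vertical central axis of the face
    detection frame. left_eye/right_eye are (u, v) points; face_box is (x, y, w, h).
    """
    axis_u = face_box[0] + face_box[2] / 2.0   # u-coordinate of the central axis
    if left_eye[0] < axis_u and right_eye[0] < axis_u:
        return "left"    # both eye points left of the axis -> face turned left
    if left_eye[0] > axis_u and right_eye[0] > axis_u:
        return "right"   # both eye points right of the axis -> face turned right
    return "front"
```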
  • step S15 includes:
  • the target point cloud information is determined in the point cloud information according to the first position information and the orientation of the human face.
  • the point cloud information whose position information matches the first position information and whose angle information matches the orientation of the face is searched for among the point cloud information obtained by radar scanning, and the found point cloud information is used as the target point cloud information. Since the position information and angle information of an object can be obtained from the point cloud information, combining the position information and the angle information to find the target point cloud information can improve the accuracy of the found target point cloud information.
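  • A minimal sketch of this selection, assuming each piece of point cloud information carries a body-frame position and a bearing; the tolerances and the data layout are illustrative assumptions:

```python
import numpy as np

def select_target_point_cloud(clouds, first_position_xy, face_yaw,
                              pos_tol_m=0.5, yaw_tol_rad=0.35):
    """Keep point cloud entries whose position matches the image-based first
    position and whose bearing matches the face orientation (step S15).
    `clouds` is a list of dicts with 'xy' and 'yaw' keys (assumed layout).
    Angle wrap-around is ignored for brevity.
    """
    target = []
    for c in clouds:
        close = np.linalg.norm(np.asarray(c["xy"]) - np.asarray(first_position_xy)) < pos_tol_m
        aligned = abs(c["yaw"] - face_yaw) < yaw_tol_rad
        if close and aligned:
            target.append(c)
    return target
```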
  • when the second human body feature includes a leg feature, the above step S16 includes:
  • A1 Perform adjacent point clustering on the above target point cloud information to obtain a clustering result.
  • adjacent point clustering is used to group target point cloud information with close positions into one category.
  • the attribute information of the object includes the shape of the object and/or the width of the object and the like.
  • if the attribute information does not meet the predetermined requirements, it is determined that the above-mentioned leg feature does not exist in the above-mentioned target point cloud information.
  • the radar can be set on the chassis of the robot (such as the position of the feet), which is beneficial to obtain more comprehensive point cloud information on the ground.
  • the attribute information of the object corresponding to each target point cloud information of the same clustering result is analyzed, for example, the shape of the object is analyzed, and whether there is a leg feature in the point cloud information is judged based on the shape of the object.
  • the attribute information includes shape and width
  • the above step A3 includes:
  • if the shape of the object included in the above-mentioned target point cloud information is two cylinders, and the width of each of the two cylinders meets the preset width requirement, it is determined that the attribute information meets the predetermined requirements, that is, it is determined that the leg feature exists in the above-mentioned target point cloud information.
  • the shape of the object is determined according to the target point cloud information, and it is then judged whether the shape consists of two cylinders. If there are two cylinders, and the width of each of the two cylinders meets the preset width requirement (for example, the width of each cylinder is within a preset width range), it is determined that the leg feature exists in the target point cloud information, which ensures the accuracy of the judgment result. That is, according to the above determination method, it can be accurately determined whether the leg feature exists in the target point cloud information.
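  • A minimal sketch of steps A1 to A3 on a 2D laser scan, where the cluster gap and the leg width range are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def leg_feature_present(points_xy, cluster_gap=0.08, min_width=0.05, max_width=0.25):
    """A1: cluster adjacent scan points; A2/A3: check the attribute information
    (cluster width) and report a leg feature if two leg-width clusters are found.
    `points_xy` is an ordered list of (x, y) scan points in the body frame.
    """
    # A1: adjacent-point clustering by the Euclidean gap between consecutive points.
    clusters, current = [], [points_xy[0]]
    for prev, cur in zip(points_xy, points_xy[1:]):
        if np.linalg.norm(np.asarray(cur) - np.asarray(prev)) < cluster_gap:
            current.append(cur)
        else:
            clusters.append(current)
            current = [cur]
    clusters.append(current)

    # A2/A3: use the extent of each cluster as a crude cylinder-width test.
    widths = [np.linalg.norm(np.asarray(c).max(axis=0) - np.asarray(c).min(axis=0))
              for c in clusters]
    leg_like = [w for w in widths if min_width <= w <= max_width]
    return len(leg_like) == 2   # two leg-width clusters -> leg feature present
```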
  • in some embodiments, after step S17, the positioning method further includes: following the above-mentioned human body based on the above-mentioned second position information.
  • since the second position information is determined according to the point cloud information, that is, the second position information is the accurate position information of the human body, the robot can follow the human body more accurately when following it according to the second position information.
  • the above-mentioned following the above-mentioned human body based on the above-mentioned second position information includes:
  • the distance between the plane where the second human body feature is located and the plane where the robot is located can be determined first, and the determined distance will be used as the distance between the human body and the robot.
  • the distance between the legs and the robot may be taken as the distance between the human body and the robot.
  • if the robot is too close to the human body, it may give people a sense of oppression, and if the robot is too far away from the human body, the interaction may not be smooth. Therefore, the robot can follow the human body according to its distance from the human body. For example, when the distance between the robot and the human body is equal to the preset distance threshold, the robot stops following the human body, and when the distance is greater than the preset distance threshold, it continues to follow. In this way, the robot avoids being sometimes too far from and sometimes too close to the human body, and precise tracking of the human body is realized.
  • above-mentioned B2 comprises:
  • the target speed is positively correlated with the distance: when the distance between the robot and the human body is large, the target speed is also large, and when the distance is small, the target speed is also small. Therefore, when the above target speed is adopted to follow the human body, the robot shortens the distance to the human body as quickly as possible when the distance is large, and can still close the distance while moving slowly when the distance is small. Moving slowly when the distance is small also avoids disturbing the person, thus greatly improving the experience of the person being followed.
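  • A minimal sketch of steps B1 and B2; the distance threshold, gain, and speed cap are assumed values chosen only for illustration:

```python
def follow_speed(distance_m, stop_distance_m=1.0, gain=0.5, max_speed_mps=1.2):
    """Stop when at or inside the preset distance threshold; otherwise move toward
    the person at a target speed positively correlated with the remaining distance.
    """
    if distance_m <= stop_distance_m:
        return 0.0                                   # keep distance, stop following
    return min(max_speed_mps, gain * (distance_m - stop_distance_m))
```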
  • FIG. 2 shows a schematic structural diagram of a positioning device provided in the embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the positioning device 2 is applied to a robot, and the robot is provided with a camera and a radar.
  • the positioning device 2 includes: an image acquisition module 21, a first human body feature detection module 22, a first position information determination module 23, a point cloud information acquisition module 24, a target point cloud information determination module 25, a second human body feature detection module 26, and a second position information determining module 27, wherein:
  • the image acquisition module 21 is configured to acquire at least one image captured by the aforementioned camera.
  • a camera (a monocular camera) is installed on the upper part of the robot; the monocular camera photographs the area in front of the robot to obtain images (or video frames) of the objects in front of the robot, and the obtained images (or video frames) are sent to the robot.
  • the first human body feature detection module 22 is configured to detect whether the above-mentioned image includes the first human body feature.
  • the first human body feature is a feature that can represent an object as a human body.
  • the first human body feature can be a feature corresponding to a human face (in this case, the first human body feature is called a face feature), or it can be a feature corresponding to the shoulders of the human body.
  • the first position information determining module 23 is configured to calculate the first position information of the human body according to the first human body feature if the first human body feature exists.
  • the size of the first human body feature in the image is calculated, combined with the camera imaging principle, the actual size of the first human body feature is estimated, and finally the distance between the human body and the robot corresponding to the first human body feature is estimated.
  • the distance between the human body and the robot is calculated according to the imaging principle of the camera. Since the position information of the robot in the world coordinate system is known, the position information of the human body in the world coordinate system can be calculated according to the calculated distance between the human body and the robot to obtain the above-mentioned first position information.
  • the distance estimated according to the above method needs to be combined with the tilt angle to estimate the distance between the human body and the robot. In practice the human body is usually not on the main optical axis of the camera, so estimating the distance between the human body and the robot in combination with the aforementioned tilt angle can improve the accuracy of the estimated distance.
  • the point cloud information acquisition module 24 is configured to acquire at least one piece of point cloud information obtained by radar scanning.
  • the radar is mounted on the lower half of the robot, such as on the chassis of the robot.
  • the radar scans the front of the robot with a preset scanning frequency, obtains point cloud information corresponding to objects in front of the robot, and sends the obtained point cloud information to the robot.
  • the target point cloud information determination module 25 is configured to determine the target point cloud information in the above point cloud information according to the above first position information.
  • the second human body feature detection module 26 is configured to detect whether the above target point cloud information includes the second human body feature.
  • the second position information determination module 27 is configured to determine the second position information of the human body according to the target point cloud information including the second human body features if the second human body features exist.
  • accurate position information can be determined from the point cloud information (the first position information calculated from the image captured by the monocular camera is not accurate enough). Therefore, if it is judged that the target point cloud information includes the second human body feature, the corresponding accurate position information can be determined from the target point cloud information that includes the second human body feature. It should be pointed out that any target point cloud information that does not include the second human body feature needs to be discarded; that is, the position information is determined only from the target point cloud information that includes the second human body feature, so as to improve the accuracy of the obtained position information.
  • whether the image includes a human body is determined by analyzing whether there is a first human body feature in the image; if the image includes a human body, the first position information of the human body is calculated according to the first human body feature (the first position information obtained at this stage is not accurate enough). Then, according to the obtained first position information, the target point cloud information is determined from the point cloud information obtained by radar scanning. If the target point cloud information includes the second human body feature, the second position information is determined according to the target point cloud information that includes the second human body feature (the second position information obtained at this stage is accurate).
  • Since judging whether an image contains the first human body feature is highly reliable, and the position information determined from point cloud information is also highly accurate, the image is recognized first and, once the presence of a human body is recognized, the position information is then determined from the point cloud information. This improves the accuracy of the determined position information (i.e., the second position information) of the human body.
  • the positioning device 2 also includes:
  • a time alignment module configured to align the acquisition time of the at least one image with the acquisition time of the at least one point cloud information.
  • when the radar frequency and the image frame rate are different (that is, when the same number of images and pieces of point cloud information are acquired, the acquisition times of the images and of the point cloud information differ), the lower of the two rates is used for alignment. For example, assuming that the radar frequency is 10 Hz and the image frame rate is 20 Hz, the images are downsampled so that the frame rate of the sampled images matches the radar frequency, that is, 10 Hz.
  • in this way, each piece of point cloud data can find its corresponding image, which ensures the integrity of the data as much as possible and helps improve the accuracy of the subsequently obtained second position information.
  • the positioning device 2 also includes:
  • the spatial alignment module is used to simultaneously map the view coordinate system of the camera and the point cloud coordinate system of the radar to the coordinate system of the robot body.
  • the coordinate system of the camera is mapped to the coordinate system of the robot body, and the point cloud coordinate system of the radar is mapped to the coordinate system of the robot body.
  • in this way, the coordinates of objects in images subsequently acquired by the camera and the coordinates of objects in the point cloud information can both be expressed in the coordinate system of the robot body, thereby ensuring the accuracy of the second position information obtained subsequently.
  • the first human body features include face features and/or shoulder features; and/or, the second human body features include leg features.
  • the first human body feature can be a face feature and/or a shoulder feature, that is, the camera only needs to capture the features of the upper body of the human body and does not need to detect and identify the whole body. Therefore, the camera can be set on the head of the robot, so that the opening angle of the camera's field of view (FOV) and the installation requirements can be reduced.
  • when the first human body feature includes a face feature, the above positioning device 2 further includes:
  • the position information of the two eyes can be extracted from the face features, and the orientation of the face can be determined according to the relationship between the two eye points and the central axis of the face. For example, if both eye points are on the left side of the central axis of the face, the orientation of the face is determined to be left.
  • target point cloud information determination module 25 is specifically used for:
  • the point cloud information whose position information matches the first position information and whose angle information matches the orientation of the face is searched for among the point cloud information obtained by radar scanning, and the found point cloud information is used as the target point cloud information. Since the position information and angle information of an object can be obtained from the point cloud information, combining the position information and the angle information to find the target point cloud information can improve the accuracy of the found target point cloud information.
  • the second human body feature detection module 26 includes:
  • the clustering result determining unit is configured to perform clustering of adjacent points on the target point cloud information to obtain a clustering result.
  • the attribute information detection unit is configured to detect the attribute information of the object included in the target point cloud information according to the clustering result.
  • the leg feature existence determining unit is configured to determine that the leg feature exists in the target point cloud information if the attribute information meets a predetermined requirement.
  • if the attribute information does not meet the predetermined requirements, it is determined that there is no leg feature in the target point cloud information.
  • the attribute information includes shape and width
  • the above-mentioned leg feature existence determination unit is specifically used for:
  • if the shape of the object included in the above-mentioned target point cloud information is two cylinders, and the width of each of the two cylinders meets the preset width requirement, it is determined that the attribute information meets the predetermined requirements, that is, it is determined that the leg feature exists in the above-mentioned target point cloud information.
  • a following module configured to follow the human body based on the second position information.
  • since the second position information is determined according to the point cloud information, that is, the second position information is the accurate position information of the human body, the robot can follow the human body more accurately when following it according to the second position information.
  • the above-mentioned follow-up module specifically includes:
  • a distance determining unit configured to determine the distance between the above-mentioned robot and the above-mentioned human body.
  • the distance between the plane where the second human body feature is located and the plane where the robot is located can be determined first, and the determined distance will be used as the distance between the human body and the robot.
  • the distance between the legs and the robot may be taken as the distance between the human body and the robot.
  • a tracking unit configured to follow the human body based on the distance.
  • if the robot is too close to the human body, it may give people a sense of oppression, and if the robot is too far away from the human body, the interaction may not be smooth. Therefore, the robot can follow the human body according to its distance from the human body. For example, when the distance between the robot and the human body is equal to the preset distance threshold, the robot stops following the human body, and when the distance is greater than the preset distance threshold, it continues to follow. In this way, the robot avoids being sometimes too far from and sometimes too close to the human body, and precise tracking of the human body is realized.
  • the above-mentioned tracking unit includes:
  • the continuing tracking unit is configured to follow the human body at a target speed if the distance is greater than the distance threshold, and the target speed is positively correlated with the distance. By following the human body, the distance to the above-mentioned human body is shortened.
  • the stop tracking unit is configured to stop following the human body if the distance is not greater than the distance threshold. Since it stops following the human body, it is possible to maintain a distance from the aforementioned human body.
  • the target speed is positively correlated with the distance: when the distance between the robot and the human body is large, the target speed is also large, and when the distance is small, the target speed is also small. Therefore, when the above target speed is adopted to follow the human body, the robot shortens the distance to the human body as quickly as possible when the distance is large, and can still close the distance while moving slowly when the distance is small. Moving slowly when the distance is small also avoids disturbing the person, thus greatly improving the experience of the person being followed.
  • Fig. 3 is a schematic structural diagram of a robot provided by an embodiment of the present application.
  • the robot 3 of this embodiment includes: at least one processor 30 (only one processor is shown in Fig. 3), a memory 31, and a computer program 32 stored in the memory 31 and capable of running on the at least one processor 30; when the processor 30 executes the computer program 32, the steps in any of the above method embodiments are implemented.
  • the robot 3 can be a humanoid robot (such as a robot with a head, hands and feet), or a robot with only a head and hands.
  • the robot may include, but is not limited to, a processor 30 and a memory 31.
  • Fig. 3 is only an example of the robot 3 and does not constitute a limitation on the robot 3; it may include more or fewer components than shown in the figure, or combine certain components, or have different components.
  • for example, the robot may also include input and output devices, network access devices, and the like.
  • the so-called processor 30 can be a central processing unit (Central Processing Unit, CPU), and the processor 30 can also be other general processors, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), Field Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the storage 31 may be an internal storage unit of the robot 3 in some embodiments, such as a hard disk or a memory of the robot 3 .
  • the memory 31 may also be an external storage device of the robot 3 in other embodiments, such as a plug-in hard disk equipped on the robot 3, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, Flash Card (Flash Card), etc.
  • the storage 31 may also include both an internal storage unit of the robot 3 and an external storage device.
  • the memory 31 is used to store operating systems, application programs, bootloader programs (BootLoader), data, and other programs, such as program codes of computer programs.
  • the memory 31 can also be used to temporarily store data that has been output or will be output.
  • the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • An embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein, when the processor executes the computer program, it can realize Steps in each of the above method embodiments.
  • if the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments of the present application can be completed by instructing related hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • when the computer program is executed by a processor, the steps in the above-mentioned various method embodiments can be realized.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may at least include: any entity or device capable of carrying computer program codes to a photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media.
  • examples of such media include a USB flash drive (U disk), a removable hard disk, a magnetic disk, or an optical disc.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
  • the disclosed device/network device and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The present application is applicable to the technical field of computer vision and motion control, and provides a positioning method and apparatus, a robot, and a computer-readable storage medium. The positioning method comprises: acquiring at least one image captured by a camera; detecting whether the image includes a first human body feature; if the first human body feature is present, calculating first position information of a human body according to the first human body feature; acquiring at least one piece of point cloud information obtained by radar scanning; determining target point cloud information from the point cloud information according to the first position information; detecting whether the target point cloud information includes a second human body feature; and, if the second human body feature is present, determining second position information of the human body according to the target point cloud information that includes the second human body feature. By means of this method, the accuracy of the obtained position information of a human body can be improved.
PCT/CN2022/093037 2021-12-07 2022-05-16 Procédé et appareil de positionnement, et robot et support d'enregistrement lisible par ordinateur WO2023103285A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111488890.1 2021-12-07
CN202111488890.1A CN114155557B (zh) 2021-12-07 2021-12-07 定位方法、装置、机器人及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023103285A1 true WO2023103285A1 (fr) 2023-06-15

Family

ID=80453180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/093037 WO2023103285A1 (fr) 2021-12-07 2022-05-16 Procédé et appareil de positionnement, et robot et support d'enregistrement lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN114155557B (fr)
WO (1) WO2023103285A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253735B (zh) * 2021-06-15 2021-10-08 同方威视技术股份有限公司 跟随目标的方法、装置、机器人及计算机可读存储介质
CN114155557B (zh) * 2021-12-07 2022-12-23 美的集团(上海)有限公司 定位方法、装置、机器人及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022205A (zh) * 2016-11-04 2018-05-11 杭州海康威视数字技术股份有限公司 目标跟踪方法、装置及录播系统
CN109949347A (zh) * 2019-03-15 2019-06-28 百度在线网络技术(北京)有限公司 人体跟踪方法、装置、系统、电子设备和存储介质
CN112528781A (zh) * 2020-11-30 2021-03-19 广州文远知行科技有限公司 一种障碍物检测方法、装置、设备和计算机可读存储介质
CN114155557A (zh) * 2021-12-07 2022-03-08 美的集团(上海)有限公司 定位方法、装置、机器人及计算机可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109484935B (zh) * 2017-09-13 2020-11-20 杭州海康威视数字技术股份有限公司 一种电梯轿厢监控方法、装置及系统
CN108229332B (zh) * 2017-12-08 2020-02-14 华为技术有限公司 骨骼姿态确定方法、装置及计算机可读存储介质
CN109357633B (zh) * 2018-09-30 2022-09-30 先临三维科技股份有限公司 三维扫描方法、装置、存储介质和处理器
CN110728196B (zh) * 2019-09-18 2024-04-05 平安科技(深圳)有限公司 一种人脸识别的方法、装置及终端设备
CN111199198B (zh) * 2019-12-27 2023-08-04 深圳市优必选科技股份有限公司 一种图像目标定位方法、图像目标定位装置及移动机器人
CN112348777B (zh) * 2020-10-19 2024-01-12 深圳市优必选科技股份有限公司 人体目标的检测方法、装置及终端设备
CN112651380A (zh) * 2021-01-13 2021-04-13 深圳市一心视觉科技有限公司 人脸识别方法、人脸识别装置、终端设备及存储介质
CN112907657A (zh) * 2021-03-05 2021-06-04 科益展智能装备有限公司 一种机器人重定位方法、装置、设备及存储介质
CN112883920A (zh) * 2021-03-22 2021-06-01 清华大学 基于点云深度学习的三维人脸扫描特征点检测方法和装置
CN113160328A (zh) * 2021-04-09 2021-07-23 上海智蕙林医疗科技有限公司 一种外参标定方法、系统、机器人和存储介质
CN113366491B (zh) * 2021-04-26 2022-07-22 华为技术有限公司 眼球追踪方法、装置及存储介质
CN113450334B (zh) * 2021-06-30 2022-08-05 珠海云洲智能科技股份有限公司 一种水上目标检测方法、电子设备及存储介质
CN113723369B (zh) * 2021-11-01 2022-02-08 北京创米智汇物联科技有限公司 控制方法、装置、电子设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022205A (zh) * 2016-11-04 2018-05-11 杭州海康威视数字技术股份有限公司 目标跟踪方法、装置及录播系统
CN109949347A (zh) * 2019-03-15 2019-06-28 百度在线网络技术(北京)有限公司 人体跟踪方法、装置、系统、电子设备和存储介质
CN112528781A (zh) * 2020-11-30 2021-03-19 广州文远知行科技有限公司 一种障碍物检测方法、装置、设备和计算机可读存储介质
CN114155557A (zh) * 2021-12-07 2022-03-08 美的集团(上海)有限公司 定位方法、装置、机器人及计算机可读存储介质

Also Published As

Publication number Publication date
CN114155557A (zh) 2022-03-08
CN114155557B (zh) 2022-12-23

Similar Documents

Publication Publication Date Title
US10810734B2 (en) Computer aided rebar measurement and inspection system
US10417503B2 (en) Image processing apparatus and image processing method
US11715227B2 (en) Information processing apparatus, control method, and program
WO2017181899A1 (fr) Procédé et dispositif de vérification faciale in vivo
CN112528831B (zh) 多目标姿态估计方法、多目标姿态估计装置及终端设备
US20200116498A1 (en) Visual assisted distance-based slam method and mobile robot using the same
WO2023103285A1 (fr) Procédé et appareil de positionnement, et robot et support d'enregistrement lisible par ordinateur
Williams et al. An image-to-map loop closing method for monocular SLAM
US20120076428A1 (en) Information processing device, information processing method, and program
US10991124B2 (en) Determination apparatus and method for gaze angle
CN111178161A (zh) 一种基于fcos的车辆追踪方法及系统
WO2022217794A1 (fr) Procédé de positionnement de robot mobile dans un environnement dynamique
CN114267041A (zh) 场景中对象的识别方法及装置
WO2023015938A1 (fr) Procédé et appareil de détection de point tridimensionnel, dispositif électronique et support de stockage
CN116563376A (zh) 基于深度学习的lidar-imu紧耦合语义slam方法及相关装置
JP2008026999A (ja) 障害物検出システム、及び障害物検出方法
CN106406507B (zh) 图像处理方法以及电子设备
US11887331B2 (en) Information processing apparatus, control method, and non-transitory storage medium
WO2022174603A1 (fr) Procédé de prédiction de pose, appareil de prédiction de pose, et robot
US11763601B2 (en) Techniques for detecting a three-dimensional face in facial recognition
WO2022205841A1 (fr) Procédé et appareil de navigation de robot, et dispositif terminal et support de stockage lisible par ordinateur
WO2021093744A1 (fr) Procédé et appareil de mesure du diamètre d'une pupille et support de stockage lisible par ordinateur
KR20180111150A (ko) 영상에서 전경을 추출하는 장치 및 그 방법
JP2013029996A (ja) 画像処理装置
CN112749664A (zh) 一种手势识别方法、装置、设备、系统及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22902723

Country of ref document: EP

Kind code of ref document: A1