WO2023160301A1 - Object information determination method, mobile robot system, and electronic device - Google Patents

Object information determination method, mobile robot system, and electronic device

Info

Publication number
WO2023160301A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
mobile robot
image
frame
camera
Application number
PCT/CN2023/072179
Other languages
English (en)
French (fr)
Inventor
苏辉
蒋海青
Original Assignee
杭州萤石软件有限公司
Application filed by 杭州萤石软件有限公司
Publication of WO2023160301A1


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231: using optical position detecting means
    • G05D 1/0242: using non-visible light signals, e.g. IR or UV signals
    • G05D 1/0246: using a video camera in combination with image processing means
    • G05D 1/0251: extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D 1/0259: using magnetic or electromagnetic means
    • G05D 1/0263: using magnetic strips
    • G05D 1/0276: using signals provided by a source external to the vehicle
    • G05D 1/0285: using signals transmitted via a public communication network, e.g. GSM network

Definitions

  • the present application relates to the technical field of mobile robots, in particular to a method for determining object information, a mobile robot system and electronic equipment.
  • obstacle avoidance is a basic function that a mobile robot needs to implement, and implementing it requires the mobile robot to be able to perceive the contour information of objects in its environment. Therefore, how to determine the contour information of objects in the environment is an urgent technical problem to be solved.
  • the purpose of the embodiments of the present application is to provide a method for determining object information, a mobile robot system and electronic equipment, so as to determine the contour information of objects in the environment.
  • the specific technical solution is as follows:
  • the embodiment of the present application provides a method for determining object information, the method including: acquiring multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object; for each acquired frame, identifying the relative distance between the mobile robot and the target object when the frame was collected; and determining the contour information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame was collected.
  • the embodiment of the present application provides a mobile robot system, including: an image sensor, configured to acquire multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object;
  • a processor, configured to identify, for each acquired frame, the relative distance between the mobile robot and the target object when the frame was collected, and to determine the contour information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame was collected.
  • the embodiment of the present application provides an object information determining device, including: an image acquisition module, configured to acquire multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object; an information calculation module, configured to identify, for each acquired frame, the relative distance between the mobile robot and the target object when the frame was collected; and an information determination module, configured to determine the contour information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when the mobile robot collected each frame.
  • an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is used to store a computer program; and the processor, when executing the program stored on the memory, implements the method steps of any one of the first aspect.
  • the embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the method described in any one of the first aspect are implemented.
  • multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object, can be obtained; for each acquired frame, the relative distance between the mobile robot and the target object when the frame was collected is identified; and the contour information of the target object is determined based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame was collected.
  • since the acquired multi-frame images are collected by the mobile robot in the process of moving around the target object, they are images of the target object taken from different orientations, and the relative distance between the mobile robot and the target object when each frame was collected is the distance between the mobile robot and the contour edge of the target object at that moment. Combined with the position information of the mobile robot in the world coordinate system when each frame was collected, the contour information of the target object can be determined. It can be seen that through this solution, the contour information of objects in the environment can be determined.
  • Fig. 1 is a first flowchart of a method for determining object information provided in an embodiment of the present application.
  • Fig. 2(a) is a schematic diagram of a circular motion provided by an embodiment of the present application.
  • Fig. 2(b) is a schematic diagram of a multi-segment arc motion provided by an embodiment of the present application.
  • Fig. 3 is a second flowchart of the method for determining object information provided in an embodiment of the present application.
  • Fig. 4 is a schematic diagram of an image containing a target object provided by an embodiment of the present application.
  • Fig. 5 is a third flowchart of the method for determining object information provided in an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a global side view provided by an embodiment of the present application.
  • Fig. 7 is a fourth flowchart of the method for determining object information provided in an embodiment of the present application.
  • Fig. 8 is a schematic diagram of another global side view provided by an embodiment of the present application.
  • Fig. 9 is a first structural schematic diagram of the mobile robot system provided by an embodiment of the present application.
  • Fig. 10 is a second structural schematic diagram of the mobile robot system provided by an embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of an object information determining device provided in an embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • obstacle avoidance is a basic function that a mobile robot needs to implement, and implementing it requires the mobile robot to be able to perceive the contour information of objects in its environment. Therefore, how to determine the contour information of objects in the environment is an urgent technical problem to be solved.
  • embodiments of the present application provide a method for determining object information, a mobile robot system, and electronic equipment.
  • the method for determining object information provided in the embodiment of the present application can be applied to a mobile robot, such as a sweeping robot or a welcome robot.
  • the object information determination method provided in the embodiment of the present application can also be applied to other types of electronic devices, such as smart phones, personal computers, servers, and other devices with data processing capabilities.
  • in this case, the electronic device can communicate with the mobile robot, so that the images required for processing can be obtained from the mobile robot.
  • the method for determining object information provided in the embodiments of the present application may be implemented by software, hardware, or a combination of software and hardware.
  • the method for determining object information provided in the embodiment of the present application may include: acquiring multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object;
  • for each acquired frame, identifying the relative distance between the mobile robot and the target object when the frame was collected;
  • determining the contour information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame was collected.
  • since the acquired multi-frame images are collected by the mobile robot in the process of moving around the target object, they are images of the target object taken from different orientations, and the relative distance between the mobile robot and the target object when each frame was collected is the distance between the mobile robot and the contour edge of the target object at that moment; combined with the position information of the mobile robot in the world coordinate system when each frame was collected, the contour information of the target object can be determined. It can be seen that through this solution, the contour information of objects in the environment can be determined.
  • a method for determining object information may include steps S101-S103, wherein S101 is: acquire multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object.
  • the target object may be an object that the mobile robot detects during its movement and whose object information has not yet been obtained.
  • for example, when the mobile robot is a sweeping robot and encounters obstacles such as stools or slippers, the obstacles that prevent the sweeping robot from moving along its original path can be regarded as the target objects referred to in this application.
  • optionally, the target object may also be a manually designated object.
  • the above-mentioned circling movement refers to a motion mode in which the mobile robot moves at least one full circle around the target object.
  • optionally, the circling movement may also be a partial orbit of the mobile robot around the target object.
  • for example, taking a stool as the target object, the process of the mobile robot moving around the stool may refer to the process of the mobile robot moving along the outer edge of the stool.
  • the multiple frames of images acquired above may be images obtained by shooting the target object while the mobile robot moves around it.
  • the mobile robot can adopt various motion modes to realize the movement around the target object.
  • for example, the mobile robot can perform a continuous circular motion around the target object.
  • the continuous circular movement refers to uninterrupted movement around the target object. As shown in Fig. 2(a), an embodiment of the present application provides a schematic diagram of a circular motion: the irregular body in the figure is the target object, and the arrowed arc indicates that mobile robot C performs a counterclockwise circular motion around the target object. During the circular motion, mobile robot C collects images of the target object at different positions around it.
  • since the mobile robot needs to collect images of the target object during the circular motion, the camera orientation of the mobile robot and its moving direction are both constantly changing.
  • that is, in the case that the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the mobile robot can perform a continuous circular motion around the target object.
  • the present application also provides a multi-segment arc movement mode.
  • the embodiment of the present application provides a schematic diagram of a multi-segment arc movement.
  • the mobile robot can move around the target object along one circular arc, then adjust its camera orientation to face the target object and collect an image of it, and then continue along the next circular arc, repeating the above process until it has moved at least one full circle around the target object.
  • in other words, the mobile robot performs multi-segment arc motions around the target object. In the case that the camera orientation of the mobile robot is independent of its moving direction, a multi-segment arc motion can also be used to complete the circling of the target object.
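  • as an illustration, the following is a minimal sketch of the multi-segment arc mode described above; the Robot interface (move_arc, face_target, capture) is a hypothetical stand-in for whatever motion and camera APIs the platform actually provides, not an API defined by this application.

```python
def orbit_and_capture(robot, n_segments: int = 12):
    """Multi-segment arc orbit: advance one arc, re-aim the camera at the
    target object, collect one frame, and repeat until at least one full
    circle around the target object has been covered."""
    images = []
    for _ in range(n_segments):
        robot.move_arc(360.0 / n_segments)  # move along one arc segment
        robot.face_target()                 # adjust camera to face the object
        images.append(robot.capture())      # collect an image of the object
    return images
```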
  • S102 for each frame of the acquired image, identify the relative distance between the mobile robot and the target object when acquiring the frame of image;
  • for each acquired frame, the image can be used to determine the relative distance between the mobile robot and the target object when that frame was collected; for example, the determined relative distance is denoted d1.
  • when the mobile robot includes a depth camera, for example a structured-light depth camera, a binocular camera, a TOF (Time of Flight) depth camera or a binocular stereo vision camera, the acquired images contain depth information, so that for each acquired frame, the depth information recorded in the image can be used to identify the relative distance between the mobile robot and the target object when the frame was collected.
  • the embodiment of the present application also provides a method of identifying, based on images collected by a monocular camera, the relative distance between the mobile robot and the target object when each frame was collected. Details are described in subsequent embodiments and are not repeated here.
  • in step S103, the contour information of the target object can be determined based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame was collected.
  • the position information of the mobile robot in the world coordinate system may be the three-dimensional coordinates of the mobile robot in the world coordinate system.
  • the world coordinate system is the coordinate system established by the mobile robot when it enters a new environment.
  • for example, the initial point of its movement (such as the charging position of a sweeping robot) is taken as the coordinate origin.
  • based on the distance, direction and other information gathered during movement, the mobile robot updates its own position in the world coordinate system in real time.
  • in a specific implementation, the above-mentioned position information of the mobile robot in the world coordinate system can also be simplified to two-dimensional coordinates, ignoring the height of the mobile robot; the two-dimensional coordinates then characterize the projected position of the mobile robot on its moving plane.
  • when the mobile robot collects each frame of image, it can simultaneously record its own position information in the world coordinate system at the moment the frame is collected.
  • when the execution subject of the embodiment of the present application is an electronic device that communicates with the mobile robot, the electronic device obtains, while acquiring the images captured by the mobile robot, the position information in the world coordinate system at the time the mobile robot captured each frame.
  • for each frame, based on the position information of the mobile robot in the world coordinate system and the relative distance to the target object when the frame was collected, the position of the target edge point of the target object can be determined as the edge position corresponding to that frame, and the contour information of the target object is then determined based on the edge positions corresponding to all frames.
  • wherein, the target edge point is the edge point of the target object within the shooting range of the camera when the mobile robot collected the frame.
  • for example, the determined distance d1 is the distance between the mobile robot and the edge point at the bottom of the target object, and at that moment this edge point is the edge point within the shooting range of the camera of the mobile robot.
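  • expressed as code, the per-frame edge position can be obtained by projecting the identified relative distance from the robot's recorded pose; this is a minimal sketch assuming a 2D pose plus a world-frame bearing toward the target edge point, which are illustrative assumptions rather than the application's exact representation.

```python
import math

def edge_position(robot_x: float, robot_y: float,
                  bearing_rad: float, rel_dist: float) -> tuple:
    """Project the target edge point into the world coordinate system.

    bearing_rad -- world-frame direction from the robot toward the target
                   object's edge point when the frame was collected
    rel_dist    -- relative distance identified for that frame (e.g. d1)
    """
    return (robot_x + rel_dist * math.cos(bearing_rad),
            robot_y + rel_dist * math.sin(bearing_rad))

# One edge position per acquired frame, given per-frame poses and distances:
poses = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.5)]   # (x, y, bearing) per frame
dists = [0.31, 0.29]                          # relative distance per frame
edges = [edge_position(x, y, b, d) for (x, y, b), d in zip(poses, dists)]
```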
  • the contour information of the target object can be determined based on the edge position corresponding to each frame image.
  • at least the following two ways of determining the contour information are available:
  • the first way to determine the contour information: use the edge position corresponding to each frame image as the contour information of the target object.
  • the edge position corresponding to each frame image can be directly used as the outline information of the target object, and the outline information of the target object can be a set of edge positions, and each determined edge position can be used as an element in the set of edge positions.
  • for example, if the edge positions corresponding to each frame image are respectively position 1, position 2, position 3, position 4, position 5 and position 6, then the contour information of the target object is: {position 1, position 2, position 3, position 4, position 5, position 6}.
  • the second way to determine the contour information: perform curve fitting on the edge positions corresponding to the frames to obtain at least one fitted curve, determine the position of each point on the at least one fitted curve, and use the determined positions as the contour information of the target object.
  • the above-mentioned curve fitting method can be least-squares curve fitting, RBF (Radial Basis Function) curve fitting, cubic spline curve fitting, or other methods. After at least one fitted curve is obtained, the positions of the points on the at least one fitted curve can be used as the contour information of the target object.
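  • as a sketch of the second way, the edge positions can be fitted with, for example, a closed cubic spline; the use of SciPy and the periodic (closed-contour) setting are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Edge positions determined from the acquired frames (world coordinates).
edge_positions = np.array([[0.31, 0.00], [0.22, 0.22], [0.00, 0.30],
                           [-0.21, 0.21], [-0.30, 0.00], [0.00, -0.29]])

# Fit a closed (periodic) cubic spline through the edge positions.
tck, _ = splprep(edge_positions.T, s=0.0, per=True)

# Sample the fitted curve densely; the sampled point positions serve as
# the contour information of the target object.
u = np.linspace(0.0, 1.0, 200)
contour_x, contour_y = splev(u, tck)
```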
  • since the acquired multi-frame images are collected by the mobile robot in the process of moving around the target object, they are images of the target object taken from different orientations, and the relative distance between the mobile robot and the target object when each frame was collected is the distance between the mobile robot and the contour edge of the target object at that moment; combined with the position information of the mobile robot in the world coordinate system when each frame was collected, the contour information of the target object can be determined. It can be seen that through this solution, the contour information of objects in the environment can be determined.
  • the above step S102 may include S102A-S102B, wherein:
  • S102A. For each acquired frame, determine the pixel distance in the vertical direction between the first pixel point and the second pixel point in the frame image as the first distance; wherein, the first pixel point is a bottom pixel point of the target object, and the second pixel point is the central pixel point;
  • the first pixel point may be any pixel point on the bottom line segment of the target object in the figure, and the above-mentioned second pixel point is the central pixel point in the figure.
  • the bottom pixel point of the target object is a pixel point belonging to the bottom edge of the target object, for example, a pixel point with the smallest ordinate among pixel coordinates in the imaging area of the target object.
  • for example, if the first pixel coordinates are (x1, y1) and the second pixel coordinates are (x2, y2), then the first distance is the absolute value of y1 - y2.
  • S102B. Based on the first distance and the first internal and external parameter information of the camera in the mobile robot, determine the relative distance between the mobile robot and the target object when the frame of image was collected.
  • the above-mentioned first internal and external parameter information may include: a vertical viewing angle of the camera, an image size of an image collected by the camera, and an optical center height of the camera.
  • the above-mentioned vertical viewing angle of the camera refers to the maximum viewing angle of the camera in the vertical direction, for example, 150 degrees.
  • the above-mentioned image size may be an image resolution, for example, 1024*768, which means that the image contains 1024 pixels on a straight line in the horizontal direction and 768 pixels in a straight line in the vertical direction.
  • the image resolution may include a horizontal resolution and a vertical resolution; again taking 1024*768 as an example, the horizontal resolution is 1024 and the vertical resolution is 768.
  • the optical center height of the above camera is the distance between the optical center of the camera in the mobile robot and the motion plane of the mobile robot.
  • step S102B may include S102B1-S102B2, wherein:
  • S102B1. Based on the first distance, the vertical resolution of the frame image, and the vertical viewing angle of the camera, determine the lower line-of-sight angle; wherein, the lower line-of-sight angle is the angle between the lower line of sight and the optical axis of the camera, and the lower line of sight is: the line between the optical center of the camera and the bottom of the target object;
  • as shown in Fig. 6, a schematic diagram of a global side view is provided by an embodiment of the present application.
  • the included angle between the line connecting the optical center of the camera in the mobile robot and the bottom of the target object and the optical axis of the camera is the angle of the lower line of sight.
  • the above-mentioned determination of the lower line-of-sight angle based on the first distance, the vertical resolution of the frame image, and the vertical viewing angle of the camera may include steps A1-A2:
  • Step A1 calculating the ratio of the first distance to the vertical resolution as the first ratio
  • Step A2 based on the first ratio and the vertical viewing angle, determine the included angle of the lower line of sight.
  • the proportion of the first distance within the vertical resolution of the image is approximately linearly related to the proportion of the lower line-of-sight angle within the vertical viewing angle.
  • therefore, the product of the first ratio and the vertical viewing angle can be calculated as the lower line-of-sight angle. Alternatively, this application provides another method for determining the lower line-of-sight angle: according to the preset correspondence between each pixel point and an adjustment coefficient, determine the target adjustment coefficient corresponding to the specified pixel point in the frame image, calculate the product of the target adjustment coefficient and the first ratio as a second ratio, and calculate the product of the second ratio and the vertical viewing angle as the lower line-of-sight angle.
  • each pixel point is a pixel point in the image collected by the camera, and the specified pixel point is a pixel point determined from the bottom pixel points of the target object.
  • the correspondence between each pixel point and the adjustment coefficient can be determined by calibrating the camera in the mobile robot.
  • the adjustment coefficient corresponding to a pixel point may be the ratio between the distance from that pixel point to the central pixel point and the distance between the actual-world point corresponding to that pixel point and the actual-world point corresponding to the central pixel point.
  • the first ratio is then adjusted by the target adjustment coefficient; that is, the product of the target adjustment coefficient and the first ratio is calculated as the second ratio, and the product of the second ratio and the vertical viewing angle is calculated as the lower line-of-sight angle.
  • in general, the triangle formed by the optical center of the camera on the mobile robot, the bottom of the target object, and the foot of the vertical dropped from the optical center to the moving plane is a right triangle.
  • in this case, the following formula can be used to calculate the relative distance between the mobile robot and the target object when the frame of image was collected: d = h / tan(m), where d is the relative distance, m is the lower line-of-sight angle, and h is the optical center height of the camera.
  • the vertical angle between the optical axis and the moving plane of the mobile robot can also be considered.
  • in this case, the following formula can be used to calculate the relative distance between the mobile robot and the target object when the frame of image was collected: d = h / tan(k + m), where d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the lower line-of-sight angle, and h is the optical center height of the camera.
  • the vertical angle between the above-mentioned optical axis and the moving plane of the mobile robot may be pre-calibrated and determined.
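  • putting steps S102A-S102B together, a minimal sketch of the relative-distance computation follows; the function names and the example camera parameters are illustrative assumptions, not values taken from this application.

```python
import math

def lower_sight_angle(d1_pixels: float, v_res: int, v_fov_deg: float) -> float:
    """Lower line-of-sight angle (degrees): the first distance's share of
    the vertical resolution times the vertical viewing angle."""
    return v_fov_deg * (d1_pixels / v_res)

def relative_distance(d1_pixels: float, v_res: int, v_fov_deg: float,
                      h: float, k_deg: float = 0.0) -> float:
    """Relative distance d = h / tan(k + m).

    h     -- optical center height of the camera (same unit as the result)
    k_deg -- vertical angle between the optical axis and the moving plane
    """
    m_deg = lower_sight_angle(d1_pixels, v_res, v_fov_deg)
    return h / math.tan(math.radians(k_deg + m_deg))

# Example: first distance of 200 px in a 768-line image, 56-degree vertical
# viewing angle, optical center 0.08 m above the moving plane, optical axis
# parallel to the plane (k = 0). All numbers are assumed for illustration.
d = relative_distance(200, 768, 56.0, 0.08)
```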
  • through this solution, the contour information of objects in the environment can be determined. Moreover, the lower line-of-sight angle can be calculated and, together with the optical center height of the camera, used to calculate the relative distance between the mobile robot and the target object when each frame was collected, thus providing a basis for determining the contour information of objects in the environment.
  • in an embodiment, the method for determining object information may perform object type identification on the target object before step S101 is executed, and if the object type of the target object is a type to be identified, step S101 is then executed.
  • the type to be recognized includes: an unknown object type or an object type in a non-fixed form.
  • an object type identification model, obtained by training a neural network model, may be used to process the collected image and determine the object category of the target object contained in the frame.
  • the above-mentioned types to be identified include unknown object types and non-fixed-form object types; an unknown object type is an object type that cannot be recognized, and a non-fixed-form object type is one whose shape is variable, such as socks or clothes, whose shape is not fixed but changeable.
  • if the object type of the target object is a type to be identified, step S101 can be executed.
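  • the gating logic before step S101 can be summarized as below; classify stands for the trained object type identification model, and the category names are assumptions made for illustration.

```python
TO_BE_IDENTIFIED = {"unknown", "non_fixed_form"}  # e.g. socks, clothes

def handle_detected_object(image, classify):
    """Run object type identification; only types to be identified trigger
    the orbit-based contour pipeline (step S101)."""
    object_type = classify(image)        # trained neural network model
    if object_type in TO_BE_IDENTIFIED:
        return "run_S101"                # orbit the object, build the contour
    return "use_preset_contour"          # fixed-form type: preset outline
```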
  • the method for determining object information provided in another embodiment of the present application may include steps S701-S707:
  • if the target object is a fixed-form object such as a trash can or a shoe, its outline can essentially be obtained from presets; therefore, in order to reduce the amount of calculation, the contour information of such objects can be determined in a predetermined way, while for unknown object types or non-fixed-form object types, the object information of the target object can be determined using the object information determination method shown in Fig. 1.
  • specifically, object type identification may be performed on the target object; if the object type of the target object is a type to be identified, step S702 is executed; otherwise, if the object type of the target object is a fixed-form object type, step S705 is executed.
  • S702. Acquire multiple frames of images containing the target object, collected by the mobile robot in the process of moving around the target object. The implementation of this step is the same as or similar to that of step S101; refer to the relevant description of step S101, which is not repeated here.
  • S703. For each acquired frame, identify the relative distance between the mobile robot and the target object when the frame was collected.
  • the implementation of this step is the same as or similar to that of step S102; refer to the relevant description of step S102, which is not repeated here.
  • S704. Determine the outline information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when collecting each frame of image.
  • the implementation of this step is the same as or similar to that of step S103; refer to the relevant description of step S103, which is not repeated here.
  • S705. Determine preset initial information corresponding to the object type of the target object as information to be used; wherein, the initial information indicates the initial position of each edge point;
  • when the object type of the target object is a fixed-form object type, the preset initial information corresponding to the object type of the target object may be determined; the initial information corresponding to each object type may include the initial positions of the edge points of objects of that type.
  • S706. Unlike step S102, only one frame of image collected by the mobile robot for the target object may be obtained here, and based on that one frame, the relative distance between the mobile robot and the target object when the acquired image was collected is identified. The specific identification method may be similar to step S102; for the specific implementation, refer to the related description of step S102.
  • S707. Based on the identified relative position, adjust the initial position of each edge point indicated by the information to be used, so as to obtain the adjusted position of each edge point as the contour information of the target object.
  • the outline information of objects in the environment can be determined.
  • moreover, for fixed-form object types, the contour information of the target object can be determined based on the preset initial information, so the mobile robot does not need to move around the target object or calculate the relative position for each frame, which simplifies the process of object information determination and improves its efficiency.
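  • a minimal sketch of steps S705-S707, under the assumption that adjusting by the identified relative position amounts to translating the preset edge points to the target's observed location; the preset table and its values are purely illustrative.

```python
# Preset initial edge positions per fixed-form object type (illustrative).
PRESET_CONTOURS = {
    "trash_can": [(0.15, 0.0), (0.0, 0.15), (-0.15, 0.0), (0.0, -0.15)],
}

def contour_from_preset(object_type: str, target_x: float, target_y: float):
    """Translate the preset edge points so that they are centered on the
    position identified from the single acquired frame."""
    initial = PRESET_CONTOURS[object_type]     # information to be used
    return [(x + target_x, y + target_y) for x, y in initial]
```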
  • in an embodiment, after identifying the relative distance between the mobile robot and the target object when a frame was collected, the method for determining object information may further determine the height of the target object based on that relative distance, a second distance, and second internal and external parameter information of the camera;
  • the second distance is: the pixel distance in the vertical direction between the top pixel point of the target object in the frame image and the center pixel point of the frame image.
  • the top pixel can be any pixel on the top line segment of the target object.
  • to determine the second distance, any pixel point may first be selected from the top line segment of the target object as the top pixel point, after which the coordinates of the top pixel point and the coordinates of the central pixel point are determined. The difference between the ordinate of the top pixel coordinates and the ordinate of the central pixel coordinates can then be calculated, and the absolute value of the difference used as the second distance. Exemplarily, if the coordinates of the top pixel are (x3, y3) and the coordinates of the central pixel are (x4, y4), then the second distance is the absolute value of y3 - y4.
  • the above-mentioned second internal and external parameter information includes: a vertical viewing angle of the camera and a vertical resolution of images collected by the camera.
  • Fig. 8 is another schematic diagram of a global side view provided by an embodiment of the present application.
  • as shown in Fig. 8, the upper line-of-sight angle n can be approximated as n = θ * (dv2_pixels / V), where θ is the vertical viewing angle, dv2_pixels is the second distance, and V is the vertical resolution. The height of the target object can then be determined using the following formula: x = h + d * tan(n), where x is the height of the target object, h is the optical center height of the camera, and d is the relative distance between the mobile robot and the target object when the frame of image was collected.
  • after the height of the target object is calculated, if the height of the target object is less than a preset height, the target object is judged to be a crossable object; if the height of the target object is not less than the preset height, the target object is judged to be an uncrossable object, and a detour is required.
  • through this solution, the contour information of objects in the environment can be determined, and the height of the target object can also be calculated, so that the object information of the target object is richer.
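  • a minimal sketch of the height computation and the crossability check, using the reconstructed relation x = h + d * tan(θ * dv2_pixels / V); the preset height threshold is an assumed value.

```python
import math

def object_height(d: float, dv2_pixels: float, v_res: int,
                  v_fov_deg: float, h: float) -> float:
    """Height x = h + d * tan(n), with n = v_fov * (dv2_pixels / v_res)."""
    n = math.radians(v_fov_deg * (dv2_pixels / v_res))
    return h + d * math.tan(n)

def is_crossable(height: float, preset_height: float = 0.02) -> bool:
    """Below the preset height the object can be crossed; otherwise the
    mobile robot must detour around it."""
    return height < preset_height
```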
  • in an embodiment, the horizontal deflection angle between the mobile robot and the target object when each frame was collected may also be identified; the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is the line between the optical center of the camera and the outer side of the target object.
  • the above identification of the horizontal deflection angle between the mobile robot and the target object when a frame was collected may include steps B1-B2, wherein:
  • Step B1: determine the pixel distance in the horizontal direction between the third pixel point and the fourth pixel point in the frame image as the third distance.
  • wherein, the third pixel point is an outer-side pixel point of the target object, and the fourth pixel point is the central pixel point.
  • any pixel point can be selected from the line segment outside the target object as the third pixel point, and then the third pixel coordinate of the third pixel point can be determined, and then the fourth pixel coordinate of the fourth pixel point can be determined.
  • the difference between the abscissa of the third pixel coordinates and the abscissa of the fourth pixel coordinates can be calculated, and the absolute value of the difference used as the third distance.
  • for example, if the coordinates of the third pixel are (x5, y5) and the coordinates of the fourth pixel are (x6, y6), then the third distance is the absolute value of x5 - x6.
  • Step B2: based on the third distance, the horizontal resolution of the frame of image and the horizontal viewing angle of the camera, determine the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected.
  • specifically, the ratio of the third distance to the horizontal resolution can be calculated as a third ratio, and then, based on the third ratio and the horizontal viewing angle, the horizontal deflection angle between the mobile robot and the target object when the frame was collected can be determined.
  • through this solution, the contour information of objects in the environment can be determined, and the horizontal deflection angle can also be determined, so that the object information of the target object is richer.
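  • the horizontal deflection angle follows the same linear approximation as the vertical case; a minimal sketch, with example numbers assumed for illustration:

```python
def horizontal_deflection(d3_pixels: float, h_res: int,
                          h_fov_deg: float) -> float:
    """Horizontal deflection angle (degrees): the third distance's share of
    the horizontal resolution times the horizontal viewing angle."""
    return h_fov_deg * (d3_pixels / h_res)

# Example: outer-side pixel 256 px from the image center in a 1024-px-wide
# image with a 90-degree horizontal viewing angle -> 22.5 degrees.
angle = horizontal_deflection(256, 1024, 90.0)
```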
  • the embodiment of the present application also provides a mobile robot system, including:
  • the image sensor 901 is used to acquire multiple frames of images that include the target object collected by the mobile robot while it is moving around the target object;
  • the processor 902 is configured to identify, for each acquired frame, the relative distance between the mobile robot and the target object when the frame was collected, and to determine the contour information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame was collected.
  • the aforementioned image sensor 901 may be a camera, such as a monocular camera, a structured-light depth camera, a binocular camera, a TOF depth camera or a binocular stereo vision camera.
  • the processor 902 identifying, for each acquired frame, the relative distance between the mobile robot and the target object when the frame was collected may include:
  • determining the pixel distance in the vertical direction between the first pixel point and the second pixel point in the frame image as the first distance; wherein, the first pixel point is a bottom pixel point of the target object, and the second pixel point is the central pixel point;
  • the first internal and external parameter information includes: the vertical viewing angle of the camera, the image size of the frame image captured by the camera, and the optical center height of the camera;
  • the processor 902 based on the first distance and the first internal and external parameter information of the camera in the mobile robot, determines the relative distance between the mobile robot and the target object when capturing the frame of image, which may include:
  • based on the first distance, the vertical resolution of the frame image and the vertical viewing angle of the camera, determining the lower line-of-sight angle; wherein, the lower line-of-sight angle is the angle between the lower line of sight and the optical axis of the camera, and the lower line of sight is: the line between the optical center of the camera and the bottom of the target object;
  • optionally, the processor 902 determining the lower line-of-sight angle may include: calculating the ratio of the first distance to the vertical resolution as a first ratio, and determining the lower line-of-sight angle based on the first ratio and the vertical viewing angle.
  • optionally, the processor 902 determining the lower line-of-sight angle may include: according to the preset correspondence between each pixel point and the adjustment coefficient, determining the target adjustment coefficient corresponding to the specified pixel point in the frame image, calculating the product of the target adjustment coefficient and the first ratio as a second ratio, and calculating the product of the second ratio and the vertical viewing angle as the lower line-of-sight angle; wherein, each pixel point is a pixel point in the images collected by the camera, and the specified pixel point is a pixel point determined from the bottom pixel points of the target object.
  • the processor 902 calculates the relative distance between the mobile robot and the target object when the frame of image is collected based on the lower line of sight angle and the optical center height of the camera, which may include:
  • calculating d = h / tan(k + m), where d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the lower line-of-sight angle, and h is the optical center height of the camera.
  • the processor 902 determines the outline information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when collecting each frame of image, which may include:
  • for each frame, based on the position information of the mobile robot in the world coordinate system and the relative distance to the target object when the frame was collected, the position of the target edge point of the target object is determined as the edge position corresponding to the frame; wherein, the target edge point is an edge point within the shooting range of the camera when the mobile robot collected the frame;
  • based on the edge position corresponding to each frame image, the contour information of the target object is determined.
  • the processor 902 determines the contour information of the target object based on the edge position corresponding to each frame image, which may include:
  • performing curve fitting on the edge positions corresponding to the frames to obtain at least one fitted curve, determining the positions of the points on the at least one fitted curve, and using the determined positions as the contour information of the target object.
  • optionally, the processor 902 may also be used to perform object type identification on the target object before acquiring the multiple frames of images containing the target object collected by the mobile robot during the circling of the target object; if the object type of the target object is a type to be identified, the step of acquiring the multiple frames of images containing the target object is performed; wherein, the type to be identified includes: an unknown object type or a non-fixed-form object type.
  • optionally, the processor 902 is further configured to: if the object type of the target object is a fixed-form object type, determine the preset initial information corresponding to the object type of the target object as the information to be used, wherein the initial information indicates the initial position of each edge point; acquire the image collected by the mobile robot for the target object, and identify the relative distance between the mobile robot and the target object when the acquired image was captured; and, based on the identified relative position, adjust the initial position of each edge point indicated by the information to be used, obtaining the adjusted position of each edge point as the contour information of the target object.
  • optionally, the manner in which the mobile robot moves around the target object includes:
  • when the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the mobile robot performs a continuous circular motion around the target object; or,
  • when the camera orientation of the mobile robot is related to the moving direction of the mobile robot, the mobile robot performs a multi-segment arc motion around the target object.
  • optionally, after identifying the relative distance between the mobile robot and the target object when a frame was collected, the processor 902 is further configured to determine the height of the target object based on the relative distance between the mobile robot and the target object when the frame was collected, the second distance, and the second internal and external parameter information of the camera; wherein, the second distance is: the pixel distance in the vertical direction between the top pixel point of the target object in the frame image and the central pixel point of the frame image.
  • the second internal and external parameter information includes: a vertical viewing angle of the camera and a vertical resolution of images collected by the camera;
  • the processor 902 determines the height of the target object based on the relative distance between the mobile robot and the target object, the second distance, and the second internal and external parameter information of the camera when collecting the frame of image, which may include:
  • calculating x = h + d * tan(θ * (dv2_pixels / V)), where x is the height of the target object, θ is the vertical viewing angle, dv2_pixels is the second distance, V is the vertical resolution, d is the relative distance between the mobile robot and the target object when the frame of image was collected, and h is the optical center height of the camera.
  • optionally, after the multiple frames of images containing the target object collected during the circling of the target object are acquired, the processor 902 is further used to identify, for each acquired frame, the horizontal deflection angle between the mobile robot and the target object when the frame was collected; wherein, the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is: the line connecting the optical center of the camera and the outer side of the target object.
  • the embodiment of the present application also provides a mobile robot system, which further includes:
  • the power module 903 is used to drive the mobile robot to move around the target object.
  • the aforementioned power module 903 may be a moving part carried by the mobile robot, and may include motors, tires and other components.
  • in the case that the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the power module 903 may drive the mobile robot to perform a continuous circular motion around the target object; or, in the case that the camera orientation of the mobile robot is related to the moving direction of the mobile robot, the power module 903 may drive the mobile robot to perform a multi-segment arc motion around the target object.
  • since the acquired multi-frame images are collected by the mobile robot in the process of moving around the target object, they are images of the target object taken from different orientations, and the relative distance between the mobile robot and the target object when each frame was collected is the distance between the mobile robot and the contour edge of the target object at that moment; combined with the position information of the mobile robot in the world coordinate system when each frame was collected, the contour information of the target object can be determined. It can be seen that through this solution, the contour information of objects in the environment can be determined.
  • the embodiment of the present application further provides an apparatus for determining object information, including:
  • An image acquisition module 1101, configured to acquire multiple frames of images including the target object collected by the mobile robot during the process of moving around the target object;
  • An information calculation module 1102 configured to identify the relative distance between the mobile robot and the target object when acquiring the frame of image for each frame of the acquired image;
  • the information determination module 1103 is configured to determine the outline information of the target object based on the relative distance between the mobile robot and the target object and the position information in the world coordinate system when each frame of image is collected by the mobile robot.
  • the information calculation module includes:
  • a first sub-module, used to determine, for each acquired frame, the pixel distance in the vertical direction between the first pixel point and the second pixel point in the frame image as the first distance; wherein, the first pixel point is a bottom pixel point of the target object, and the second pixel point is the central pixel point;
  • the second sub-module is configured to determine the relative distance between the mobile robot and the target object when the frame of image is collected based on the first distance and the first internal and external parameter information of the camera in the mobile robot.
  • the first internal and external parameter information includes: the vertical viewing angle of the camera, the image size of the image captured by the camera, and the optical center height of the camera;
  • the second submodule includes:
  • an included angle determination unit, configured to determine the lower line-of-sight angle based on the first distance, the vertical resolution of the frame image, and the vertical viewing angle of the camera; wherein, the lower line-of-sight angle is the included angle between the lower line of sight and the optical axis of the camera, and the lower line of sight is: the line between the optical center of the camera and the bottom of the target object;
  • the distance determining unit is configured to calculate the relative distance between the mobile robot and the target object when the frame of image is collected based on the lower line-of-sight angle and the optical center height of the camera.
  • the included angle determination unit includes:
  • a ratio calculation subunit configured to calculate a ratio of the first distance to the vertical resolution as a first ratio
  • the included angle determining subunit is configured to determine the included angle of the lower line of sight based on the first ratio and the vertical viewing angle.
  • the included angle determination subunit is specifically configured to calculate the product of the first ratio and the vertical viewing angle as the lower line-of-sight angle; or, according to the preset correspondence between each pixel point and the adjustment coefficient, determine the target adjustment coefficient corresponding to the specified pixel point in the frame image, calculate the product of the target adjustment coefficient and the first ratio as a second ratio, and calculate the product of the second ratio and the vertical viewing angle as the lower line-of-sight angle; wherein, each pixel point is a pixel point in the images collected by the camera, and the specified pixel point is a pixel point determined from the bottom pixel points of the target object.
  • the distance determining unit is specifically configured to use the following formula to calculate the relative distance between the mobile robot and the target object when collecting the frame of image:
  • d = h / tan(k + m), where d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the lower line-of-sight angle, and h is the optical center height of the camera.
  • the information determination module includes:
  • a position determination submodule, used to determine, for each frame, based on the position information of the mobile robot in the world coordinate system and the relative distance to the target object when the frame was collected, the position of the target edge point of the target object as the edge position corresponding to the frame; wherein, the target edge point is an edge point within the shooting range of the camera when the mobile robot collected the frame;
  • the information determination sub-module is configured to determine the outline information of the target object based on the edge position corresponding to each frame image.
  • the information determination sub-module is specifically configured to use the edge position corresponding to each frame image as the contour information of the target object; or, to perform curve fitting on the edge positions corresponding to the frames to obtain at least one fitted curve, determine the positions of the points on the at least one fitted curve, and use the determined positions as the contour information of the target object.
  • optionally, the device further includes: a type identification module, configured to perform object type identification on the target object before the image acquisition module acquires the multiple frames of images containing the target object collected while the mobile robot moves around the target object; if the object type of the target object is a type to be identified, the image acquisition module is invoked to execute the step of acquiring the multiple frames of images containing the target object; wherein, the type to be identified includes: an unknown object type or a non-fixed-form object type.
  • optionally, the type identification module is further configured to: if the object type of the target object is a fixed-form object type, determine the preset initial information corresponding to the object type of the target object as the information to be used, wherein the initial information indicates the initial position of each edge point; acquire the image collected by the mobile robot for the target object, and identify the relative distance between the mobile robot and the target object when the acquired image was captured; and, based on the identified relative position, adjust the initial position of each edge point indicated by the information to be used, obtaining the adjusted position of each edge point as the contour information of the target object.
  • optionally, the manner in which the mobile robot moves around the target object includes: when the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the mobile robot performs a continuous circular motion around the target object; or, when the camera orientation of the mobile robot is related to the moving direction of the mobile robot, the mobile robot performs a multi-segment arc motion around the target object.
  • optionally, the information calculation module is further configured to, after identifying the relative distance between the mobile robot and the target object when a frame was collected, determine the height of the target object based on the relative distance between the mobile robot and the target object when the frame was collected, the second distance and the second internal and external parameter information of the camera; wherein, the second distance is: the pixel distance in the vertical direction between the top pixel point of the target object in the frame image and the central pixel point of the frame image.
  • the second internal and external parameter information includes: a vertical viewing angle of the camera and a vertical resolution of images collected by the camera;
  • the information calculation module includes:
  • the height calculation submodule is used to determine the height of the target object using the following formula:
  • x = h + d * tan(θ * (dv2_pixels / V)), where x is the height of the target object, θ is the vertical viewing angle, dv2_pixels is the second distance, V is the vertical resolution, d is the relative distance between the mobile robot and the target object when the frame of image was collected, and h is the optical center height of the camera.
  • optionally, the device further includes an angle recognition module, configured to identify, after the image acquisition module acquires the multiple frames of images containing the target object collected while the mobile robot moves around the target object, the horizontal deflection angle between the mobile robot and the target object for each acquired frame; wherein, the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is: the line between the optical center of the camera and the outer side of the target object.
  • the angle recognition module includes:
  • a distance determination submodule, configured to determine the pixel distance, in the horizontal direction, between a third pixel and a fourth pixel in the frame of image as a third distance; wherein the third pixel is an outer-side pixel of the target object, and the fourth pixel is the center pixel;
  • an angle determination submodule, configured to determine the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected, based on the third distance, the horizontal resolution of the frame of image, and the horizontal viewing angle of the camera.
  • the angle determination submodule is specifically configured to calculate the ratio of the third distance to the horizontal resolution as a third ratio, and to determine, based on the third ratio and the horizontal viewing angle, the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected.
  • since the acquired multi-frame images are collected while the mobile robot moves around the target object, they are images of the target object from different orientations; the relative distance between the mobile robot and the target object when each frame of image is collected is the distance between the mobile robot and the contour edge of the target object at that moment, which can then be combined with the position information of the mobile robot in the world coordinate system when the frame of image was collected to determine the contour information of the target object. It can be seen that, with this solution, the contour information of objects in the environment can be determined.
  • an embodiment of the present application also provides an electronic device, as shown in FIG. 12, including a processor 1201, a communication interface 1202, a memory 1203, and a communication bus 1204, wherein the processor 1201, the communication interface 1202, and the memory 1203 communicate with each other via the communication bus 1204;
  • the memory 1203 is configured to store a computer program;
  • the processor 1201 is configured to implement, when executing the program stored in the memory 1203, the steps of the object information determination method provided in the foregoing embodiments of the present application.
  • the communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean there is only one bus or one type of bus.
  • the communication interface is used for communication between the electronic device and other devices.
  • the memory may include a random access memory (RAM) and may also include a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
  • the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a computer-readable storage medium is also provided, in which a computer program is stored; when the computer program is executed by a processor, the steps of any of the above object information determination methods are implemented.
  • a computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute any method for determining object information in the above embodiments.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Embodiments of the present application provide an object information determination method, a mobile robot system, and an electronic device, applied in the technical field of robotics. The method includes: acquiring multiple frames of images containing a target object that are collected while a mobile robot moves around the target object; for each acquired frame of image, identifying the relative distance between the mobile robot and the target object when the frame of image was collected; and determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system. With this solution, the contour information of objects in the environment can be determined.

Description

Object information determination method, mobile robot system and electronic device
This application claims priority to the Chinese patent application No. 202210168135.3, entitled "Object information determination method, mobile robot system and electronic device", filed with the China Patent Office on February 23, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of mobile robots, and in particular to an object information determination method, a mobile robot system, and an electronic device.
Background
In recent years, with the continuous development of robotics, mobile robots such as household or commercial floor-sweeping robots and greeting robots have been playing an increasingly important role in everyday life.
To avoid colliding with objects while moving, obstacle avoidance is a basic function that a mobile robot must implement, and implementing it requires the mobile robot to perceive the contour information of objects in the environment. How to determine the contour information of objects in the environment is therefore a technical problem in urgent need of a solution.
Summary of the Invention
The purpose of the embodiments of the present application is to provide an object information determination method, a mobile robot system, and an electronic device, so as to determine the contour information of objects in the environment. The specific technical solutions are as follows. In a first aspect, an embodiment of the present application provides an object information determination method, including: acquiring multiple frames of images containing a target object that are collected while a mobile robot moves around the target object; for each acquired frame of image, identifying the relative distance between the mobile robot and the target object when the frame of image was collected; and determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
In a second aspect, an embodiment of the present application provides a mobile robot system, including: an image sensor, configured to acquire multiple frames of images containing a target object that are collected while the mobile robot moves around the target object; and a processor, configured to identify, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected, and to determine the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
In a third aspect, an embodiment of the present application provides an object information determination apparatus, including: an image acquisition module, configured to acquire multiple frames of images containing a target object that are collected while a mobile robot moves around the target object; an information calculation module, configured to identify, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected; and an information determination module, configured to determine the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other via the communication bus; the memory is configured to store a computer program; and the processor is configured to implement the method steps of any one of the first aspect when executing the program stored in the memory.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, and the computer program, when executed by a processor, implements the method steps of any one of the first aspect.
Beneficial effects of the embodiments of the present application:
In the object information determination method provided by the embodiments of the present application, multiple frames of images containing a target object, collected while a mobile robot moves around the target object, can be acquired; for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected is identified; and the contour information of the target object is determined based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system. Since the acquired multi-frame images are collected while the mobile robot moves around the target object, they are images of the target object from different orientations; the relative distance between the mobile robot and the target object when each frame of image is collected is the distance between the mobile robot and the contour edge of the target object at that moment, which can then be combined with the position information of the mobile robot in the world coordinate system when the frame of image was collected to determine the contour information of the target object. It can be seen that, with this solution, the contour information of objects in the environment can be determined.
Of course, implementing any product or method of the present application does not necessarily require achieving all of the advantages described above at the same time.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their descriptions serve to explain the present application and do not constitute an improper limitation of it.
FIG. 1 is a first flowchart of the object information determination method provided by an embodiment of the present application;
FIG. 2(a) is a schematic diagram of a circular motion provided by an embodiment of the present application;
FIG. 2(b) is a schematic diagram of a multi-segment arc motion provided by an embodiment of the present application;
FIG. 3 is a second flowchart of the object information determination method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an image containing a target object provided by an embodiment of the present application;
FIG. 5 is a third flowchart of the object information determination method provided by an embodiment of the present application;
FIG. 6 is a schematic global side view provided by an embodiment of the present application;
FIG. 7 is a fourth flowchart of the object information determination method provided by an embodiment of the present application;
FIG. 8 is another schematic global side view provided by an embodiment of the present application;
FIG. 9 is a first schematic structural diagram of the mobile robot system provided by an embodiment of the present application;
FIG. 10 is a second schematic structural diagram of the mobile robot system provided by an embodiment of the present application;
FIG. 11 is a schematic structural diagram of the object information determination apparatus provided by an embodiment of the present application;
FIG. 12 is a schematic structural diagram of the electronic device provided by an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
Obstacle avoidance is a basic function that a mobile robot must implement, and implementing it requires the mobile robot to perceive the contour information of objects in the environment. How to determine the contour information of objects in the environment is therefore a technical problem in urgent need of a solution.
To determine the contour information of objects in the environment, embodiments of the present application provide an object information determination method, a mobile robot system, and an electronic device.
It should be noted that, in specific applications, the object information determination method provided by the embodiments of the present application can be applied to a mobile robot, for example a floor-sweeping robot or a greeting robot. Alternatively, the method can also be applied to various other electronic devices, for example smartphones, personal computers, servers, and other devices with data processing capability; when applied to such electronic devices, the electronic device can communicate with the mobile robot so as to obtain from it the images required for processing. In addition, it can be understood that the object information determination method provided by the embodiments of the present application can be implemented in software, hardware, or a combination of both.
The object information determination method provided by the embodiments of the present application may include:
acquiring multiple frames of images containing a target object that are collected while a mobile robot moves around the target object;
for each acquired frame of image, identifying the relative distance between the mobile robot and the target object when the frame of image was collected;
determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
In the above solution provided by the embodiments of the present application, since the acquired multi-frame images are collected while the mobile robot moves around the target object, they are images of the target object from different orientations; the relative distance between the mobile robot and the target object when each frame of image is collected is the distance between the mobile robot and the contour edge of the target object at that moment, which can be combined with the position information of the mobile robot in the world coordinate system when the frame of image was collected to determine the contour information of the target object. It can be seen that, with this solution, the contour information of objects in the environment can be determined.
The object information determination method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings.
As shown in FIG. 1, an object information determination method provided by an embodiment of the present application may include steps S101-S103, wherein:
S101, acquiring multiple frames of images containing the target object that are collected while the mobile robot moves around the target object;
The target object may be an object detected by the mobile robot during movement for which no object information has yet been obtained. For example, if the mobile robot is a floor-sweeping robot and, while moving, it detects that its path is blocked by an obstacle such as a trash can, a toy, a stool, or a slipper, the obstacle preventing the floor-sweeping robot from following its original path can be regarded as the target object referred to in this application. Of course, the target object may also be a manually designated object.
The above circling motion refers to a motion mode in which the mobile robot moves around the target object for at least one full circle. Of course, when some side of the target object is blocked, the circling motion may also be a partial circling motion around the target object; for example, when a stool is placed against a wall, the process of the mobile robot circling the stool may refer to the process of the robot moving along the stool's outer edge. The acquired multi-frame images may be images captured of the target object while the mobile robot performs the circling motion.
In order to collect multiple frames of images containing the target object during the circling motion, the mobile robot can adopt various motion modes to realize the circling motion around the target object.
Optionally, the mobile robot may perform a continuous circular motion around the target object.
A continuous circular motion means moving around the target object without interruption. As shown in FIG. 2(a), an embodiment of the present application provides a schematic diagram of a circular motion, in which the irregular body is the target object and the direction of the arc indicates that the mobile robot C performs a counterclockwise circular motion around the target object; during the circular motion, the mobile robot C collects images of the target object from different directions.
Since the mobile robot needs to collect images of the target object during the circular motion, its camera orientation and its moving direction change continuously relative to each other. A mobile robot adopting the circular motion therefore has a camera orientation that is independent of its moving direction; that is, when the camera orientation of the mobile robot is independent of its moving direction, the mobile robot can perform a continuous circular motion around the target object.
If the camera orientation of the mobile robot is tied to its moving direction, for example when the camera faces straight ahead, adopting the circular motion would prevent the robot from capturing images of the target object during the motion. To solve this problem, the present application also provides a multi-segment arc motion mode.
As shown in FIG. 2(b), an embodiment of the present application provides a schematic diagram of a multi-segment arc motion. The mobile robot can move along one arc segment around the target object, turn its camera to face the target object and capture one image of it, then continue along the next arc segment, repeating this process until it has moved around the target object at least one full circle.
It can be seen that, when the camera orientation of the mobile robot is tied to its moving direction, the mobile robot performs a multi-segment arc motion around the target object. Of course, when the camera orientation is independent of the moving direction, the circling motion around the target object can also be accomplished with the multi-segment arc motion.
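The patent describes the multi-segment arc motion only qualitatively. As a minimal sketch of how such capture poses could be generated, assuming a circular path that is simply cut into equal arc segments (the function and parameter names below are illustrative, not from the patent):

    import math

    def arc_capture_poses(target_xy, radius, n_segments):
        # One stop per arc segment: the robot pauses, turns its camera
        # toward the target, captures one image, then moves on.
        tx, ty = target_xy
        poses = []
        for i in range(n_segments):
            a = 2.0 * math.pi * i / n_segments
            x = tx + radius * math.cos(a)
            y = ty + radius * math.sin(a)
            heading = math.atan2(ty - y, tx - x)  # face the target
            poses.append((x, y, heading))
        return poses

    # Example: 8 capture poses on a 0.5 m circle around an obstacle at (1.0, 2.0)
    for x, y, h in arc_capture_poses((1.0, 2.0), 0.5, 8):
        print(f"move to ({x:.2f}, {y:.2f}), aim camera at {math.degrees(h):.0f} deg")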
S102, for each acquired frame of image, identifying the relative distance between the mobile robot and the target object when the frame of image was collected;
Taking FIG. 2(a) as an example, when an acquired image was collected while the mobile robot was directly below the target object, the acquired image can be used to determine that the relative distance between the mobile robot and the target object at collection time was the distance d1.
When the mobile robot carries a depth camera, for example a structured-light depth camera, a binocular camera, a TOF (Time of Flight) depth camera, or a binocular stereo vision camera, the acquired images contain depth information, so that, for each acquired frame of image, the depth information recorded in the image can be used to identify the relative distance between the mobile robot and the target object when the frame of image was collected.
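With a dense depth image, this lookup reduces to reading the depth value at the object's bottom pixel. The following sketch assumes a per-pixel depth map in meters and uses a small median window for robustness; both are assumptions, since the patent does not prescribe how the depth information is read out:

    import numpy as np

    def distance_from_depth(depth_map, bottom_px):
        # Median over a 3x3 neighborhood around the object's bottom pixel
        r, c = bottom_px
        window = depth_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        return float(np.median(window))

    depth = np.full((480, 640), 1.2)  # toy depth map, 1.2 m everywhere
    print(distance_from_depth(depth, (400, 320)))  # 1.2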
Of course, the hardware cost of a depth camera is relatively high. To save hardware cost, an embodiment of the present application also provides a way to identify, from images collected by a monocular camera, the relative distance between the mobile robot and the target object when the frame of image was collected; it will be described in detail in subsequent embodiments and is not elaborated here.
S103, determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
After identifying the relative distance between the mobile robot and the target object when each frame of image was collected, the contour information of the target object can be determined based on those relative distances and the position information of the mobile robot in the world coordinate system.
The position information of the mobile robot in the world coordinate system may be the robot's three-dimensional coordinates in the world coordinate system. The world coordinate system is the coordinate system established by the mobile robot when it enters a new environment; generally, its initial movement point (for example, the charging position of a floor-sweeping robot) serves as the coordinate origin, and during movement the robot can update its own position in the world coordinate system in real time by combining information such as the distance and direction of its movement. Of course, when the surface on which the mobile robot moves is a plane or nearly a plane, the position information of the mobile robot in the world coordinate system can be simplified to two-dimensional coordinates, ignoring the robot's height information, so that the two-dimensional coordinates represent the projected position of the robot on its moving plane; this is also possible.
In the embodiments of the present application, each time the mobile robot collects a frame of image, it can simultaneously record its own position information in the world coordinate system at collection time. If the execution subject of the embodiments of the present application is an electronic device communicating with the mobile robot, the electronic device acquires, along with the images collected by the robot, the robot's position information in the world coordinate system when each frame of image was collected.
Optionally, for each frame of image, the position of a target edge point of the target object can be determined, based on the position information of the mobile robot in the world coordinate system and its relative distance to the target object when the frame of image was collected, as the edge position corresponding to the frame of image; the contour information of the target object is then determined based on the edge positions corresponding to the respective frames of image.
The target edge point is the edge point of the target object that lies within the camera's shooting range when the mobile robot collects the frame of image. Illustratively, in FIG. 2(a), when the mobile robot is directly below the target object, the determined distance d1 is the distance between the mobile robot and the edge point directly below the target object; this edge point is the edge point within the camera's shooting range at that moment.
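As a small sketch of this projection, assuming the simplified 2D case discussed above where the robot pose is its planar position plus the yaw of the camera's optical axis, and where the measured relative distance lies along that axis (the names below are illustrative):

    import math

    def edge_point_world(robot_xy, camera_yaw, relative_distance):
        # Project the observed contour edge point into the world frame
        x, y = robot_xy
        ex = x + relative_distance * math.cos(camera_yaw)
        ey = y + relative_distance * math.sin(camera_yaw)
        return ex, ey

    # One edge position per captured frame
    print(edge_point_world((0.0, 0.0), math.radians(30), 0.8))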
After determining the target edge point for each frame, the contour information of the target object can be determined based on the edge positions corresponding to the respective frames of image.
There are several ways to determine the contour information of the target object based on the edge positions corresponding to the respective frames of image; optionally, they include at least the following two:
First way of determining the contour information: using the edge positions corresponding to the respective frames of image as the contour information of the target object.
In this way, the edge positions corresponding to the respective frames of image can directly serve as the contour information of the target object; the contour information can be a set of edge positions, with each determined edge position as an element of the set. Illustratively, if the edge positions corresponding to the respective frames of image are position 1, position 2, position 3, position 4, position 5, and position 6, the contour information of the target object is: {position 1, position 2, position 3, position 4, position 5, position 6}.
Second way of determining the contour information: performing curve fitting on the edge positions corresponding to the respective frames of image to obtain at least one fitted curve, determining the positions of the points on the at least one fitted curve, and using the determined positions as the contour information of the target object.
Optionally, the curve fitting may be least-squares curve fitting, RBF (Radial Basis Function) curve fitting, cubic spline curve fitting, or similar methods. After the at least one fitted curve is obtained, the positions of the points on it can serve as the contour information of the target object.
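A minimal sketch of the second way, fitting one closed cubic B-spline through the per-frame edge positions with SciPy; the edge coordinates are made-up values, and the patent equally allows least-squares or RBF fitting instead:

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Edge positions determined from the captured frames (illustrative values)
    edge_xy = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0], [-0.7, 0.7],
                        [-1.0, 0.0], [-0.7, -0.7], [0.0, -1.0], [0.7, -0.7]])
    pts = np.vstack([edge_xy, edge_xy[:1]])  # close the loop for periodic fitting

    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0.0, per=True)

    # Sample the fitted curve densely; the sampled positions serve as the
    # object's contour information
    u = np.linspace(0.0, 1.0, 200)
    cx, cy = splev(u, tck)
    contour = np.stack([cx, cy], axis=1)
    print(contour.shape)  # (200, 2)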
In the above solution provided by the embodiments of the present application, since the acquired multi-frame images are collected while the mobile robot moves around the target object, they are images of the target object from different orientations; the relative distance between the mobile robot and the target object when each frame of image is collected is the distance between the mobile robot and the contour edge of the target object at that moment, which can be combined with the position information of the mobile robot in the world coordinate system when the frame of image was collected to determine the contour information of the target object. It can be seen that, with this solution, the contour information of objects in the environment can be determined.
On the basis of the embodiment shown in FIG. 1, as shown in FIG. 3, in the object information determination method provided by another embodiment of the present application, the above step S102 may include S102A-S102B, wherein:
S102A, for each acquired frame of image, determining the pixel distance in the vertical direction between a first pixel and a second pixel in the frame of image as a first distance; wherein the first pixel is a bottom pixel of the target object and the second pixel is the center pixel;
FIG. 4 is a schematic diagram of an image containing a target object provided by an embodiment of the present application. The first pixel may be any pixel on the bottom line segment of the target object in the figure, and the second pixel is the center pixel of the figure. A bottom pixel of the target object is a pixel belonging to the bottom edge of the target object, for example the pixel with the smallest ordinate among the pixel coordinates within the target object's imaging region.
Any pixel on the bottom line segment of the target object can first be selected as the first pixel, the first pixel coordinates of the first pixel are then determined, and the second pixel coordinates of the second pixel are determined as well. The difference between the ordinate of the first pixel coordinates and the ordinate of the second pixel coordinates can then be calculated, and the absolute value of this difference is taken as the first distance. Illustratively, if the first pixel coordinates are (x1, y1) and the second pixel coordinates are (x2, y2), the first distance is the absolute value of y1 - y2.
S102B, determining the relative distance between the mobile robot and the target object when the frame of image was collected, based on the first distance and first intrinsic and extrinsic parameter information of the camera in the mobile robot.
After the first distance is determined, the relative distance between the mobile robot and the target object when the frame of image was collected can be determined based on the first distance and the first intrinsic and extrinsic parameter information of the camera in the mobile robot, according to the imaging principle of the camera and the geometric relationship between the camera and the target object during imaging.
Optionally, the first intrinsic and extrinsic parameter information may include: the vertical viewing angle of the camera, the image size of images collected by the camera, and the optical center height of the camera.
The vertical viewing angle of the camera refers to the camera's maximum viewing angle in the vertical direction, for example 150 degrees.
The image size may be the image resolution, for example 1024*768, which indicates that the image contains 1024 pixels along a horizontal line and 768 pixels along a vertical line. Optionally, the image resolution may include a horizontal resolution and a vertical resolution; still taking 1024*768 as an example, its horizontal resolution is 1024 and its vertical resolution is 768.
The optical center height of the camera is the distance between the optical center of the camera in the mobile robot and the plane on which the mobile robot moves.
In this case, as shown in FIG. 5, the above step S102B may include S102B1-S102B2, wherein:
S102B1, determining a down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera; wherein the down-sight angle is the angle between the down line of sight and the optical axis of the camera, and the down line of sight is: the line between the optical center of the camera and the bottom of the target object;
FIG. 6 is a schematic global side view provided by an embodiment of the present application. The angle between the optical axis of the camera and the line connecting the camera's optical center in the mobile robot to the bottom of the target object is the down-sight angle.
To calculate the relative distance between the mobile robot and the target object, the down-sight angle needs to be determined, so that trigonometric functions can be used to calculate the relative distance from the down-sight angle and the optical center height of the camera.
In one implementation, determining the down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera may include steps A1-A2:
Step A1, calculating the ratio of the first distance to the vertical resolution as a first ratio;
As can be seen from FIG. 4 and FIG. 6, the proportion of the first distance in the vertical direction of the image is positively correlated with the proportion of the down-sight angle within the vertical viewing angle: the larger the proportion of the down-sight angle within the vertical viewing angle, the larger the proportion of the first distance in the vertical direction of the image. Therefore, to determine the down-sight angle, the ratio of the first distance to the vertical resolution can first be calculated as the first ratio.
Step A2, determining the down-sight angle based on the first ratio and the vertical viewing angle.
Optionally, it can be approximately considered that the proportion of the first distance in the vertical direction of the image is linearly related to the proportion of the down-sight angle within the vertical viewing angle; in this case, the product of the first ratio and the vertical viewing angle can be calculated as the down-sight angle. Alternatively, the present application provides another way of determining the down-sight angle: a target adjustment coefficient corresponding to a designated pixel in the frame of image is determined according to a preset correspondence between pixels and adjustment coefficients, the product of the adjustment coefficient and the first ratio is calculated as a second ratio, and the product of the second ratio and the vertical viewing angle is calculated as the down-sight angle.
The pixels in the correspondence are pixels in images collected by the camera, and the designated pixel is a pixel determined from among the bottom pixels of the target object.
The preset correspondence between pixels and adjustment coefficients can be determined by calibrating the camera in the mobile robot; for each pixel, its corresponding adjustment coefficient may be the ratio between the distance from that pixel to the center pixel and the distance between the actual object corresponding to that pixel and the actual object corresponding to the center pixel.
Thus, when the down-sight angle needs to be determined, any pixel among the bottom pixels of the target object can first be selected as the designated pixel, the adjustment coefficient corresponding to that designated pixel is taken as the target adjustment coefficient, and the first ratio is adjusted with this target adjustment coefficient, i.e., the product of the adjustment coefficient and the first ratio is calculated as the second ratio, and the product of the second ratio and the vertical viewing angle is then calculated as the down-sight angle.
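Putting steps A1-A2 together, a minimal sketch of the ratio-based estimate; treating the pre-calibrated pixel-to-coefficient lookup as a single scalar adjust_coeff is a simplifying assumption:

    def down_sight_angle(bottom_px_y, center_px_y, v_resolution, v_fov_deg,
                         adjust_coeff=1.0):
        # First distance in pixels, first ratio, then scale by the vertical FOV
        first_distance = abs(bottom_px_y - center_px_y)
        first_ratio = first_distance / v_resolution
        return adjust_coeff * first_ratio * v_fov_deg  # degrees

    # 768-row image, 56 deg vertical FOV, object bottom 200 rows below center
    m = down_sight_angle(584, 384, 768, 56.0)
    print(f"m = {m:.1f} deg")  # m = 14.6 deg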
S102B2, calculating the relative distance between the mobile robot and the target object when the frame of image was collected, based on the down-sight angle and the optical center height of the camera.
Optionally, it can be approximately considered that the triangle formed by the camera's line of sight on the mobile robot and the bottom of the target object is a right triangle; in this case, the relative distance between the mobile robot and the target object when the frame of image was collected can be calculated with the following formula, the special case of the formula below for k = 0:
d=tan(90-m)*h
wherein d is the relative distance, m is the down-sight angle, and h is the optical center height of the camera.
Optionally, in order to calculate the relative distance between the mobile robot and the target object when the frame of image was collected more accurately, the vertical angle between the optical axis and the moving plane of the mobile robot can also be taken into account; in this case, the relative distance can be calculated with the following formula:
d=tan(90+k-m)*h
wherein d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the down-sight angle, and h is the optical center height of the camera.
The vertical angle between the optical axis and the moving plane of the mobile robot may be determined by calibration in advance.
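A direct transcription of the formula, with angles in degrees as in the text; only the unit conversion to radians for the tangent is added:

    import math

    def relative_distance(m_deg, k_deg, optical_center_height):
        # d = tan(90 + k - m) * h
        return math.tan(math.radians(90.0 + k_deg - m_deg)) * optical_center_height

    # Camera 8 cm above the floor, axis tilted 5 deg, down-sight angle 20 deg
    print(f"d = {relative_distance(20.0, 5.0, 0.08):.3f} m")  # d = 0.299 m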
In the above solution provided by the embodiments of the present application, the contour information of objects in the environment can be determined. Moreover, the down-sight angle can be calculated and used, together with the optical center height of the camera, to calculate the relative distance between the mobile robot and the target object when the frame of image was collected, which provides a basis for determining the contour information of objects in the environment.
In the object information determination method provided by another embodiment of the present application, before step S101 is executed, object type identification can be performed on the target object, and step S101 is executed if the object type of the target object is a type to be identified. The type to be identified includes: an unknown object type or a non-fixed-form object type.
There are various ways to perform object type identification; for example, an object type recognition model can be obtained by training a neural network model, and after each frame of image is acquired, the model can be used to process the frame and determine the object category of the target object contained in it.
The type to be identified includes an unknown object type or a non-fixed-form object type, where an unknown object type is a type for which no object type has been recognized, and a non-fixed-form object type is one whose form is variable; for example, objects such as socks and clothing do not have a fixed form but a variable one.
If the object type of the target object is identified as a type to be identified, the contour information of the target object needs to be determined, and step S101 can therefore be executed.
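A minimal sketch of this gate; the type vocabulary is an illustrative assumption, since the patent only fixes the two categories that trigger the circling scan:

    TO_BE_IDENTIFIED = {"unknown", "non_fixed_form"}  # e.g. socks, clothing

    def needs_circling_scan(object_type: str) -> bool:
        # Only unknown or non-fixed-form types trigger image collection
        # around the target; fixed-form types fall back to preset contours.
        return object_type in TO_BE_IDENTIFIED

    print(needs_circling_scan("unknown"))    # True: run steps S702-S704
    print(needs_circling_scan("trash_can"))  # False: go to steps S705-S707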
In this case, as shown in FIG. 7, an object information determination method provided by another embodiment of the present application may include steps S701-S707:
S701, performing object type identification on the target object;
For a target object of a fixed-form object type, such as a trash can or a shoe, its contour can essentially be obtained from presets; therefore, to reduce the amount of computation, the contour information of such objects can be determined in a preset manner, while for an unknown object type or a non-fixed-form object type, the object information of the target object can be determined with the object information determination method shown in FIG. 1.
Thus, after the target object is determined, object type identification can be performed on it. If the object type of the target object is a type to be identified, step S702 is executed; otherwise, if the object type of the target object is a fixed-form object type, step S705 is executed.
S702, acquiring multiple frames of images containing the target object that are collected while the mobile robot moves around the target object;
This step is implemented in the same or a similar way as step S101; for its implementation, refer to the description of step S101, which is not repeated here.
S703, for each acquired frame of image, identifying the relative distance between the mobile robot and the target object when the frame of image was collected;
This step is implemented in the same or a similar way as step S102; for its implementation, refer to the description of step S102, which is not repeated here.
S704, determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
This step is implemented in the same or a similar way as step S103; for its implementation, refer to the description of step S103, which is not repeated here.
S705, determining preset initial information corresponding to the object type of the target object as information to be used; wherein the initial information indicates the initial position of each edge point;
If the object type of the target object is a fixed-form object type, preset initial information corresponding to that object type can be determined. The initial information corresponding to each object type may contain the initial positions of the edge points of objects of that type.
S706, acquiring an image collected by the mobile robot for the target object, and identifying the relative distance between the mobile robot and the target object when the acquired image was collected;
Optionally, in this step only one frame of image collected by the mobile robot for the target object may be acquired, and the relative distance between the mobile robot and the target object at collection time is then identified based on that frame. The identification can be done similarly to step S102; for the specific implementation, refer to the description of step S102.
S707, adjusting the initial position of each edge point indicated by the information to be used, based on the identified relative position, to obtain the adjusted position of each edge point as the contour information of the target object.
Since the initial positions of the edge points of the target object have been determined, once the relative distance between the mobile robot and the target object is determined, the identified relative position can be used to adjust the initial positions of the edge points indicated by the information to be used, thereby obtaining the adjusted positions of the edge points as the contour information of the target object.
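How exactly the preset edge points are anchored to the observed location is not spelled out; the following is a minimal sketch under the assumption that the template is rigidly translated to the point lying relative_distance ahead of the camera (names are illustrative):

    import numpy as np

    def place_preset_contour(initial_edges, robot_xy, camera_yaw, relative_distance):
        # Anchor point: relative_distance ahead of the camera in the world frame
        anchor = np.array(robot_xy) + relative_distance * np.array(
            [np.cos(camera_yaw), np.sin(camera_yaw)])
        template = np.asarray(initial_edges, dtype=float)
        # Shift the template so its centroid sits on the anchor point
        return template - template.mean(axis=0) + anchor

    square = [[0, 0], [0.2, 0], [0.2, 0.2], [0, 0.2]]  # preset edge points
    print(place_preset_contour(square, (0.0, 0.0), 0.0, 0.5))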
In the above solution provided by the embodiments of the present application, the contour information of objects in the environment can be determined. Moreover, when the object type of the target object is a fixed-form object type, the contour information can be determined based on the preset initial information, without requiring the mobile robot to move around the target object and calculate a relative position for every frame of image, which simplifies the object information determination process and improves its efficiency.
In the object information determination method provided by another embodiment of the present application, after the relative distance between the mobile robot and the target object when the frame of image was collected is identified, the height of the target object can further be determined based on that relative distance, a second distance, and second intrinsic and extrinsic parameter information of the camera;
wherein the second distance is: the pixel distance, in the vertical direction, between the top pixel of the target object in the frame of image and the center pixel of the frame of image.
As shown in FIG. 4, the second distance is the pixel distance in the vertical direction between a top pixel of the target object and the center pixel of the frame of image. The top pixel may be any pixel on the top line segment of the target object.
Any pixel on the top line segment of the target object can first be selected as the top pixel, the top pixel coordinates of the top pixel are then determined, and the center pixel coordinates of the center pixel are determined as well. The difference between the ordinate of the top pixel coordinates and the ordinate of the center pixel coordinates can then be calculated, and the absolute value of this difference is taken as the second distance. Illustratively, if the top pixel coordinates are (x3, y3) and the center pixel coordinates are (x4, y4), the second distance is the absolute value of y3 - y4.
Optionally, the second intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera and the vertical resolution of images collected by the camera.
FIG. 8 is another schematic global side view provided by an embodiment of the present application. If the angle between the optical axis and the up line of sight is n, then tan(n)=(h-x)/d, where x is the height of the target object and h is the optical center height of the camera. The angle n=θ*(dv2_pixels/V), where dv2_pixels is the second distance and V is the vertical resolution.
In this case, the height of the target object can be determined with the following formula:
x=h-d*tan(θ*(dv2_pixels/V))
wherein x is the height of the target object, θ is the vertical viewing angle, dv2_pixels is the second distance, V is the vertical resolution, d is the relative distance between the mobile robot and the target object when the frame of image was collected, and h is the optical center height of the camera.
After the height of the target object is calculated, if the height is less than a preset height, the target object is judged to be a traversable object; if the height is not less than the preset height, the target object is judged to be a non-traversable object that must be bypassed.
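A sketch of the height computation and the traversability check; solving the derivation above for x gives x = h - d * tan(n), and since the original formula image is not reproduced in the text, this transcription is a reconstruction:

    import math

    def object_height(d, dv2_pixels, v_resolution, v_fov_deg, h):
        # n = theta * (dv2_pixels / V); tan(n) = (h - x) / d  =>  x = h - d * tan(n)
        n = math.radians(v_fov_deg * dv2_pixels / v_resolution)
        return h - d * math.tan(n)

    # A robot with an 8 cm camera height deciding whether it can cross over
    x = object_height(d=0.30, dv2_pixels=120, v_resolution=768,
                      v_fov_deg=56.0, h=0.08)
    print(f"height = {x * 100:.1f} cm, traversable: {x < 0.02}")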
In the above solution provided by the embodiments of the present application, the contour information of objects in the environment can be determined, and the height of the target object can also be calculated, making the object information of the target object richer.
In the object information determination method provided by another embodiment of the present application, after the multiple frames of images containing the target object collected while the mobile robot moves around the target object are acquired, the horizontal deflection angle between the mobile robot and the target object at the time each acquired frame of image was collected can also be identified;
wherein the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is: the line between the optical center of the camera and the outer side of the target object. Optionally, identifying the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected may include steps B1-B2, wherein:
Step B1, determining the pixel distance in the horizontal direction between a third pixel and a fourth pixel in the frame of image as a third distance.
The third pixel is an outer-side pixel of the target object, and the fourth pixel is the center pixel. Any pixel on the outer-side line segment of the target object can first be selected as the third pixel, the third pixel coordinates of the third pixel are then determined, and the fourth pixel coordinates of the fourth pixel are determined as well. The difference between the abscissa of the third pixel coordinates and the abscissa of the fourth pixel coordinates can then be calculated, and the absolute value of this difference is taken as the third distance. Illustratively, if the third pixel coordinates are (x5, y5) and the fourth pixel coordinates are (x6, y6), the third distance is the absolute value of x5 - x6.
Step B2, determining the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected, based on the third distance, the horizontal resolution of the frame of image, and the horizontal viewing angle of the camera.
Similarly to the calculation of the down-sight angle, the ratio of the third distance to the horizontal resolution can be calculated as a third ratio, and the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected is then determined based on the third ratio and the horizontal viewing angle.
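Mirroring the vertical case, a minimal sketch of steps B1-B2 (names are illustrative):

    def horizontal_deflection(outer_px_x, center_px_x, h_resolution, h_fov_deg):
        # Third distance in pixels, third ratio, then scale by the horizontal FOV
        third_distance = abs(outer_px_x - center_px_x)
        third_ratio = third_distance / h_resolution
        return third_ratio * h_fov_deg  # degrees

    # 1024-column image, 72 deg horizontal FOV, outer side 300 columns off center
    print(f"{horizontal_deflection(812, 512, 1024, 72.0):.1f} deg")  # 21.1 deg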
In the above solution provided by the embodiments of the present application, the contour information of objects in the environment can be determined, and the horizontal deflection angle can also be determined, making the object information of the target object richer.
Corresponding to the object information determination method provided by the above embodiments of the present application, as shown in FIG. 9, an embodiment of the present application also provides a mobile robot system, including:
an image sensor 901, configured to acquire multiple frames of images containing a target object that are collected while the mobile robot moves around the target object;
a processor 902, configured to identify, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected, and to determine the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
It should be noted that the image sensor 901 may be a camera, for example a monocular camera, a structured-light depth camera, a binocular camera, a TOF depth camera, a binocular stereo vision camera, or the like.
Optionally, the processor 902 identifying, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected may include:
for each acquired frame of image, determining the pixel distance in the vertical direction between a first pixel and a second pixel in the frame of image as a first distance; wherein the first pixel is a bottom pixel of the target object and the second pixel is the center pixel;
determining the relative distance between the mobile robot and the target object when the frame of image was collected, based on the first distance and first intrinsic and extrinsic parameter information of the camera in the mobile robot.
Optionally, the first intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera, the image size of the frame of image collected by the camera, and the optical center height of the camera;
the processor 902 determining the relative distance between the mobile robot and the target object when the frame of image was collected, based on the first distance and the first intrinsic and extrinsic parameter information of the camera in the mobile robot, may include:
determining a down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera; wherein the down-sight angle is the angle between the down line of sight and the optical axis of the camera, and the down line of sight is: the line between the optical center of the camera and the bottom of the target object;
calculating the relative distance between the mobile robot and the target object when the frame of image was collected, based on the down-sight angle and the optical center height of the camera.
Optionally, the processor 902 determining the down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera may include:
calculating the ratio of the first distance to the vertical resolution as a first ratio;
determining the down-sight angle based on the first ratio and the vertical viewing angle.
Optionally, the processor 902 determining the down-sight angle based on the first ratio and the vertical viewing angle may include:
calculating the product of the first ratio and the vertical viewing angle as the down-sight angle; or,
determining a target adjustment coefficient corresponding to a designated pixel in the frame of image according to a preset correspondence between pixels and adjustment coefficients, calculating the product of the adjustment coefficient and the first ratio as a second ratio, and calculating the product of the second ratio and the vertical viewing angle as the down-sight angle; wherein the pixels are pixels in images collected by the camera, and the designated pixel is a pixel determined from among the bottom pixels of the target object.
Optionally, the processor 902 calculating the relative distance between the mobile robot and the target object when the frame of image was collected, based on the down-sight angle and the optical center height of the camera, may include:
calculating the relative distance between the mobile robot and the target object when the frame of image was collected with the following formula:
d=tan(90+k-m)*h
wherein d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the down-sight angle, and h is the optical center height of the camera.
Optionally, the processor 902 determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system may include:
for each frame of image, determining the position of a target edge point of the target object, based on the position information of the mobile robot in the world coordinate system and its relative distance to the target object when the frame of image was collected, as the edge position corresponding to the frame of image; wherein the target edge point is the edge point of the target object lying within the shooting range of the camera when the mobile robot collects the frame of image;
determining the contour information of the target object based on the edge positions corresponding to the respective frames of image.
Optionally, the processor 902 determining the contour information of the target object based on the edge positions corresponding to the respective frames of image may include:
using the edge positions corresponding to the respective frames of image as the contour information of the target object; or,
performing curve fitting on the edge positions corresponding to the respective frames of image to obtain at least one fitted curve, determining the positions of the points on the at least one fitted curve, and using the determined positions as the contour information of the target object.
Optionally, before acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object, the processor 902 may further be configured to perform object type identification on the target object, and, if the object type of the target object is a type to be identified, to execute the step of acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object; wherein the type to be identified includes: an unknown object type or a non-fixed-form object type.
Optionally, the processor 902 is further configured to: if the object type of the target object is a fixed-form object type, determine preset initial information corresponding to the object type of the target object as information to be used, the initial information indicating the initial position of each edge point; acquire an image collected by the mobile robot for the target object, and identify the relative distance between the mobile robot and the target object when the acquired image was collected; and adjust the initial position of each edge point indicated by the information to be used, based on the identified relative position, to obtain the adjusted position of each edge point as the contour information of the target object.
Optionally, the manner in which the mobile robot moves around the target object includes:
when the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the mobile robot performing a continuous circular motion around the target object; or,
when the camera orientation of the mobile robot is tied to the moving direction of the mobile robot, the mobile robot performing a multi-segment arc motion around the target object.
Optionally, after identifying the relative distance between the mobile robot and the target object when the frame of image was collected, the processor 902 is further configured to determine the height of the target object based on the relative distance between the mobile robot and the target object when the frame of image was collected, a second distance, and second intrinsic and extrinsic parameter information of the camera; wherein the second distance is: the pixel distance, in the vertical direction, between the top pixel of the target object in the frame of image and the center pixel of the frame of image.
Optionally, the second intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera and the vertical resolution of images collected by the camera;
the processor 902 determining the height of the target object based on the relative distance between the mobile robot and the target object when the frame of image was collected, the second distance, and the second intrinsic and extrinsic parameter information of the camera may include:
determining the height of the target object with the following formula:
x=h-d*tan(θ*(dv2_pixels/V))
wherein x is the height of the target object, θ is the vertical viewing angle, dv2_pixels is the second distance, V is the vertical resolution, d is the relative distance between the mobile robot and the target object when the frame of image was collected, and h is the optical center height of the camera.
Optionally, after acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object, the processor 902 is further configured to identify, for each acquired frame of image, the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected; wherein the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is: the line between the optical center of the camera and the outer side of the target object.
Optionally, on the basis of the embodiment shown in FIG. 9, as shown in FIG. 10, an embodiment of the present application also provides a mobile robot system that further includes:
a power module 903, configured to drive the mobile robot to move around the target object.
The power module 903 may be the locomotion components carried by the mobile robot, and may include components such as motors and wheels.
Optionally, when the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the power module 903 may drive the mobile robot to perform a continuous circular motion around the target object; or, when the camera orientation of the mobile robot is tied to the moving direction of the mobile robot, the power module 903 may drive the mobile robot to perform a multi-segment arc motion around the target object.
In the above solution provided by the embodiments of the present application, since the acquired multi-frame images are collected while the mobile robot moves around the target object, they are images of the target object from different orientations; the relative distance between the mobile robot and the target object when each frame of image is collected is the distance between the mobile robot and the contour edge of the target object at that moment, which can be combined with the position information of the mobile robot in the world coordinate system when the frame of image was collected to determine the contour information of the target object. It can be seen that, with this solution, the contour information of objects in the environment can be determined.
Corresponding to the object information determination method provided by the above embodiments of the present application, as shown in FIG. 11, an embodiment of the present application also provides an object information determination apparatus, including:
an image acquisition module 1101, configured to acquire multiple frames of images containing a target object that are collected while the mobile robot moves around the target object;
an information calculation module 1102, configured to identify, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected;
an information determination module 1103, configured to determine the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
Optionally, the information calculation module includes:
a first submodule, configured to determine, for each acquired frame of image, the pixel distance in the vertical direction between a first pixel and a second pixel in the frame of image as a first distance; wherein the first pixel is a bottom pixel of the target object and the second pixel is the center pixel;
a second submodule, configured to determine the relative distance between the mobile robot and the target object when the frame of image was collected, based on the first distance and first intrinsic and extrinsic parameter information of the camera in the mobile robot.
Optionally, the first intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera, the image size of images collected by the camera, and the optical center height of the camera;
the second submodule includes:
an angle determination unit, configured to determine a down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera; wherein the down-sight angle is the angle between the down line of sight and the optical axis of the camera, and the down line of sight is: the line between the optical center of the camera and the bottom of the target object;
a distance determination unit, configured to calculate the relative distance between the mobile robot and the target object when the frame of image was collected, based on the down-sight angle and the optical center height of the camera.
Optionally, the angle determination unit includes:
a ratio calculation subunit, configured to calculate the ratio of the first distance to the vertical resolution as a first ratio;
an angle determination subunit, configured to determine the down-sight angle based on the first ratio and the vertical viewing angle.
Optionally, the angle determination subunit is specifically configured to calculate the product of the first ratio and the vertical viewing angle as the down-sight angle; or to determine a target adjustment coefficient corresponding to a designated pixel in the frame of image according to a preset correspondence between pixels and adjustment coefficients, calculate the product of the adjustment coefficient and the first ratio as a second ratio, and calculate the product of the second ratio and the vertical viewing angle as the down-sight angle; wherein the pixels are pixels in images collected by the camera, and the designated pixel is a pixel determined from among the bottom pixels of the target object.
Optionally, the distance determination unit is specifically configured to calculate the relative distance between the mobile robot and the target object when the frame of image was collected with the following formula:
d=tan(90+k-m)*h
wherein d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the down-sight angle, and h is the optical center height of the camera.
Optionally, the information determination module includes:
a position determination submodule, configured to determine, for each frame of image, the position of a target edge point of the target object, based on the position information of the mobile robot in the world coordinate system and its relative distance to the target object when the frame of image was collected, as the edge position corresponding to the frame of image; wherein the target edge point is the edge point of the target object lying within the shooting range of the camera when the mobile robot collects the frame of image;
an information determination submodule, configured to determine the contour information of the target object based on the edge positions corresponding to the respective frames of image.
Optionally, the information determination submodule is specifically configured to use the edge positions corresponding to the respective frames of image as the contour information of the target object; or to perform curve fitting on the edge positions corresponding to the respective frames of image to obtain at least one fitted curve, determine the positions of the points on the at least one fitted curve, and use the determined positions as the contour information of the target object.
Optionally, the apparatus further includes: a type identification module, configured to perform object type identification on the target object before the image acquisition module acquires the multiple frames of images containing the target object collected while the mobile robot moves around the target object; and, if the object type of the target object is a type to be identified, to invoke the image acquisition module to execute the step of acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object; wherein the type to be identified includes: an unknown object type or a non-fixed-form object type.
Optionally, the type identification module is further configured to: if the object type of the target object is a fixed-form object type, determine preset initial information corresponding to the object type of the target object as information to be used; acquire an image collected by the mobile robot for the target object, and identify the relative distance between the mobile robot and the target object when the acquired image was collected; and adjust the initial position of each edge point indicated by the information to be used, based on the identified relative position, to obtain the adjusted position of each edge point as the contour information of the target object; wherein the initial information indicates the initial position of each edge point.
Optionally, the manner in which the mobile robot moves around the target object includes: when the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the mobile robot performing a continuous circular motion around the target object; or, when the camera orientation of the mobile robot is tied to the moving direction of the mobile robot, the mobile robot performing a multi-segment arc motion around the target object.
Optionally, the information calculation module is further configured to, after the relative distance between the mobile robot and the target object when the frame of image was collected is identified, determine the height of the target object based on the relative distance between the mobile robot and the target object when the frame of image was collected, a second distance, and second intrinsic and extrinsic parameter information of the camera; wherein the second distance is: the pixel distance, in the vertical direction, between the top pixel of the target object in the frame of image and the center pixel of the frame of image.
Optionally, the second intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera and the vertical resolution of images collected by the camera;
the information calculation module includes:
a height calculation submodule, configured to determine the height of the target object with the following formula:
x=h-d*tan(θ*(dv2_pixels/V))
wherein x is the height of the target object, θ is the vertical viewing angle, dv2_pixels is the second distance, V is the vertical resolution, d is the relative distance between the mobile robot and the target object when the frame of image was collected, and h is the optical center height of the camera.
Optionally, an angle recognition module is configured to identify, after the image acquisition module acquires the multiple frames of images containing the target object collected while the mobile robot moves around the target object, the horizontal deflection angle between the mobile robot and the target object at the time each acquired frame of image was collected; wherein the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is: the line between the optical center of the camera and the outer side of the target object.
Optionally, the angle recognition module includes:
a distance determination submodule, configured to determine the pixel distance in the horizontal direction between a third pixel and a fourth pixel in the frame of image as a third distance; wherein the third pixel is an outer-side pixel of the target object and the fourth pixel is the center pixel;
an angle determination submodule, configured to determine the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected, based on the third distance, the horizontal resolution of the frame of image, and the horizontal viewing angle of the camera.
Optionally, the angle determination submodule is specifically configured to calculate the ratio of the third distance to the horizontal resolution as a third ratio, and to determine, based on the third ratio and the horizontal viewing angle, the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected.
In the above solution provided by the embodiments of the present application, since the acquired multi-frame images are collected while the mobile robot moves around the target object, they are images of the target object from different orientations; the relative distance between the mobile robot and the target object when each frame of image is collected is the distance between the mobile robot and the contour edge of the target object at that moment, which can be combined with the position information of the mobile robot in the world coordinate system when the frame of image was collected to determine the contour information of the target object. It can be seen that, with this solution, the contour information of objects in the environment can be determined.
An embodiment of the present application also provides an electronic device, as shown in FIG. 12, including a processor 1201, a communication interface 1202, a memory 1203, and a communication bus 1204, wherein the processor 1201, the communication interface 1202, and the memory 1203 communicate with each other via the communication bus 1204,
the memory 1203 is configured to store a computer program;
the processor 1201 is configured to implement the steps of the object information determination method provided by the above embodiments of the present application when executing the program stored in the memory 1203.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM) and may also include a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, a computer-readable storage medium is also provided, in which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the above object information determination methods are implemented.
In yet another embodiment provided by the present application, a computer program product including instructions is also provided, which, when run on a computer, causes the computer to execute any one of the object information determination methods in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)).
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a related manner; for identical or similar parts between the embodiments, reference can be made to one another, and each embodiment focuses on its differences from the others. In particular, the embodiments of the mobile robot system, the apparatus, the electronic device, the computer-readable storage medium, and the computer program product are described relatively simply, since they are essentially similar to the method embodiments; for relevant parts, refer to the description of the method embodiments.
The above are only preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within its scope of protection.

Claims (18)

  1. An object information determination method, characterized in that the method includes:
    acquiring multiple frames of images containing a target object that are collected while a mobile robot moves around the target object;
    for each acquired frame of image, identifying the relative distance between the mobile robot and the target object when the frame of image was collected;
    determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
  2. The method according to claim 1, characterized in that identifying, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected includes:
    for each acquired frame of image, determining the pixel distance in the vertical direction between a first pixel and a second pixel in the frame of image as a first distance; wherein the first pixel is a bottom pixel of the target object and the second pixel is the center pixel;
    determining the relative distance between the mobile robot and the target object when the frame of image was collected, based on the first distance and first intrinsic and extrinsic parameter information of the camera in the mobile robot.
  3. The method according to claim 2, characterized in that the first intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera, the image size of the frame of image collected by the camera, and the optical center height of the camera;
    determining the relative distance between the mobile robot and the target object when the frame of image was collected, based on the first distance and the first intrinsic and extrinsic parameter information of the camera in the mobile robot, includes:
    determining a down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera; wherein the down-sight angle is the angle between the down line of sight and the optical axis of the camera, and the down line of sight is: the line between the optical center of the camera and the bottom of the target object;
    calculating the relative distance between the mobile robot and the target object when the frame of image was collected, based on the down-sight angle and the optical center height of the camera.
  4. The method according to claim 3, characterized in that determining the down-sight angle based on the first distance, the vertical resolution of the frame of image, and the vertical viewing angle of the camera includes:
    calculating the ratio of the first distance to the vertical resolution as a first ratio;
    determining the down-sight angle based on the first ratio and the vertical viewing angle.
  5. The method according to claim 4, characterized in that determining the down-sight angle based on the first ratio and the vertical viewing angle includes:
    calculating the product of the first ratio and the vertical viewing angle as the down-sight angle; or,
    determining a target adjustment coefficient corresponding to a designated pixel in the frame of image according to a preset correspondence between pixels and adjustment coefficients, calculating the product of the adjustment coefficient and the first ratio as a second ratio, and calculating the product of the second ratio and the vertical viewing angle as the down-sight angle; wherein the pixels are pixels in images collected by the camera, and the designated pixel is a pixel determined from among the bottom pixels of the target object.
  6. The method according to claim 3, characterized in that calculating the relative distance between the mobile robot and the target object when the frame of image was collected, based on the down-sight angle and the optical center height of the camera, includes:
    calculating the relative distance between the mobile robot and the target object when the frame of image was collected with the following formula:
    d=tan(90+k-m)*h
    wherein d is the relative distance, k is the vertical angle between the optical axis and the moving plane of the mobile robot, m is the down-sight angle, and h is the optical center height of the camera.
  7. The method according to any one of claims 1-6, characterized in that determining the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system includes:
    for each frame of image, determining the position of a target edge point of the target object, based on the position information of the mobile robot in the world coordinate system and its relative distance to the target object when the frame of image was collected, as the edge position corresponding to the frame of image; wherein the target edge point is the edge point of the target object lying within the shooting range of the camera when the mobile robot collects the frame of image;
    determining the contour information of the target object based on the edge positions corresponding to the respective frames of image.
  8. The method according to claim 7, characterized in that determining the contour information of the target object based on the edge positions corresponding to the respective frames of image includes:
    using the edge positions corresponding to the respective frames of image as the contour information of the target object; or,
    performing curve fitting on the edge positions corresponding to the respective frames of image to obtain at least one fitted curve, determining the positions of the points on the at least one fitted curve, and using the determined positions as the contour information of the target object.
  9. The method according to any one of claims 1-6, characterized in that before acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object, the method further includes:
    performing object type identification on the target object;
    if the object type of the target object is a type to be identified, executing the step of acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object; wherein the type to be identified includes: an unknown object type or a non-fixed-form object type.
  10. The method according to claim 9, characterized in that the method further includes:
    if the object type of the target object is a fixed-form object type, determining preset initial information corresponding to the object type of the target object as information to be used; wherein the initial information indicates the initial position of each edge point;
    acquiring an image collected by the mobile robot for the target object, and identifying the relative distance between the mobile robot and the target object when the acquired image was collected;
    adjusting the initial position of each edge point indicated by the information to be used, based on the identified relative position, to obtain the adjusted position of each edge point as the contour information of the target object.
  11. The method according to any one of claims 1-6, characterized in that the manner in which the mobile robot moves around the target object includes:
    when the camera orientation of the mobile robot is independent of the moving direction of the mobile robot, the mobile robot performing a continuous circular motion around the target object; or,
    when the camera orientation of the mobile robot is tied to the moving direction of the mobile robot, the mobile robot performing a multi-segment arc motion around the target object.
  12. The method according to any one of claims 1-6, characterized in that after identifying the relative distance between the mobile robot and the target object when the frame of image was collected, the method further includes:
    determining the height of the target object based on the relative distance between the mobile robot and the target object when the frame of image was collected, a second distance, and second intrinsic and extrinsic parameter information of the camera;
    wherein the second distance is: the pixel distance, in the vertical direction, between the top pixel of the target object in the frame of image and the center pixel of the frame of image.
  13. The method according to claim 12, characterized in that the second intrinsic and extrinsic parameter information includes: the vertical viewing angle of the camera and the vertical resolution of images collected by the camera;
    determining the height of the target object based on the relative distance between the mobile robot and the target object when the frame of image was collected, the second distance, and the second intrinsic and extrinsic parameter information of the camera includes:
    determining the height of the target object with the following formula:
    x=h-d*tan(θ*(dv2_pixels/V))
    wherein x is the height of the target object, θ is the vertical viewing angle, dv2_pixels is the second distance, V is the vertical resolution, d is the relative distance between the mobile robot and the target object when the frame of image was collected, and h is the optical center height of the camera.
  14. The method according to any one of claims 1-6, characterized in that after acquiring the multiple frames of images containing the target object collected while the mobile robot moves around the target object, the method further includes:
    for each acquired frame of image, identifying the horizontal deflection angle between the mobile robot and the target object when the frame of image was collected; wherein the horizontal deflection angle is the horizontal angle between the horizontal line of sight and the optical axis of the camera in the mobile robot, and the horizontal line of sight is: the line between the optical center of the camera and the outer side of the target object.
  15. A mobile robot system, characterized by including:
    an image sensor, configured to acquire multiple frames of images containing a target object that are collected while the mobile robot moves around the target object;
    a processor, configured to identify, for each acquired frame of image, the relative distance between the mobile robot and the target object when the frame of image was collected; and to determine the contour information of the target object based on the relative distance between the mobile robot and the target object when each frame of image was collected and the position information of the mobile robot in the world coordinate system.
  16. The mobile robot system according to claim 15, characterized by further including:
    a power module, configured to drive the mobile robot to move around the target object.
  17. An electronic device, characterized by including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other via the communication bus;
    the memory is configured to store a computer program;
    the processor is configured to implement the method steps of any one of claims 1-14 when executing the program stored in the memory.
  18. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-14.
PCT/CN2023/072179 2022-02-23 2023-01-13 Object information determination method, mobile robot system and electronic device WO2023160301A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210168135.3 2022-02-23
CN202210168135.3A CN114564014A (zh) Object information determination method, mobile robot system and electronic device

Publications (1)

Publication Number Publication Date
WO2023160301A1 (zh)

Family

ID=81713321

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072179 WO2023160301A1 (zh) Object information determination method, mobile robot system and electronic device 2022-02-23 2023-01-13

Country Status (2)

Country Link
CN (1) CN114564014A (zh)
WO (1) WO2023160301A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564014A (zh) 2022-02-23 2022-05-31 Object information determination method, mobile robot system and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683173A (zh) * 2016-12-22 2017-05-17 Method for increasing the density of a point cloud in three-dimensional reconstruction based on neighborhood block matching
CN108510515A (zh) * 2017-02-24 2018-09-07 Information processing apparatus, information processing method, control system, and article manufacturing method
CN108805940A (zh) * 2018-06-27 2018-11-13 Fast algorithm for a zoom camera to track and position during zooming
CN109544633A (zh) * 2017-09-22 2019-03-29 Target ranging method, apparatus and device
CN110310371A (zh) * 2019-05-27 2019-10-08 Method for constructing a three-dimensional object contour from vehicle-mounted monocular focus sequence images
CN112070782A (zh) * 2020-08-31 2020-12-11 Method and apparatus for recognizing scene contours, computer-readable medium, and electronic device
WO2021223124A1 (zh) * 2020-05-06 2021-11-11 Position information acquisition method, device, and storage medium
CN114564014A (zh) * 2022-02-23 2022-05-31 Object information determination method, mobile robot system and electronic device


Also Published As

Publication number Publication date
CN114564014A (zh) 2022-05-31

Similar Documents

Publication Publication Date Title
CN108765498B (zh) Monocular vision tracking method, apparatus and storage medium
TW202036480A (zh) Image positioning method and system thereof
US20220156954A1 (en) Stereo matching method, image processing chip and mobile vehicle
WO2021136386A1 (zh) Data processing method, terminal and server
JP7280385B2 (ja) Visual positioning method and related apparatus, device and computer-readable storage medium
CN110136207B (zh) Fisheye camera calibration system, method and apparatus, electronic device, and storage medium
US10529081B2 (en) Depth image processing method and depth image processing system
CN113052907B (zh) Positioning method for a mobile robot in a dynamic environment
WO2023160301A1 (zh) Object information determination method, mobile robot system and electronic device
US12067741B2 (en) Systems and methods of measuring an object in a scene of a captured image
US11514608B2 (en) Fisheye camera calibration system, method and electronic device
WO2023236508A1 (zh) Image stitching method and system based on a gigapixel array camera
CN109902675B (zh) Object pose acquisition method, and scene reconstruction method and apparatus
CN110120098A (zh) Scene scale estimation and augmented reality control method, apparatus and electronic device
TW202242716A (zh) Method, apparatus, device and storage medium for target matching
CN111383264A (zh) Positioning method, apparatus, terminal and computer storage medium
WO2021142843A1 (zh) Image scanning method and apparatus, device, and storage medium
CN110726971B (zh) Visible light positioning method, apparatus, terminal and storage medium
CN112102415A (zh) Depth camera extrinsic parameter calibration method, apparatus and device based on a calibration sphere
CN112446251A (zh) Image processing method and related apparatus
WO2022088613A1 (zh) Robot positioning method and apparatus, device, and storage medium
CN111353945B (zh) Fisheye image correction method, apparatus and storage medium
WO2022174603A1 (zh) Pose prediction method, pose prediction apparatus, and robot
WO2024021340A1 (zh) Robot following method and apparatus, robot, and computer-readable storage medium
CN110675445A (зh) Visual positioning method, apparatus and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23758926; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2023758926; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2023758926; Country of ref document: EP; Effective date: 20240923)