WO2023142678A1 - Projection position correction method, projection positioning method, control device, and robot - Google Patents

Projection position correction method, projection positioning method, control device, and robot

Info

Publication number
WO2023142678A1
Authority
WO
WIPO (PCT)
Prior art keywords
wall
projection
image
area
target area
Prior art date
Application number
PCT/CN2022/135943
Other languages
English (en)
French (fr)
Inventor
夹磊
唐剑
奉飞飞
Original Assignee
美的集团(上海)有限公司
美的集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 美的集团(上海)有限公司 and 美的集团股份有限公司
Publication of WO2023142678A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/3173: Constructional details thereof wherein the projection device is specially adapted for enhanced portability
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00: Manipulators not otherwise provided for
    • G: PHYSICS
    • G03: PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B: APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 21/00: Projectors or projection-type viewers; Accessories therefor
    • G03B 21/14: Details
    • G03B 21/142: Adjusting of projection optics

Definitions

  • The present disclosure relates to, but is not limited to, the field of robotics, and specifically to a robot-based projection position correction method, projection positioning method, control device, and robot.
  • Simultaneous localization and mapping (SLAM) means that a robot starts moving from an unknown position in an unknown environment, locates itself according to its pose and the map during movement, and incrementally builds the map based on its own positioning, thereby realizing autonomous positioning and navigation of the robot.
  • Some robot products are equipped with a projector, which can project the picture to be played on the wall.
  • When relying only on SLAM positioning and motion control to achieve projection positioning, it is difficult to obtain a good projection effect.
  • An embodiment of the present disclosure provides a projection position correction method, which is applied to a robot equipped with a projector, and the method includes:
  • Control the motion mechanism to travel to the first positioning point of projection, and control the image acquisition device to capture images of the wall;
  • the motion mechanism is controlled to correct the projection position according to the target area, and after correction, the projected picture generated by the projector projected onto the wall is located in the target area.
  • An embodiment of the present disclosure also provides a projection positioning method, which is applied to a robot equipped with a projector, and the method includes:
  • the projection position correction is performed according to the projection position correction method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a robot control device, including a processor and a memory storing a computer program, wherein, when the processor executes the computer program, the projection position correction method described in any embodiment of the present disclosure, or the projection positioning method described in any embodiment of the present disclosure, can be implemented.
  • An embodiment of the present disclosure also provides a robot, including a robot body, the robot body is provided with a motion mechanism, and also includes a projector, an image acquisition device and a control device arranged on the robot body, wherein:
  • the image acquisition device is configured to be able to acquire images of walls;
  • the projector is set to be able to project a picture to be played on the wall;
  • the control device is configured to be capable of executing the projection position correction method described in any embodiment of the present disclosure, or capable of executing the projection positioning method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a projection position correction device applied to a robot equipped with a projector, wherein the projection position correction device includes:
  • the control module is configured to control the motion mechanism to travel to the first positioning point of the projection, and control the image acquisition device to collect images on the wall;
  • the determination module is configured to determine whether there is a projectable area on the wall according to the image of the wall, and if there is a projectable area, to determine the target area of the projected picture on the wall;
  • the correction module is configured to control the motion mechanism to correct the projection position according to the target area, and after correction, the projected picture generated by the projector projected on the wall is located in the target area.
  • An embodiment of the present disclosure also provides a computer program product, including a computer program, wherein, when the computer program is executed by a processor, it can implement the projection position correction method described in any embodiment of the present disclosure, or the projection positioning method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a non-transitory computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the projection position correction method or the projection positioning method described in any embodiment of the present disclosure can be implemented.
  • FIG. 1 is a flowchart of a projection position correction method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of determining a target area of a projection screen according to an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of determining a root point and a normal vector according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart of a projection positioning method according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a robot control device according to an embodiment of the present disclosure.
  • FIG. 6 is a flowchart of a projection positioning process according to an embodiment of the present disclosure.
  • FIG. 7 is a block diagram of a projection correction device according to an embodiment of the disclosure.
  • Words such as "exemplary" or "for example" are used to mean an example, instance, or illustration. Any embodiment described in this disclosure as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments.
  • "And/or" in this document describes the relationship between associated objects and indicates that three relationships may exist. For example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • “A plurality” means two or more than two.
  • Words such as "first" and "second" are used to distinguish identical or similar items with essentially the same function and effect. Those skilled in the art will understand that such words do not limit quantity or execution order, and do not necessarily imply a difference.
  • Embodiments of the present disclosure relate to a robot provided with a projector.
  • The robot includes a base, a torso disposed above the base, a rotatable and telescopic neck disposed on the upper end of the torso, and a head disposed on the neck. Wheels are arranged under the base, a control device is arranged in the torso, and an image acquisition device and a projector are arranged in the head. The neck can extend, retract, and rotate under the control of the control device.
  • the robot is also provided with a drive mechanism for driving the wheels and neck movement, such as a motor, a reducer, etc., and these movement-related components together constitute the movement mechanism of the robot.
  • the driving of the robot, the lifting of the projector and the rotation of each direction can be realized through the motion mechanism.
  • The control device can control the image acquisition device to capture images, which are converted into digital signals by the image sensor and transmitted to the control device.
  • It can also select landmarks and plan paths through simultaneous localization and mapping, control the motion mechanism to drive to a predetermined location, adjust the spatial position of the image acquisition device and the projector (including adjustment of the rotational position), and control the projector to play the picture, and so on.
  • the image acquisition device includes a 3D lidar and a vision sensor (such as a monocular camera).
  • There are many ways to classify SLAM. By sensor, it can be divided into 2D/3D SLAM based on lidar, RGB-D SLAM based on depth cameras, SLAM based on vision sensors, and so on. A lidar SLAM program can fuse inertial-sensor information to handle 2D and 3D SLAM in a unified manner. Vision sensors include monocular cameras, binocular cameras, etc., and can be used both indoors and outdoors. By the way visual features are extracted, visual SLAM can be divided into feature-based methods and direct methods.
  • Although a robot equipped with a projector can be automatically positioned near a wall for projection through SLAM, the limited accuracy of SLAM positioning and motion control means the position and orientation cannot be determined precisely; angle and position errors arise, resulting in incorrect projection images.
  • For example, the projected picture may be blocked or may extend beyond the range of the wall, and the projection effect is then difficult to make satisfactory to users.
  • An embodiment of the present disclosure provides a projection position correction method, which is applied to a robot equipped with a projector. As shown in FIG. 1 , the method includes:
  • Step 110: control the motion mechanism to travel to the first positioning point of the projection, and control the image acquisition device to capture images of the wall;
  • Step 120: determine whether there is a projectable area on the wall according to the image of the wall; if there is a projectable area, perform step 130, and if there is no projectable area, perform step 140;
  • Step 130: determine the target area of the projected picture on the wall, and control the motion mechanism to correct the projection position according to the target area, so that after correction the projected picture generated by the projector on the wall is located in the target area; end;
  • Step 140: end the correction of the projection position.
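The steps above can be sketched as a minimal control loop. This is an illustrative Python sketch only; the `robot` interface and all of its method names (`travel_to`, `capture_wall_image`, `find_projectable_area`, and so on) are hypothetical, not names from the disclosure:

```python
def correct_projection_position(robot):
    """Sketch of steps 110-140; `robot` is a hypothetical interface."""
    robot.travel_to(robot.first_anchor_point())   # step 110: drive to first point
    image = robot.capture_wall_image()            # step 110: image the wall
    area = robot.find_projectable_area(image)     # step 120: projectable area?
    if area is None:
        return None                               # step 140: no area, end
    target = robot.choose_target_area(area)       # step 130: pick target area
    robot.correct_position(target)                # step 130: correct position
    return target
```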
  • The embodiment of the present disclosure can determine the target area of the projected picture based on the image of the wall collected at the first positioning point, and correct the projection position based on the target area, so that the projected picture is located in the target area, thereby achieving a good projection effect.
  • the determining whether there is a projectable area on the wall according to the image of the wall includes: determining the width of the wall according to the 3D image of the wall. If the width of the wall is smaller than the width required for projection, it is determined that there is no projectable area on the wall.
  • the determining whether there is a projectable area on the wall according to the image of the wall further includes:
  • the texture image of the wall can be generated by controlling the visual sensor to take pictures of the wall, and the visual sensor can be a monocular camera, a binocular camera, and the like.
  • the texture image may be an RGB image, or an image containing color features in other formats such as YUV.
  • The radar point cloud and vision are fused to perform perception and motion control: the radar point cloud information is used as input to detect and recognize the wall surface, vision is combined to make a secondary judgment, and the two are fused to achieve positioning, so that a suitable wall location for projection can be found.
  • The point cloud image of the wall can be generated by controlling the 3D lidar to scan the wall; the point cloud image can be processed by the adjacent-point clustering algorithm to extract the contour lines on both sides of the wall surface, and the width of the wall surface is determined from the distance between the two contour lines.
  • a depth camera is used to collect a depth image of the wall, and the contour lines on both sides of the wall are identified according to the depth difference, and then the width of the wall is calculated.
  • Building modeling or vector information extraction can be performed based on laser point cloud images, and building surface and ridge information can be quickly identified.
  • building faces and ridges can be extracted based on the adjacent point clustering algorithm.
  • Although texture images collected by vision sensors such as monocular and binocular cameras can also be used to recognize the wall contour and calculate the wall width, texture images are not sensitive to the wall contour, so the calculated width has poor precision.
  • The point cloud image generated by 3D lidar scanning is a geometric image, and the points on the two sides of a wall contour line usually have a large depth difference, so algorithms such as adjacent-point clustering can easily and accurately distinguish the points on the two sides of the contour line, allowing the wall contour to be accurately extracted.
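The adjacent-point clustering idea can be illustrated on a single horizontal lidar scanline. This is a simplified sketch: the `depth_jump` threshold, the `(x, depth)` input format, and the "widest segment" heuristic are assumptions for illustration, not details from the disclosure:

```python
import numpy as np

def wall_width_from_scanline(points, depth_jump=0.3):
    """Estimate wall width from one horizontal lidar scanline.

    points: (N, 2) array of (x, depth) samples ordered along the scan.
    A depth discontinuity larger than depth_jump (metres, assumed value)
    marks a contour edge; the wall is taken as the widest segment.
    """
    pts = np.asarray(points, dtype=float)
    # Adjacent-point clustering: split the scanline wherever the depth jumps.
    breaks = np.where(np.abs(np.diff(pts[:, 1])) > depth_jump)[0] + 1
    segments = np.split(pts, breaks)
    # The wall is assumed to be the widest segment in x.
    wall = max(segments, key=lambda s: s[-1, 0] - s[0, 0])
    return wall[-1, 0] - wall[0, 0]
```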
  • The occluders on the wall may be items placed in front of the wall, such as bookshelves, tables and chairs, or electrical equipment, or they may be attached to the wall, such as a painting, hanging decorations, or stains on the wall. These occluders do not cause obvious changes in the geometric shape of the wall, but they seriously affect the viewing effect of the projection.
  • Therefore, it is not easy to identify these occluders from point clouds generated by 3D lidar.
  • These occluders usually have a large difference in color from the wall surface, and it is easy to identify these occluders and determine the shape, size and position of these occluders through the texture image.
  • If an item on the wall differs clearly in color from a wall surface suitable for projection, this embodiment regards it as an occluder, and such a wall area is not suitable for projection. If there is white wallpaper, paper, or the like on the wall, this embodiment may not regard it as an occluder, since such items do not affect viewing. In some cases a white curtain specially used for projection hangs on the wall; the curtain is similar in shape and color to a wall suitable for projection, so the embodiment of the present disclosure identifies it as a projectable area on the wall rather than as an occluder. In this embodiment, it is therefore appropriate to identify occluders on the wall through the color features of the texture image.
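Color-based occluder identification might be sketched as follows; the median-color heuristic and the `color_tol` threshold are illustrative assumptions, not the disclosed algorithm:

```python
import numpy as np

def occluder_mask(rgb, color_tol=40.0):
    """Flag pixels whose colour differs strongly from the dominant wall colour.

    rgb: (H, W, 3) uint8 texture image. The per-channel median is taken as
    the wall colour (occluders assumed to be a minority of pixels);
    color_tol is an illustrative threshold in 8-bit colour units.
    """
    img = rgb.astype(float)
    wall_color = np.median(img.reshape(-1, 3), axis=0)
    dist = np.linalg.norm(img - wall_color, axis=2)
    return dist > color_tol  # True where an occluder is suspected
```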
  • This embodiment sets a preset projected picture, which can be the minimum projected picture, that is, the smallest picture size the user can tolerate, or a picture of the preferred size, or a picture of another set size. If the unoccluded area cannot accommodate the preset projected picture, projection on the current wall is abandoned.
  • The preset projected picture can be the system default; since different users have different viewing habits, this parameter can also be configured and modified by user input. It should be noted that the preset projected picture in this embodiment is not necessarily the minimum projected picture; it can also be set to the preferred picture size. If no unoccluded area that can accommodate the preferred picture size can be found on a certain wall, the robot can leave the current wall and try another wall.
  • the required width of the projection screen here may be the width of the minimum projection screen, or the width of the preferred screen size, or be between the width of the minimum projection screen and the width of the preferred screen size. Embodiments of the present disclosure do not limit this.
  • The tolerance width is set because there is always some deviation in projection; the margin prevents the picture from being projected beyond the wall. It can be set according to the projection accuracy of the projector used, or set to a fixed value such as 10 cm to 50 cm; the tolerance width can also be configured and modified by the user.
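The width check with a tolerance margin reduces to a one-line comparison. The 0.3 m default below is an assumed value within the 10 cm to 50 cm range mentioned above:

```python
def wall_has_room(wall_width, picture_width, tolerance=0.3):
    """The wall qualifies only if the picture plus a safety margin on each
    side fits. All dimensions in metres; the 0.3 m default tolerance is an
    assumed value within the range given in the text."""
    return wall_width >= picture_width + 2 * tolerance
```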
  • the projection screen is rectangular, and the determining the target area of the projection screen on the wall includes:
  • FIG. 2 is a schematic diagram of determining a target area of a projection screen according to an embodiment of the present disclosure.
  • the width of the wall 101 satisfies the requirement of the width required for projection.
  • Since the projected picture in this embodiment is rectangular, to judge whether the unoccluded area can accommodate the minimum projected picture, one can first draw the largest rectangle 103 inside the unoccluded area. If rectangle 103 can accommodate the minimum projected picture, the wall has an unoccluded area that can accommodate the minimum projected picture.
  • The size of the projected picture can be selected according to the set preferred picture size: on the premise that it fits in the unoccluded area, choose the size closest to the preferred size. For example, if the preferred picture size (width × height) is 2.5m × 1.5m and the largest rectangle 103 that can be drawn in the unoccluded area is 3.5m × 2m, the size of the target area can be determined to be 2.5m × 1.5m, such as target area 107 in FIG. 2. If rectangle 103 cannot accommodate the preferred picture size, for example if the rectangle is 2.2m × 1.8m, then the size of the target area 107 of the projected picture can be determined to be 2.2m × 1.32m, scaled down at the preferred aspect ratio.
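The sizing rule in this example, scaling the preferred picture down to fit the largest unoccluded rectangle while keeping its aspect ratio, can be captured in a few lines; the function name and defaults are illustrative:

```python
def fit_picture(rect_w, rect_h, pref_w=2.5, pref_h=1.5):
    """Scale the preferred picture, kept at its aspect ratio, so that it
    fits the largest unoccluded rectangle; never enlarge it beyond the
    preferred size. The 2.5 m x 1.5 m default is the example from the
    text; all dimensions in metres."""
    scale = min(rect_w / pref_w, rect_h / pref_h, 1.0)
    return pref_w * scale, pref_h * scale
```

With the numbers from the example, a 3.5m × 2m rectangle keeps the full 2.5m × 1.5m picture, while a 2.2m × 1.8m rectangle yields a 2.2m × 1.32m picture.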
  • A lower limit position of the projected picture can also be set, that is, the minimum height from the bottom of the projected picture to the ground; if the picture is too low, it may affect the viewing experience.
  • When determining the position of the projected picture in the height direction of the wall, ensure that the lower edge of the projected picture is not lower than the lower limit position.
  • the way to determine whether there is a projectable area is to check whether the unoccluded area can accommodate the preset projected image, and then locate the target area in the unoccluded area.
  • a different judging method is adopted.
  • In this embodiment, when the width of the wall is greater than or equal to the width required for projection, one or more preselected areas are determined; occluders are then identified from the texture image of the wall. If every preselected area contains an occluder, it is determined that there is no projectable area on the wall; if at least one preselected area contains no occluder, it is determined that there is a projectable area on the wall. When it is determined that there is a projectable area, one preselected area is chosen as the target area.
  • If there are multiple preselected areas, priorities can be preset for them, and the preselected area with the highest priority is preferred as the target area; the choice can also be made randomly.
  • the center point of the preselected area may be set on the center line in the width direction of the wall
  • the width of the preselected area may be set as a preset width
  • the height may be set as a preset height.
  • Multiple preselected areas can be obtained by varying the preset width and height. That is to say, this embodiment first determines one or more preselected areas and then checks whether there are occluders in them, whereas the foregoing embodiments search for a projectable area within the unoccluded region. Both approaches can be used to determine the target area.
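Generating preselected areas by varying the preset width and height might look like the following sketch; the size presets and the dictionary layout are assumptions for illustration:

```python
def preselected_areas(wall_width, sizes=((2.5, 1.5), (2.0, 1.2))):
    """Generate candidate preselected areas centred on the wall's width
    centre line, one per preset (width, height) pair. The size presets
    (metres) are illustrative assumptions."""
    cx = wall_width / 2.0
    return [
        {"center_x": cx, "width": w, "height": h}
        for (w, h) in sizes
        if w <= wall_width  # a candidate wider than the wall is discarded
    ]
```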
  • controlling the motion mechanism to perform position correction according to the target area includes:
  • A second positioning point for projection is determined according to the coordinates of the center point and the projection distance, and the motion mechanism is controlled to travel to the second positioning point.
  • The determination of the root point and the normal vector is not limited to after the target area is determined; they may also be determined from a preselected area before the target area is determined. If the determined target area is not that preselected area, the root point and normal vector can be re-determined.
  • FIG 3 shows the relative positions of the wall and the robot.
  • the line segment BD in the figure corresponds to the width of the target area and its position on the wall.
  • the center point of the target area is represented by A
  • the normal vector of the wall drawn from point A is represented by vector F.
  • the projection triangle when the projector projects along the normal vector F is represented by BCD.
  • The projection distance is the vertical distance from the projector lens to the wall.
  • By changing the projection distance, the size of the projected picture on the wall changes; the projection distance at which the calculated projected picture lies in the target area (that is, the distance at which the picture coincides with the target area or just covers it) is marked as H1 in the figure.
  • The second positioning point, that is, point C in the figure, can then be easily calculated.
  • the second positioning point is obtained by direct positioning relative to the coordinates of the wall surface, which has high precision and can achieve the most expected viewing effect.
  • the second positioning point can be expressed by the coordinates of the vertical projection of point C onto the ground.
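Computing point C from the center point A, the normal vector F, and the projection distance H1 is a single vector operation. The coordinate conventions below (z up, ground coordinates as (x, y), F pointing from the wall into the room) are assumptions for illustration:

```python
import numpy as np

def second_anchor_point(center_A, normal_F, proj_distance_H1):
    """Compute C = A + H1 * F in wall-relative coordinates and return the
    vertical projection of C onto the ground.

    center_A: 3D centre point of the target area on the wall.
    normal_F: wall normal at A (need not be unit length).
    proj_distance_H1: distance at which the picture matches the target area.
    """
    A = np.asarray(center_A, dtype=float)
    F = np.asarray(normal_F, dtype=float)
    C = A + proj_distance_H1 * (F / np.linalg.norm(F))
    return C[0], C[1]  # drop the height: driving target on the ground
```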
  • the coordinates calculated according to the coordinates of the center point and the projection distance can be expressed in relative coordinates.
  • the robot can establish local coordinates based on the wall as the reference system to determine the second positioning point.
  • the coordinates of the two anchor points are used for path planning.
  • When the motion mechanism is controlled to travel, the first positioning point can be used as the starting point and the second positioning point as the end point to perform local path planning, so that the robot travels from the first positioning point to the second positioning point.
  • the local path planning only depends on the relative position relationship of the walls, not on the global map, and requires less computing power.
  • The position of the projector is adjusted so that the projection center line of the projector coincides with the normal vector, or so that the angle between the projection center line and the normal vector is minimized while the center line intersects the wall at the center point.
  • the projection centerline here can be represented by the axis of the projection lens of the projector.
  • The robot in the embodiment of the present disclosure can adjust the height of the projector through the motion mechanism. Within the allowable height range of the projector, the projection center line can be made to coincide with the normal vector, so that a horizontally and vertically aligned picture is projected, achieving the best viewing effect.
  • Otherwise, the projector can be adjusted to a position where the angle between the projection center line and the normal vector is minimized while the center line intersects the wall at the center point of the target area; this position lies in the vertical plane containing the normal vector.
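The height-then-pitch adjustment can be sketched as follows: clamping the lens height to its travel range and then pitching toward the center point yields the minimum angle with the normal. The interface and units are illustrative assumptions:

```python
import math

def projector_pitch(center_height, lens_height_range, distance):
    """Clamp the lens height to the projector's travel range, then pitch
    toward the target-area centre point; if the centre height is reachable
    the pitch is zero and the centre line coincides with the normal.
    Units are metres and radians."""
    lo, hi = lens_height_range
    lens_h = min(max(center_height, lo), hi)          # reachable lens height
    pitch = math.atan2(center_height - lens_h, distance)  # tilt toward centre
    return lens_h, pitch
```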
  • the position adjustment of the projector in this paper includes the attitude adjustment of the projector, and the attitude adjustment is also the adjustment of the rotation position, which can be regarded as a kind of position adjustment.
  • The above-mentioned embodiments of the present disclosure can correct the robot's own position relative to a blank wall and compensate for the angle and position errors caused by positioning and navigation, so that the projector on the robot can project a horizontally and vertically aligned picture with the best picture quality.
  • An embodiment of the present disclosure provides a projection positioning method, which is applied to a robot equipped with a projector, as shown in FIG. 4 , including:
  • Step 210 collecting an image of the current environment, and identifying at least one wall according to the image;
  • Step 220 use the identified wall as a road sign for path planning, and determine the first positioning point for projection;
  • Step 230 perform projection position correction according to the projection position correction method described in any embodiment of the present disclosure.
  • The above embodiments of the present disclosure use SLAM to perform preliminary positioning of the projection position, then collect images of the wall at the preliminary positioning point, determine the target area of the projected picture based on the images, and then correct the projection position, realizing precise control of the projection position so that a good projection effect can be obtained.
  • The method further includes: when it is determined that there is no projectable area on the wall, using another identified wall as a landmark for path planning, re-determining the first positioning point for projection, and performing the projection position correction again.
  • When selecting a landmark, a vision sensor can be used, since visual images carry relatively rich information.
  • the environment can be semantically annotated, and the wall can be recognized as a landmark on the image.
  • laser scanning can also be used to identify the wall.
  • the point cloud image of the wall has significant geometric features, and a large area of flat surface will appear. Through semantic recognition, the wall can be segmented in the point cloud as a landmark.
  • the first positioning point needs to be selected when planning the path.
  • The first positioning point can be selected directly in front of the center line of the wall's width, at a set distance from the wall. However, this preliminary positioning has limited accuracy due to viewing angle, distance, and other factors, and it is difficult to judge accurately whether there is an occluder on the wall.
  • Motion control at this stage also has accuracy errors because of the long distance, so it is difficult to achieve a good viewing effect. Therefore, this embodiment, combined with the position correction method of the present disclosure, can perform precise positioning and judgment based on images collected at close range, select a suitable target area for projection, and also achieve precise positioning by fusing the radar point cloud with vision, giving users a good movie-watching experience.
  • The above-mentioned embodiments of the present disclosure can correct the robot's own position relative to a blank wall and compensate for the angle and position errors caused by positioning and navigation, so that the projector on the robot can project a horizontally and vertically aligned picture with a good on-screen appearance.
  • An embodiment of the present disclosure also provides a projection position correction device, which is applied to a robot equipped with a projector. As shown in FIG. 7 , the projection position correction device includes:
  • the control module 70 is configured to control the motion mechanism to travel to the first positioning point of projection, and control the image acquisition device to acquire images on the wall;
  • the determining module 80 is configured to determine whether there is a projectable area on the wall according to the image of the wall, and if there is a projectable area, determine the target area of the projected picture on the wall;
  • the correction module 90 is configured to control the motion mechanism to correct the projection position according to the target area, and after correction, the projection picture generated by the projector projecting onto the wall is located in the target area.
  • An embodiment of the present disclosure also provides a robot control device, as shown in FIG. 5 , including a processor 60 and a memory 50 storing a computer program, wherein, when the processor 60 executes the computer program, it can realize the The projection position correction method described in any embodiment is disclosed, or the projection positioning method described in any embodiment of the present disclosure can be implemented.
  • There may be one or more processors in this embodiment; they may be general-purpose processors, including a central processing unit (CPU), a network processor (NP), etc., or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • An embodiment of the present disclosure also provides a robot, including a robot body, the robot body is provided with a motion mechanism, and the robot also includes a projector, an image acquisition device and a control device arranged on the robot body, wherein:
  • the image acquisition device is configured to be able to acquire images of walls;
  • the projector is set to be able to project a picture to be played on the wall;
  • the control device is configured to be capable of executing the projection position correction method described in any embodiment of the present disclosure, or capable of executing the projection positioning method described in any embodiment of the present disclosure.
  • the robot in the above-mentioned embodiments of the present disclosure can correct the projection position, realize precise positioning, and obtain a good projection effect.
  • the image acquisition device includes: a visual sensor configured to acquire a texture image of a wall; and a 3D laser radar configured to scan the wall to generate a point cloud image.
  • the robot body of this embodiment may include a base, a torso, a neck and a head. However, it is not limited to this; for example, the base can be replaced with legs, hands can be added, or the torso and neck can be integrated into one part, and so on.
  • the present disclosure does not limit the structure of the robot body, as long as it can carry an image acquisition device, a projector and a control device, and can drive and adjust the position of the projector.
  • the above-mentioned projector includes any device or apparatus capable of projecting pictures, and is not limited to a specific shape or structure. Besides the head of the robot, the projector can also be installed on other parts, such as the hands or torso.
  • An embodiment of the present disclosure also provides a projection positioning method, which realizes accurate projection position adjustment by performing relative position correction on preliminary positioning points.
  • a projection positioning process of this embodiment includes:
  • Step 310, perform preliminary positioning according to landmarks, and drive to the first positioning point;
  • Step 320, determine the width of the wall through lidar scanning;
  • in this step, the 3D lidar can be controlled to scan the wall to obtain a point cloud image. The point cloud is then linearly clustered using the adjacent point clustering algorithm to distinguish the linear wall surface and determine its width.
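As a rough illustration of the wall-width extraction described above, the sketch below groups an ordered 2D lidar scan into clusters by adjacent-point distance and measures the extent of the largest linear cluster. This is a minimal sketch, not the patent's implementation; the distance threshold and the synthetic scan data are assumptions.

```python
import math

def cluster_adjacent_points(points, gap=0.2):
    """Group consecutive scan points into clusters; a jump larger
    than `gap` (metres, an assumed threshold) starts a new cluster."""
    clusters, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) <= gap:
            current.append(q)
        else:
            clusters.append(current)
            current = [q]
    clusters.append(current)
    return clusters

def wall_width(points, gap=0.2):
    """Width of the largest cluster, taken as the candidate wall."""
    best = max(cluster_adjacent_points(points, gap), key=len)
    return math.dist(best[0], best[-1])

# A flat wall from x=0 to x=3 at y=2, plus a separate object further away.
scan = [(x / 10, 2.0) for x in range(31)] + [(5.0, 3.5), (5.1, 3.5)]
print(round(wall_width(scan), 2))  # → 3.0
```

A real system would additionally fit a line to each cluster to reject non-planar clutter, but the distance-gap grouping above captures the core of adjacent-point clustering.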
  • Step 330, judge whether the width of the wall meets the projection requirement; if so, execute step 340; if not, end;
  • in this step, if the wall width D2 is greater than the width required for projection (D1+2d), where d is the tolerance width, the wall width is judged to meet the projection requirement.
  • Step 340, select the base point and extend the normal vector to determine the projection distance;
  • in this step, the position at (D1+2d)/2 can be taken as the base point (0,0), from which the normal vector is extended; according to the geometry of the projection triangle, a projected picture of width D2 corresponds to a projection distance of H1. From the coordinates of the base point and the projection distance, the corrected projection point, that is, the relative coordinates of the second positioning point, can be obtained.
  • the aforementioned base point may be set as the center point of the preselected area of the projected image on the wall.
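The projection-triangle relationship of step 340 can be sketched numerically. Assuming a projector throw ratio T (projection distance divided by picture width — a hypothetical parameter, not given in the source), the distance H1 producing a picture of the required width, and the second positioning point reached by extending the wall's unit normal from the base point, follow directly:

```python
def projection_distance(picture_width, throw_ratio):
    """H1 = throw ratio x picture width (similar triangles)."""
    return throw_ratio * picture_width

def second_positioning_point(base_point, normal, h1):
    """Move distance h1 from the base point along the wall's unit
    normal vector to obtain the corrected projection point."""
    bx, by = base_point
    nx, ny = normal
    return (bx + nx * h1, by + ny * h1)

D1, d = 2.0, 0.25          # picture width and tolerance width (assumed values)
D2 = D1 + 2 * d            # width required on the wall
H1 = projection_distance(D2, throw_ratio=2.0)
print(second_positioning_point((0.0, 0.0), (0.0, 1.0), H1))  # → (0.0, 5.0)
```

The coordinates are relative to the wall frame with the base point as origin, matching the (0,0) convention used in the step above.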
  • Step 350, judge from the image collected by the visual sensor whether there is an occluder on the wall that affects viewing of the projected picture; if not, execute step 360; if so, end;
  • in this step, the RGB camera arranged on the robot's head can capture a texture image of the wall, and the vertical facade of the wall can be filtered to check whether hanging decorations or other objects would affect viewing of the projected picture. This secondary confirmation avoids an occluder degrading the projection effect.
  • if an occluder is found in the preselected area in this step, an area without occluders can be reframed as the target area of the projected picture, and the base point and normal vector re-determined from that target area. That is, this embodiment can first determine the preselected area of the projected picture and then detect whether it contains an occluder; if it does, another unoccluded area suitable for projection is sought, and if one can be found it is likewise concluded that there is no occluder affecting viewing of the projected picture.
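The secondary visual confirmation can be illustrated with a toy grayscale mask: wall pixels are bright, occluder pixels dark, and a preselected rectangle is accepted only if it contains no occluder pixels. The brightness threshold and the image are illustrative assumptions, not the patent's actual detector.

```python
def region_unoccluded(image, rect, wall_threshold=200):
    """True if every pixel of rect (x, y, w, h) in the grayscale
    image (a list of rows) is at least as bright as the wall threshold."""
    x, y, w, h = rect
    return all(
        image[row][col] >= wall_threshold
        for row in range(y, y + h)
        for col in range(x, x + w)
    )

# 6x8 toy image: bright wall (255) with a dark picture frame (40) on the left.
img = [[255] * 8 for _ in range(6)]
for row in range(1, 4):
    for col in range(1, 3):
        img[row][col] = 40

print(region_unoccluded(img, (1, 1, 2, 3)))  # → False (covers the frame)
print(region_unoccluded(img, (4, 1, 3, 4)))  # → True  (clear wall area)
```

Reframing the target area then amounts to sliding the rectangle until `region_unoccluded` returns True, which mirrors the search for an unoccluded area described above.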
  • this step can also be performed before step 340. If a target area can be determined, the processing in step 340 is then performed, that is, the base point is selected and the normal vector is extended to determine the projection distance.
  • Step 360 perform path planning according to the base point and the normal vector, and realize projection position correction.
  • motion control can be performed so that the robot travels to the second positioning point determined by the normal vector and the projection distance, and the pose adjustment is performed so that the projected image is located in the target area on the wall.
  • the path planning in this step can use the base point as the origin of relative coordinates; it does not rely on an absolute map, but only on real-time point cloud information for relative target judgment and relative pose adjustment, thereby achieving position self-correction.
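Because the base point serves as the origin of a wall-relative frame, the local plan from the first positioning point to the second reduces to a heading and a straight-line distance, with no global map involved. A minimal sketch (all coordinates and the two example points are assumptions, expressed in the wall frame):

```python
import math

def local_plan(current, target):
    """Heading (radians) and straight-line distance from the robot's
    current position to the target, both in wall-relative coordinates."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

# First positioning point at (1.0, 4.0); corrected second point at (0.0, 3.0).
heading, distance = local_plan((1.0, 4.0), (0.0, 3.0))
print(round(math.degrees(heading)), round(distance, 3))  # → -135 1.414
```

A real controller would follow this with the pose adjustment described above (aligning the projection center line with the wall normal), but the relative-frame computation itself needs only the two points.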
  • in the above steps, after this projection positioning ends, the next landmark information can be confirmed and the projection positioning process started again.
  • This embodiment provides a self-correction control strategy for the projected picture based on the fusion of radar point cloud and vision.
  • the point cloud image is obtained through lidar, and the wall width is identified with the adjacent point clustering algorithm, from which the base point and normal vector are determined. The preselected area is confirmed a second time through vision, and accurate relative pose adjustment is achieved through stereoscopic inspection. By fusing the radar point cloud with monocular visual information, accurate positioning can be performed and the best projection point on the corresponding wall found, so that the projected picture is well aligned, improving the user experience.
  • This embodiment can find the best projection point on the wall, so that the width and height of the projected interface achieve the best viewing experience, and compensates for the pose error caused by positioning and movement; after the secondary relative position correction, accurate relative pose adjustment can be confirmed.
  • moreover, it does not depend on a global map, but only on the relative position of the wall for local path planning, which requires less computing power.
  • An embodiment of the present disclosure also provides a computer program product, including a computer program, wherein, when the computer program is executed by a processor, the projection position correction method described in any embodiment of the present disclosure, or the projection positioning method described in any embodiment of the present disclosure, can be implemented.
  • An embodiment of the present disclosure also provides a non-transitory computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is executed by a processor, the projection position correction method or the projection positioning method described in any embodiment of the present disclosure can be implemented.
  • Computer-readable media may include computer-readable storage media, which correspond to tangible media such as data storage media, or communication media, including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol.
  • a computer-readable medium generally may correspond to a non-transitory tangible computer-readable storage medium or a communication medium such as a signal or carrier wave.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure.
  • a computer program product may comprise a computer readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk or other magnetic storage, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection may also properly be termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
  • as used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec.
  • the techniques may be fully implemented in one or more circuits or logic elements.
  • the technical solutions of the embodiments of the present disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chipset).
  • Various components, modules, or units are described in the disclosed embodiments to emphasize functional aspects of devices configured to perform the described techniques, but do not necessarily require realization by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit or provided by a collection of interoperable hardware units (comprising one or more processors as described above) in combination with suitable software and/or firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • General Physics & Mathematics (AREA)
  • Projection Apparatus (AREA)

Abstract

A projection position correction method, a projection positioning method, a control device, and a robot. After the motion mechanism is controlled to drive to a first positioning point for projection, the image acquisition device is controlled to collect an image of the wall (110); whether the wall has a projectable area is determined from the image of the wall (120); if a projectable area exists, a target area of the projected picture on the wall is determined; the motion mechanism is controlled according to the target area to correct the projection position, and after correction the picture projected by the projector onto the wall lies within the target area.

Description

投影位置修正方法、投影定位方法及控制装置、机器人
交叉引用
本申请要求在2022年01月27日提交中国专利局、申请号为202210100253.0、名称为“投影位置修正方法、投影定位方法及控制装置、机器人”的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本公开涉及但不限于机器人领域,具体涉及一种基于机器人实现的投影位置修正方法、投影定位方法及控制装置、机器人。
背景技术
即时定位与地图构建(simultaneous localization and mapping,简称SLAM),是机器人在未知环境中从一个未知位置开始移动,在移动过程中根据位置和地图进行自身定位,同时在自身定位的基础上建造增量式地图,实现机器人的自主定位和导航。有些机器人产品设置有投影仪,可以向墙面投射要播放的画面。但机器人利用SLAM定位及运动控制来实现投影定位时,难以得到良好的投影效果。
发明概述
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。
本公开一实施例提供了一种投影位置修正方法,应用于设置有投影仪的机器人,所述方法包括:
控制运动机构行驶到投影的第一定位点,控制图像采集装置采集墙面的图像;
根据所述墙面的图像确定所述墙面是否存在可投影区域,在存在可投影区域的情况下,确定投影画面在所述墙面上的目标区域;
根据所述目标区域控制所述运动机构进行投影位置修正,修正后所述投影仪向所述墙面投影生成的投影画面位于所述目标区域。
本公开一实施例还提供了一种投影定位方法,应用于设置有投影仪的机器人,所述方法包括:
采集当前环境的图像,根据所述图像识别出至少一个墙面;
将识别出的一个墙面作为路标进行路径规划,确定投影的第一定位点;
按照本公开任一实施例所述的投影位置修正方法进行投影位置修正。
本公开一实施例还提供了一种机器人控制装置,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如本公开任一实施例所述的投影位置修正方法,或者能够实现如本公开任一实施例所述的投影定位方法。
本公开一实施例还提供了一种机器人,包括机器人本体,所述机器人本体设置有运动机构,还包括设置在机器人本体上的投影仪、图像采集装置和控制装置,其中:
所述图像采集装置设置为能够采集墙面的图像;
所述投影仪设置为能够向墙面投影要播放的画面;
所述控制装置设置为能够执行如本公开任一实施例所述的投影位置修正方法,或者能够执行如本公开任一实施例所述的投影定位方法。
本公开一实施例还提供了一种投影位置修正装置,应用于设置有投影仪的机器人,其中,所述投影位置修正装置包括:
控制模块,设置为控制运动机构行驶到投影的第一定位点,及控制图像采集装置采集墙面的图像;
确定模块,设置为根据所述墙面的图像确定所述墙面是否存在可投影区域,及在存在可投影区域的情况下,确定投影画面在所述墙面上的目标区域;
修正模块,设置为根据所述目标区域控制所述运动机构进行投影位置修正,修正后所述投影仪向所述墙面投影生成的投影画面位于所述目标区域。
本公开一实施例还提供了一种计算机程序产品,包括计算机程序,其中, 所述计算机程序被处理器执行时能够实现如本公开任一实施例所述的投影位置修正方法,或者能够实现如本公开任一实施例所述的投影定位方法。
本公开一实施例还提供了一种非瞬态计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如本公开任一实施例所述的投影位置修正方法或投影定位方法。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
附图用来提供对本公开实施例的理解,并且构成说明书的一部分,与本公开实施例一起用于解释本公开的技术方案,并不构成对本公开技术方案的限制。
图1是本公开一实施例投影位置修正方法的流程图;
图2是本公开一实施例确定投影画面目标区域的示意图;
图3是本公开一实施例确定根基点和法向量的示意图;
图4是本公开一实施例投影定位方法的流程图;
图5是本公开一实施例机器人控制装置的结构示意图;
图6是本公开一实施例的一次投影定位过程的流程图;
图7是本公开一实施例投影修正装置的模块图。
详述
本公开描述了多个实施例,但是该描述是示例性的,而不是限制性的,并且对于本领域的普通技术人员来说显而易见的是,在本公开所描述的实施例包含的范围内可以有更多的实施例和实现方案。
本公开的描述中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本公开中被描述为“示例性的”或者“例如”的任何实施例不应被解释为比其他实施例更优选或更具优势。本文中的“和/或”是对关联对象的关联关系的一种描述,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。“多个”是指 两个或多于两个。另外,为了便于清楚描述本公开实施例的技术方案,使用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
在描述具有代表性的示例性实施例时,说明书可能已经将方法和/或过程呈现为特定的步骤序列。然而,在该方法或过程不依赖于本文所述步骤的特定顺序的程度上,该方法或过程不应限于所述的特定顺序的步骤。如本领域普通技术人员将理解的,其它的步骤顺序也是可能的。因此,说明书中阐述的步骤的特定顺序不应被解释为对权利要求的限制。此外,针对该方法和/或过程的权利要求不应限于按照所写顺序执行它们的步骤,本领域技术人员可以容易地理解,这些顺序可以变化,并且仍然保持在本公开实施例的精神和范围内。
本公开实施例涉及一种设置有投影仪的机器人,在一实施例中,该机器人包括底座、设置在底座上方的躯干、可转动伸缩设置在躯干上端的颈部和设置在颈部上的头部。底座下方设置有车轮,躯干内设置有控制装置,头部设置有图像采集装置和投影仪。颈部可以在控制装置的控制下伸缩和转动。该机器人上还设置有用于驱动车轮和颈部运动的驱动机构如电机、减速机等,这些与运动相关的构件共同构成了机器人的运动机构。通过运动机构可以实现机器人的行驶,投影仪的升降和每一种方向的转动等。
控制装置可以控制图像采集装置采集图像,通过图像传感器将图像变成数字信号后传递给控制装置,控制装置可以包括多种处理器,具有图像处理和多种控制功能,例如根据采集的图像实现即时定位与地图构建(SLAM),选定路标(landmark)和路径规划,控制运动机构行驶到预定的地点,调节图像采集装置和投影仪的空间位置(包括对转动位置的调节)以及控制投影仪播放画面等等。在一个示例,图像采集装置包括3D激光雷达及视觉传感器(如单目相机)。
SLAM有很多种分类方法。按照传感器的不同,可以分为基于激光雷达 的2D/3D SLAM、基于深度相机的RGBD SLAM、基于视觉传感器的SLAM等等。通过激光雷达SLAM程序,可以融合惯性传感器的信息,统一处理2D与3D SLAM。视觉传感器包括单目相机、双目相机等。视觉传感器在室内室外均可以使用。按照视觉特征的提取方式又可以分为特征法、直接法。
设置有投影仪的机器人虽然可以通过SLAM,自动定位到墙面附近进行投影,但是由于SLAM定位及运动控制精度无法做到位置和朝向的精确定位,会存在角度和位置误差,导致投影画面不正、投影画面被遮挡、投影画面超出墙面范围等问题,投影的效果难以令用户满意。
本公开一实施例提供了一种投影位置修正方法,应用于设置有投影仪的机器人,如图1所示,所述方法包括:
步骤110,控制运动机构行驶到投影的第一定位点,控制图像采集装置采集墙面的图像;
步骤120,根据所述墙面的图像确定所述墙面是否存在可投影区域,在存在可投影区域的情况下,执行步骤130,在不存在可投影区域的情况下,执行步骤140;
步骤130,确定投影画面在所述墙面上的目标区域,根据所述目标区域控制所述运动机构进行投影位置修正,修正后所述投影仪向所述墙面投影生成的投影画面位于所述目标区域,结束。
步骤140,结束本次投影位置修正。
本公开实施例能够基于在第一定位点采集的墙面的图像确定投影画面的目标区域,基于所述目标区域进行投影位置修正,使得投影生成的投影画面位于所述目标区域,因而可以取得良好的投影效果。
在本公开一示例性的实施例中,所述根据所述墙面的图像确定所述墙面是否存在可投影区域,包括:根据所述墙面的3D图像确定所述墙面的宽度,在所述墙面的宽度小于投影所需宽度的情况下,确定所述墙面不存在可投影区域。
在本公开一示例性的实施例中,所述根据所述墙面的图像确定所述墙面 是否存在可投影区域,还包括:
在所述墙面的宽度大于或等于投影所需宽度的情况下,根据所述墙面的纹理图像进行遮挡物的识别,根据识别结果确定所述墙面是否存在可容纳预设投影画面的无遮挡区域:
在存在所述无遮挡区域的情况下,确定所述墙面存在可投影区域;或者,
在不存在所述无遮挡区域的情况下,确定所述墙面不存在可投影区域,结束本次投影位置修正。
本实施例中,墙面的纹理图像可以通过控制视觉传感器拍摄墙面来生成,视觉传感器可以是单目相机、双目相机等。所述纹理图像可以是RGB图像,或者是YUV等其他格式的包含色彩特征的图像。本实施例将雷达点云和视觉融合,进行感知及运动控制,通过雷达点云信息为输入端实现墙面的检测识别,结合视觉做二次判断,将两者融合实现定位,可以找到合适的墙面进行投影播放。
在本实施例的一个示例中,可以通过控制3D激光雷达扫描所述墙面,生成所述墙面的点云图像,通过相邻点聚类算法对所述点云图像进行处理,提取出所述墙面的两侧轮廓线,根据所述两侧轮廓线之间的距离确定所述墙面的宽度。在另一示例中,使用深度相机来采集墙面的深度图像,根据深度差异来识别出墙面的两侧轮廓线,进而计算出墙面宽度。
基于激光点云图像可以进行建筑物建模或矢量信息提取,快速识别出建筑物面和棱线信息。例如可以基于相邻点聚类算法进行建筑物面和棱线的提取。提取时先计算点云中每个数据点的单位法向量和点到基准面的距离,利用基于网格的相邻点聚类算法对点云进行分类,确定建筑物面点云,然后自动判别相交平面,提取建筑物棱线。虽然通过视觉传感器如单目相机、双目相机等采集的纹理图像也可以识别墙面的轮廓线并计算墙面的宽度,但是纹理图像对于墙面轮廓线不敏感,会导致计算出的墙面宽度精度较差。例如在两个墙面的深度不同时,该两个墙面之间的轮廓线在颜色上与墙面颜色差别不大,比较难以区分出来。而通过3D激光雷达扫描生成的点云图像是一种几何图像,在一个墙面的轮廓线两侧的点通常会存有很大的深度差异,从而便于通过相邻点聚类等算法容易将墙面轮廓线两侧的点准确地区分开来,从而 准确提取出墙面的轮廓线。
而对于墙面上遮挡物的识别则有所不同,墙面上的遮挡物可能是在墙面前放置的物品,如书架、桌椅、电器设备等等,还可能是贴在墙面上的一幅画,挂在墙面上的装饰物,以及墙面存在的一些污染物等等,这些遮挡物并不会导致与墙面几何形状上的明显变化,但是会严重影响投影的观看效果。此时通过3D激光雷达生成点云的方式,就不容易识别出这些遮挡物。而这些遮挡物通常在颜色上与墙面存在较大差异,通过纹理图像就很容易识别出这些遮挡物并确定这些遮挡物的形状、大小和位置。从而确定在排除这些遮挡物之后余下的无遮挡区域。这里需要说明的是,如果墙面已经被装饰成为具有彩色图案的墙面,本实施例会将其视为存在遮挡物,这种墙面本身不适于投影。如果墙面上贴有白色的壁纸、纸张之类,本实施例可能不会将其视为遮挡物,这些物品也不影响观看。在一些情况下,墙面上挂有专门用于投影的白色幕布,该幕布在形状和色彩上与适于投影的墙面相似,本公开实施例会将其识别为墙面存在的可投影区域,而不是遮挡物。本实施例通过纹理图像具有的色彩特征来识别墙面是否存在遮挡物,是合适的。
如果在墙面上呈现的投影画面太小,会影响观看的感受。因此本实施例设置了预设投影画面,该预设投影画面可以是最小投影画面即用户可以容忍的投影画面的最小尺寸,或者是具有优选画面尺寸的投影画面,或者具有其他设定尺寸的投影画面。如果无遮挡区域不能够容纳预设投影画面,则放弃在当前墙面投影。预设投影画面可以是系统默认设置的,由于不同用户的观看习惯不同,该参数也可以由用户输入来配置和修改。需要说明的是,本实施例的预设投影画面不一定是最小投影画面,例如,在本公开实施例的场景下,如果用户确定室内存在可以容纳优选画面尺寸的墙面,可以将该最小投影画面设置为优选画面尺寸,如果在某一墙面找不到可以容纳优先画面尺寸的无遮挡区域时,可以离开当前墙面,到另一墙面去进行尝试。
本实施例中,投影所需宽度可以通过下式计算:D2=D1+2d,其中,D2是投影所需宽度,D1是投影画面所需宽度,d是设定的容差宽度,可以参见图3。此处的投影画面所需宽度可以是最小投影画面的宽度,或者是优选画面尺寸的宽度,或者介于最小投影画面的宽度和优选画面尺寸的宽度之间。 本公开实施例对此不做局限。设置容差宽度是因为投影总会存在一定的偏差,为了避免画面被投射到墙面之外而设置的一个裕量,可以根据使用的投影仪投影精度等参数来设置,或者设置为一个固定的值如10cm~50cm,容差宽度也可以由用户配置和修改。
在本公开一示例性的实施例中,所述投影画面为矩形,所述确定投影画面在所述墙面上的目标区域,包括:
确定所述投影画面的尺寸,使得所述投影画面能够被所述无遮挡区域所容纳且最接近于设定的画面优选尺寸;
确定所述投影画面在所述墙面的宽度方向上的位置和高度方向上的位置,使得所述投影画面能够被所述无遮挡区域所容纳。
请参见图2,是本公开实施例投影画面的目标区域确定的一个示意图。如图所示,墙面101的宽度满足投影所需宽度的要求,在墙面101上存在一些遮挡物105,除去这些遮挡物之外的其他区域为无遮挡区域。因为本实施例的投影画面是矩形的,为了判断无遮挡区域是否能够容纳最小投影画面,可以先在无遮挡区域绘制一个最大矩形103,如果矩形103可以容纳最小投影画面,则代表墙面存在可以容纳最小投影画面的无遮挡区域。在确定投影画面在墙面上的目标区域时,一方面需要确定投影画面的尺寸,另一方面需要确定投影画面在墙面上的位置。投影画面的尺寸可以根据设定的画面优选尺寸来选择,在能够被无遮挡区域容纳的前提下,选择一个最接近于画面优选尺寸的尺寸。例如,优选画面尺寸(宽 高)是2.5m 1.5m,而无遮挡区域能够绘制出的最大矩形103的尺寸(宽 高)是35m 2m,就可以确定目标区域的尺寸为2.5m 1.5m,如图2中的目标区域107。如果该矩形103的尺寸容纳优选的画面尺寸如该矩形的尺寸为2.2m 1.8m,则可以确定投影画面的目标区域107尺寸为2.2m 1.32m。
在确定投影画面在墙面的位置时,可以设定一个投影画面的下限位置,即允许的投影画面下边距离地面的最低高度,如果过低也可能影响观感。在确定投影画面在墙面高度方向上的位置时保证投影画面的下边不低于该下限位置。此外,还可以设定一个优选投影高度如1.5m~2m,可以用画面的中心 点对地高度来表示,在能够被无遮挡区域容纳的情况下,将目标区域的高度设定为该优选投影高度或者最接近该优选投影高度的值。而在确定投影画面在墙面宽度方向上的位置时,可以在能够被无遮挡区域容纳的情况下,选择最靠近整个墙面在宽度方向上的中线的位置,或者靠近无遮挡区域可以绘制出的最大矩形103在宽度方向上的中线的位置。
本公开上述实施例确定是否存在可投影区域的方式是看无遮挡区域是否能够容纳预设投影画面,然后再在无遮挡区域中进行目标区域的定位。在本公开另一实施例中采用不同的判断方法,在该实施例中,在所述墙面的宽度大于或等于投影所需宽度的情况下,确定投影画面在所述墙面上的一个或多个预选区域;再根据所述墙面的纹理图像进行遮挡物的识别,在所述预选区域均存在遮挡物的情况下,确定墙面不存在可投影区域;或者,在确定至少有一个预选区域不存在遮挡物的情况下,确定墙面存在可投影区域,确定存在可投影区域的情况下,将一个预选区域确定为目标区域,如果有多个预选区域,这些预选区域可以预先设定优先级,优先选择优先级高的预选区域作为目标区域,但也可以随机选择。在一个示例中,该预选区域的中心点可以设置在墙面宽度方向的中心线上,该预选区域的宽度可以设置为预设的宽度,而高度可以设置为一个预设的高度。通过预设宽度和高度的变化,可以得到多个预选区域。也就是说,本实施例先确定一个或多个预选区域,再看这些预选区域是否有遮挡物,而前述实施例是从无遮挡物的区域内寻找可投影的区域。两种方式都可以用于确定目标区域。
在本公开一示例性的实施例中,根据所述目标区域控制所述运动机构进行位置修正,包括:
确定所述目标区域的中心点的坐标,以所述中心点为根基点进行所述墙面的法向量的延伸;
根据所述投影仪沿所述法向量向所述墙面投影时的投影三角,确定使投影画面位于所述目标区域所需的投影距离;
根据所述中心点的坐标和所述投影距离确定投影的第二定位点,控制所述运动机构行驶到所述第二定位点。
本公开实施例对于根基点和法向量的确定,并不局限在确定了目标区域之后,也可以在目标区域之前,先根据一个预选区域确定根基点和法向量。如果确定的目标区域不是该预选区域,可以重新确定根基点和法向量。
请参见图3,图中示出了墙面和机器人的相对位置。图中线段BD对应在的是目标区域的宽度及其在墙面上的位置,目标区域的中心点用A表示,以A点为根基点引出的墙面的法向量用向量F表示。投影仪沿法向量F投影时的投影三角用BCD表示,通过对投影距离(投影仪镜头到墙面的垂直距离)的调节,可以改变投影画面在墙面上的大小,计算投影画面位于目标区域(即画面与目标区域重合或者说画面刚好覆盖目标区域)时的投影距离,图中标记为H1。由此,可以很容易地计算出第二定位点,也即图中的C点。该第二定位点是通过相对墙面的坐标直接定位得到,具有较高的精度,可以最以预期的观影效果。
第二定位点具体可用C点垂直投影到地面的坐标来表示,根据中心点坐标和投影距离计算出的坐标可以采用相对坐标的方式表示,机器人可以基于墙面为参考系建立局部坐标来确定第二定位点的坐标,用于路径规划。在本公开一示例性的实施例中,控制运动机构行驶时,可以第一定位点为起点,第二定位点为终点,进行局部路径规则;再控制运动机构依照规则的路径从当前所在的第一定位点行驶到第二定位点。该局部路径规划仅仅依赖于墙面的相对位置关系,不依赖于全局地图,对算力要求较小。
在本公开一示例性的实施例中,行驶到所述第二定位点后,对所述投影仪的位置进行调整,使得所述投影仪的投影中心线与所述法向量重合,或者使得所述投影仪的投影中心线与所述法向量的夹角最小且与所述墙面相交于所述中心点。这里的投影中心线可以用投影仪投影镜头的轴线来表示,如本公开实施例的机器人可以通过运动机构来调节投影仪的高度,在投影仪的高度允许的范围内,优先使得投影中心线与法向量重合,这样可以投影出横平竖直的画面,取得最优的观影效果。但如果投影仪的高度无法调节到法向量所在的高度位置,则可以将投影仪调节到一个位置,使得投影中心线与法向量的夹角最小与墙面相交于目标区域中心点。这个位置处于法向量所在的垂直平面上。本文中对投影仪的位置调整,包含了对投影仪的姿态调整,姿态 调整也是对转动位置的调整,可以视为位置调整的一种。
本公开上述实施例依据传感器融合技术,能够针对空白墙面进行机器人自身位置的修正,弥补由于定位导航带来角度和位置误差,使得机器人身上投影仪投射出可以横平竖直的画面,使得具有最佳画面感。
本公开一实施例提供了一种投影定位方法,应用于设置有投影仪的机器人,如图4所示,包括:
步骤210,采集当前环境的图像,根据所述图像识别出至少一个墙面;
步骤220,将识别出的一个墙面作为路标进行路径规划,确定投影的第一定位点;
步骤230,按照本公开任一实施例所述的投影位置修正方法进行投影位置修正。
本公开上述实施例通过SLAM进行投影位置的初步定位,再在初步定位点采集墙面的图像,基于图像确定投影画面的目标区域,进而对投影位置进行修正,实现了投影位置的精准控制,可以取得良好的投影效果。
在本公开一示例性的实施例中,所述方法还包括:在确定所述墙面不存在可投影区域的情况下,将识别出的另一个墙面作为路标进行路径规划,重新确定投影的第一定位点并再次进行所述投影位置修正。
在选定路标(landmark)时,可以使用视觉传感器,相对来说视觉图像的信息比较丰富。可以对环境做语义标注,在图像上识别出墙面作为landmark。但是,激光扫描也可以用于识别墙面,墙面的点云图像具有显著的几何特征,会出现较大面积的平整面,通过语义识别,可以在点云中分割出墙面作为landmark。
在确定好landmark后,进行路径规划时,需要选定第一定位点,一般来说,该第一定位点可以选择在墙面宽度中心线的正前方,距离墙面设定距离的位置,但是这种初步的定位,由于视角、距离等原因其精度有限,对墙面是否存在遮挡物也难以做出准确判断,而且,此时的运动控制由于距离较远也会存在精度上的误差,因而难以达到良好的观影效果。因此本实施例结合 本公开的位置修正方法,能够基于近距离采集的图像进行精确定位和判断,选择适合的目标区域进行投影,还可以可以将雷达点云和视觉融合进行精确定位,让客户具有观影的良好感受。
本公开上述实施例依据传感器融合技术,能够针对空白墙面进行机器人自身位置的修正,弥补由于定位导航带来角度和位置误差,使得机器人身上的投影仪投射出可以横平竖直的画面,具有良好的画面感。
本公开一实施例还提供了一种投影位置修正装置,应用于设置有投影仪的机器人,如图7所示,所述投影位置修正装置包括:
控制模块70,设置为控制运动机构行驶到投影的第一定位点,及控制图像采集装置采集墙面的图像;
确定模块80,设置为根据所述墙面的图像确定所述墙面是否存在可投影区域,及在存在可投影区域的情况下,确定投影画面在所述墙面上的目标区域;
修正模块90,设置为根据所述目标区域控制所述运动机构进行投影位置修正,修正后所述投影仪向所述墙面投影生成的投影画面位于所述目标区域。
本公开一实施例还提供了一种机器人控制装置,如图5所示,包括处理器60以及存储有计算机程序的存储器50,其中,所述处理器60执行所述计算机程序时能够实现如本公开任一实施例所述的投影位置修正方法,或者能够实现如本公开任一实施例所述的投影定位方法。本实施例的处理器可以有一个或多个,可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本公开实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
本公开一实施例还提供了一种机器人,包括机器人本体,所述机器人本体设置有运动机构,所述机器人还包括设置在机器人本体上的投影仪、图像 采集装置和控制装置,其中:
所述图像采集装置设置为能够采集墙面的图像;
所述投影仪设置为能够向墙面投影要播放的画面;
所述控制装置设置为能够执行如本公开任一实施例所述的投影位置修正方法,或者能够执行如本公开任一实施例所述的投影定位方法。
本公开上述实施例的机器人能够对投影位置进行修正,实现精准定位,取得良好的投影效果。
在本公开一示例性的实施例中,所述图像采集装置包括:视觉传感器,设置为能够采集墙面的纹理图像;及,3D激光雷达,设置为能够扫描墙面生成点云图像。
本实施例的机器人本体可以包括底部、躯干、颈部和头部。但不局限于此,例如可以将底部更换为腿部,或者增加手部,或者将躯干和颈部集成为一个部件等等。对于机器人本体的结构本公开不做局限,只要能够承载图像采集装置、投影仪和控制装置,以及能够行驶和对投影仪进行位置调节即可。上述投影仪包括可以投射画面的任何装置或设备,不局限于特定的外形和结构。投影仪除了可以安装在机器人的头部外,也可以安装在其他部件,如手部、躯干等。
为了弥补由于定位运动带来位置误差,针对墙面寻找最佳投影点,使得机器人能够投射出预期的界面宽度和高度,达到良好的观影感受。本公开一实施例还提供了一种投影定位方法,通过对初步定位点进行相对位置修正,实现精确的投影位置调整。
本实施例的投影定位方法应用于机器人,该机器人配置有投影仪,如图6所示,本实施例的一次投影定位过程包括:
步骤310,根据路标进行初步定位,行驶到第一定位点;
步骤320,通过激光雷达扫描确定墙面宽度;
本步骤中,可以控制3D激光雷达扫描墙面得到点云图像,通过相邻点聚类算法对点云图像进行线性聚类,雷达点云信息进行线性聚类,分辨出线 性墙面及其具有的宽度。
步骤330,判断墙面宽度是否满足投影需求,如果满足,执行步骤340,如果不满足,结束;
本步骤中,如果墙面宽度D2大于投影所需宽度(D1+2d),d是容差宽度,则判断墙面宽度满足投影需求。
步骤340,选定根基点并进行法向量延伸,确定投影距离;
本步骤中,可以以(D1+2d)/2处的位置作为根基点(0,0)进行法向量延伸,并且根据投影的三角形关系确定投射宽度为D2的画面,投影距离为H1。根据根基点的坐标和投影距离可以得到修正后的投影点即第二定位点的相对坐标。上述根基点可以设定为投影画面在墙面的预选区域的中心点。
步骤350,通过视觉传感采集的图像判断墙面是否存在影响观看投影画面的遮挡物,如果不存在,执行步骤360,如果存在,结束;
本步骤中,可以利用机器人头部设置的RGB相机采集墙面的纹理图像,对墙面的垂直立面进行过滤,检查是否有挂饰等物体影响观看投影画面,通过本步骤的二次确定可以避免投影画面存在遮挡物,影响投影效果。
应当说明的是,如果在本步骤发现预选区域存在遮挡物,还可以重新框定一个无遮挡物的区域作为投影画面的目标区域,根据该目标区域再重新确定根基点和法向量。也即本实施例可以先确定投影画面的预选区域,再检测预选区域内是否存在遮挡物,在存在遮挡物的情况下,再去寻找其他无遮挡物且可以投影的区域,如果能够找到,也认定是不存在影响观看投影画面的遮挡物。这些可能的变化,均在本公开实施例的范围之内。
本步骤也可以在步骤340之前执行,如果能够确定一个目标区域,再执行步骤340中的处理即选定根基点并进行法向量延伸,确定投影距离。
步骤360,根据所述根基点和法向量进行路径规划,实现投影位置修正。
本步骤中,可以进行运动控制,使得机器人行驶到由法向量和投影距离确定的第二定位点,并进行位姿调整,使得投影画面位于墙面的目标区域。本步骤的路径规划可以基于根基点作为相对坐标,不依赖绝对地图,仅仅是依赖实时点云信息进行相对目标判断,进行相对位姿调整,进而达到位置自 纠正的目的。
在上述步骤中,如果结束了本次投影定位,可以进行下一个路标(landmark)信息确认,再次启动一次投影定位过程。
本实施例提供了一种基于雷达点云和视觉融合对投影画面进行自纠正控制策略,通过激光雷达获取点云图像,通过相邻点聚类算法实现墙面宽度的识别,进而确定根基点和法向量。通过视频对预选区域进行二次确认。通过立体巡视实现了精确的相对位姿调整。
本实施例通过结合雷达点云信息和单目视觉信息的融合,可以进行精确定位,找到相应的墙面得到对最佳投影点,使得投影出画面比较工整,提高用户体验。
本实施例可以基于墙面寻找最佳投影点,使得投射出界面宽度,高度能够达到最佳观影感,弥补由于定位运动感带来位姿误差,经过二次相对位置修正可以进行精确相对位姿调整确定。而且可以不依赖于全局地图,仅仅依赖墙面相对位置关系,进行局部路径规划,对算力要求较小,
本公开一实施例还提供了一种计算机程序产品,包括计算机程序,其中,所述计算机程序被处理器执行时能够实现如本公开任一实施例所述的投影位置修正方法,或者能够实现如本公开任一实施例所述的投影定位方法。
本公开一实施例还提供了一种非瞬态计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时能够实现如本公开任一实施例所述的投影位置修正方法,或者能够实现如本公开任一实施例所述的投影定位方法。
在本公开上述任意一个或多个示例性实施例中,所描述的功能可以硬件、软件、固件或其任一组合来实施。如果以软件实施,那么功能可作为一个或多个指令或代码存储在计算机可读介质上或经由计算机可读介质传输,且由基于硬件的处理单元执行。计算机可读介质可包含对应于例如数据存储介质等有形介质的计算机可读存储介质,或包含促进计算机程序例如根据通信协议从一处传送到另一处的任何介质的通信介质。以此方式,计算机可读介质通常可对应于非暂时性的有形计算机可读存储介质或例如信号或载波等通信 介质。数据存储介质可为可由一个或多个计算机或者一个或多个处理器存取以检索用于实施本公开中描述的技术的指令、代码和/或数据结构的任何可用介质。计算机程序产品可包含计算机可读介质。
举例来说且并非限制,此类计算机可读存储介质可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来以指令或数据结构的形式存储所要程序代码且可由计算机存取的任何其它介质。而且,还可以将任何连接称作计算机可读介质举例来说,如果使用同轴电缆、光纤电缆、双绞线、数字订户线(DSL)或例如红外线、无线电及微波等无线技术从网站、服务器或其它远程源传输指令,则同轴电缆、光纤电缆、双纹线、DSL或例如红外线、无线电及微波等无线技术包含于介质的定义中。然而应了解,计算机可读存储介质和数据存储介质不包含连接、载波、信号或其它瞬时(瞬态)介质,而是针对非瞬时有形存储介质。如本文中所使用,磁盘及光盘包含压缩光盘(CD)、激光光盘、光学光盘、数字多功能光盘(DVD)、软磁盘或蓝光光盘等,其中磁盘通常以磁性方式再生数据,而光盘使用激光以光学方式再生数据。上文的组合也应包含在计算机可读介质的范围内。
可由例如一个或多个数字信号理器(DSP)、通用微处理器、专用集成电路(ASIC)现场可编程逻辑阵列(FPGA)或其它等效集成或离散逻辑电路等一个或多个处理器来执行指令。因此,如本文中所使用的术语“处理器”可指上述结构或适合于实施本文中所描述的技术的任一其它结构中的任一者。另外,在一些方面中,本文描述的功能性可提供于经配置以用于编码和解码的专用硬件和/或软件模块内,或并入在组合式编解码器中。并且,可将所述技术完全实施于一个或多个电路或逻辑元件中。
本公开实施例的技术方案可在广泛多种装置或设备中实施,包含无线手机、集成电路(IC)或一组IC(例如,芯片组)。本公开实施例中描各种组件、模块或单元以强调经配置以执行所描述的技术的装置的功能方面,但不一定需要通过不同硬件单元来实现。而是,如上所述,各种单元可在编解码器硬件单元中组合或由互操作硬件单元(包含如上所述的一个或多个处理器)的集合结合合适软件和/或固件来提供。

Claims (14)

  1. A projection position correction method, applied to a robot provided with a projector, the method comprising:
    controlling a motion mechanism to drive to a first positioning point for projection, and controlling an image acquisition device to collect an image of a wall;
    determining, according to the image of the wall, whether the wall has a projectable area, and in the case that a projectable area exists, determining a target area of a projected picture on the wall;
    controlling the motion mechanism according to the target area to correct the projection position, wherein after correction the projected picture generated by the projector projecting onto the wall is located in the target area.
  2. The method according to claim 1, wherein:
    determining, according to the image of the wall, whether the wall has a projectable area comprises:
    determining the width of the wall according to a 3D image of the wall, and in the case that the width of the wall is less than the width required for projection, determining that the wall has no projectable area.
  3. The method according to claim 2, wherein:
    determining, according to the image of the wall, whether the wall has a projectable area further comprises:
    in the case that the width of the wall is greater than or equal to the width required for projection, identifying occluders according to a texture image of the wall, and determining from the identification result whether the wall has an unoccluded area capable of accommodating a preset projected picture:
    in the case that the unoccluded area exists, determining that the wall has a projectable area; or,
    in the case that no unoccluded area exists, determining that the wall has no projectable area.
  4. The method according to claim 2, wherein
    determining, according to the image of the wall, whether the wall has a projectable area further comprises:
    in the case that the width of the wall is greater than or equal to the width required for projection, determining one or more preselected areas of the projected picture on the wall;
    identifying occluders according to the texture image of the wall, and in the case that at least one of the preselected areas contains no occluder, determining that the wall has a projectable area; or, in the case that all of the preselected areas contain occluders, determining that the wall has no projectable area;
    in the case that a projectable area exists, determining one of the preselected areas as the target area.
  5. The method according to claim 3 or 4, wherein:
    controlling the image acquisition device to collect an image of the wall comprises: controlling a 3D lidar to scan the wall to generate a point cloud image of the wall; and controlling a visual sensor to photograph the wall to generate a texture image of the wall.
  6. The method according to claim 5, wherein:
    determining the width of the wall according to the 3D image of the wall comprises: processing the point cloud image with an adjacent point clustering algorithm to extract contour lines on both sides of the wall, and determining the width of the wall according to the distance between the contour lines on both sides.
  7. The method according to any one of claims 3 to 6, wherein:
    the projected picture is rectangular, and determining the target area of the projected picture on the wall comprises:
    determining the size of the projected picture such that the projected picture can be accommodated by the unoccluded area and is closest to a set preferred picture size;
    determining the position of the projected picture in the height direction and in the width direction of the wall such that the projected picture can be accommodated by the unoccluded area.
  8. The method according to any one of claims 1 to 7, wherein:
    controlling the motion mechanism according to the target area to correct the position comprises:
    determining the coordinates of the center point of the target area, and extending the normal vector of the wall with the center point as the base point;
    determining, according to the projection triangle formed when the projector projects onto the wall along the normal vector, the projection distance required for the projected picture to be located in the target area;
    determining a second positioning point for projection according to the coordinates of the center point and the projection distance, and controlling the motion mechanism to drive to the second positioning point.
  9. A projection positioning method, applied to a robot provided with a projector, the method comprising:
    collecting an image of the current environment, and identifying at least one wall from the image;
    using one identified wall as a landmark for path planning, and determining a first positioning point for projection;
    performing projection position correction according to the method of any one of claims 1 to 8.
  10. A robot control device, comprising a processor and a memory storing a computer program, wherein, when the processor executes the computer program, the projection position correction method of any one of claims 1 to 8, or the projection positioning method of claim 9, can be implemented.
  11. A robot, comprising a robot body provided with a motion mechanism, and further comprising a projector, an image acquisition device and a control device arranged on the robot body, wherein:
    the image acquisition device is configured to be able to acquire images of a wall;
    the projector is configured to be able to project a picture to be played onto the wall;
    the control device is configured to be able to execute the projection position correction method of any one of claims 1 to 8, or to execute the projection positioning method of claim 9.
  12. A projection position correction apparatus, applied to a robot provided with a projector, wherein the projection position correction apparatus comprises:
    a control module, configured to control a motion mechanism to drive to a first positioning point for projection, and to control an image acquisition device to collect an image of a wall;
    a determination module, configured to determine, according to the image of the wall, whether the wall has a projectable area, and, in the case that a projectable area exists, to determine a target area of a projected picture on the wall;
    a correction module, configured to control the motion mechanism according to the target area to correct the projection position, wherein after correction the projected picture generated by the projector projecting onto the wall is located in the target area.
  13. A computer program product, comprising a computer program, wherein, when the computer program is executed by a processor, the projection position correction method of any one of claims 1 to 8, or the projection positioning method of claim 9, can be implemented.
  14. A non-transitory computer-readable storage medium storing a computer program, wherein, when the computer program is executed by a processor, the projection position correction method of any one of claims 1 to 8, or the projection positioning method of claim 9, can be implemented.
PCT/CN2022/135943 2022-01-27 2022-12-01 投影位置修正方法、投影定位方法及控制装置、机器人 WO2023142678A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210100253.0A CN114245091B (zh) 2022-01-27 2022-01-27 投影位置修正方法、投影定位方法及控制装置、机器人
CN202210100253.0 2022-01-27

Publications (1)

Publication Number Publication Date
WO2023142678A1 true WO2023142678A1 (zh) 2023-08-03

Family

ID=80747383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/135943 WO2023142678A1 (zh) 2022-01-27 2022-12-01 投影位置修正方法、投影定位方法及控制装置、机器人

Country Status (2)

Country Link
CN (1) CN114245091B (zh)
WO (1) WO2023142678A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114245091B (zh) * 2022-01-27 2023-02-17 美的集团(上海)有限公司 投影位置修正方法、投影定位方法及控制装置、机器人
CN117456483B (zh) * 2023-12-26 2024-03-08 陕西卫仕厨房灭火设备有限公司 一种基于图像处理的智能交通行车安全警示方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105278759A (zh) * 2014-07-18 2016-01-27 深圳市大疆创新科技有限公司 一种基于飞行器的图像投影方法、装置及飞行器
CN105676572A (zh) * 2016-04-19 2016-06-15 深圳市神州云海智能科技有限公司 用于移动机器人配备的投影仪的投影校正方法和设备
CN107222732A (zh) * 2017-07-11 2017-09-29 京东方科技集团股份有限公司 自动投影方法以及投影机器人
US20170371237A1 (en) * 2016-06-28 2017-12-28 Qihan Technology Co., Ltd. Projection method and device for robot
CN111031298A (zh) * 2019-11-12 2020-04-17 广景视睿科技(深圳)有限公司 控制投影模块投影的方法、装置和投影系统
CN113973196A (zh) * 2021-11-09 2022-01-25 北京萌特博智能机器人科技有限公司 移动投影机器人及其移动投影方法
CN114245091A (zh) * 2022-01-27 2022-03-25 美的集团(上海)有限公司 投影位置修正方法、投影定位方法及控制装置、机器人

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507077B (zh) * 2016-11-28 2018-07-24 江苏鸿信系统集成有限公司 基于图像分析的投影仪画面矫正及遮挡避让方法
CN108303972B (zh) * 2017-10-31 2020-01-17 腾讯科技(深圳)有限公司 移动机器人的交互方法及装置
CN109996051B (zh) * 2017-12-31 2021-01-05 广景视睿科技(深圳)有限公司 一种投影区域自适应的动向投影方法、装置及系统
CN108965839B (zh) * 2018-07-18 2020-03-24 成都极米科技股份有限公司 一种自动调整投影画面的方法及装置
US10803314B2 (en) * 2018-10-10 2020-10-13 Midea Group Co., Ltd. Method and system for providing remote robotic control
CN112540672A (zh) * 2020-11-09 2021-03-23 清华大学深圳国际研究生院 智能投影方法、设备和存储介质
CN112702587A (zh) * 2020-12-29 2021-04-23 广景视睿科技(深圳)有限公司 一种智能跟踪投影方法及系统
CN112954284A (zh) * 2021-02-08 2021-06-11 青岛海信激光显示股份有限公司 投影画面的显示方法以及激光投影设备
CN112804507B (zh) * 2021-03-19 2021-08-31 深圳市火乐科技发展有限公司 投影仪校正方法、系统、存储介质以及电子设备
CN113840126A (zh) * 2021-09-16 2021-12-24 广景视睿科技(深圳)有限公司 一种控制投影设备的方法、装置及投影设备


Also Published As

Publication number Publication date
CN114245091A (zh) 2022-03-25
CN114245091B (zh) 2023-02-17

Similar Documents

Publication Publication Date Title
WO2023142678A1 (zh) 投影位置修正方法、投影定位方法及控制装置、机器人
WO2019232806A1 (zh) 导航方法、导航系统、移动控制系统及移动机器人
US20210166495A1 (en) Capturing and aligning three-dimensional scenes
US11677920B2 (en) Capturing and aligning panoramic image and depth data
CN108140235B (zh) 用于产生图像视觉显示的系统和方法
JP6732746B2 (ja) 機械視覚システムを使用した、同時位置測定マッピングを実施するためのシステム
Scaramuzza et al. Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes
WO2019128109A1 (zh) 一种基于人脸追踪的动向投影方法、装置及电子设备
AU2017300937A1 (en) Estimating dimensions for an enclosed space using a multi-directional camera
KR20190030197A (ko) 훈련된 경로를 자율주행하도록 로봇을 초기화하기 위한 시스템 및 방법
CN110801180A (zh) 清洁机器人的运行方法及装置
Lin et al. Mapping and Localization in 3D Environments Using a 2D Laser Scanner and a Stereo Camera.
WO2018140656A1 (en) Capturing and aligning panoramic image and depth data
WO2021146862A1 (zh) 移动设备的室内定位方法、移动设备及控制系统
WO2022078488A1 (zh) 定位方法、装置、自移动设备和存储介质
WO2020024684A1 (zh) 三维场景建模方法及装置、电子装置、可读存储介质及计算机设备
US9304582B1 (en) Object-based color detection and correction
CN110880161B (zh) 一种多主机多深度摄像头的深度图像拼接融合方法及系统
WO2019189381A1 (ja) 移動体、制御装置、および制御プログラム
WO2023088127A1 (zh) 室内导航方法、服务器、装置和终端
Piérard et al. I-see-3d! an interactive and immersive system that dynamically adapts 2d projections to the location of a user's eyes
WO2022011560A1 (zh) 图像裁剪方法与装置、电子设备及存储介质
Cheng et al. 3D Radar and Camera Co-Calibration: A flexible and Accurate Method for Target-based Extrinsic Calibration
WO2021217444A1 (zh) 深度图生成方法、电子设备、计算处理设备及存储介质
WO2023109347A1 (zh) 自移动设备的重定位方法、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923469

Country of ref document: EP

Kind code of ref document: A1