WO2023020174A1 - Structured light module and self-moving device

Info

Publication number
WO2023020174A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
camera
structured light
line laser
self
Prior art date
Application number
PCT/CN2022/105817
Other languages
English (en)
French (fr)
Inventor
许开立
罗潇
单俊杰
陈巍
吴永东
张鹏
刘阳
Original Assignee
科沃斯机器人股份有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202110944998.0A external-priority patent/CN113960562A/zh
Priority claimed from CN202110944997.6A external-priority patent/CN113786125B/zh
Application filed by 科沃斯机器人股份有限公司
Priority to EP22857487.7A (published as EP4385384A1)
Publication of WO2023020174A1
Priority to US18/442,785 (published as US20240197130A1)

Classifications

    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • G01S7/481: Constructional features, e.g. arrangements of optical elements
    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/933: Lidar systems specially adapted for anti-collision purposes of aircraft or spacecraft
    • G05D1/228: Command input arrangements located on-board unmanned vehicles
    • G05D1/242: Determining position or orientation by means based on the reflection of waves generated by the vehicle
    • G05D1/2435: Extracting 3D information from signals captured from the environment
    • G05D1/2462: Simultaneous localisation and mapping (SLAM) using feature-based mapping
    • G05D1/2467: Simultaneous localisation and mapping (SLAM) using a semantic description of the environment
    • G05D1/622: Obstacle avoidance
    • G06V10/145: Illumination specially adapted for pattern recognition, e.g. using gratings
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G05D2101/20: Control of position using external object recognition
    • G05D2105/10: Controlled vehicles for cleaning, vacuuming or polishing
    • G05D2107/40: Indoor domestic environment
    • G05D2109/10: Land vehicles
    • G05D2111/10: Optical signals
    • G05D2111/17: Coherent light, e.g. laser signals

Definitions

  • This application relates to the technical field of artificial intelligence, in particular to a structured light module and self-moving equipment.
  • Various aspects of the present application provide a structured light module and a self-moving device, which are used to provide a new structured light module and expand the application range of laser sensors.
  • An embodiment of the present application provides a structured light module, including a first camera, line laser emitters distributed on both sides of the first camera, and a second camera. The line laser emitters emit line lasers outward; while a line laser emitter is emitting, the first camera collects a first environment image detected by the line laser; and the second camera collects a second environment image within its own field of view. The first environment image is a laser image containing the laser stripes produced when the line laser strikes an object, and the second environment image is a visible-light image that contains no laser stripes.
  • An embodiment of the present application also provides a self-moving device, including a device body on which a main controller and a structured light module are arranged, the main controller being electrically connected to the structured light module.
  • The structured light module includes a first camera, line laser emitters distributed on both sides of the first camera, a second camera and a module controller. The module controller controls the line laser emitters to emit line lasers and, while a line laser emitter is emitting, controls the first camera to collect a first environment image detected by the line laser and sends the first environment image to the main controller. The main controller controls the second camera to collect a second environment image within its field of view, and performs functional control of the self-moving device according to the first environment image and the second environment image. The first environment image is a laser image containing the laser stripes produced when the line laser strikes an object, and the second environment image is a visible-light image that contains no laser stripes.
  • Through the cooperation of the first camera and the line laser emitters, the structured light module can collect a first environment image containing the laser stripes produced when the line laser strikes an object, and through the second camera it can also collect a visible-light image that contains no laser stripes. Together, the first environment image and the second environment image help detect more, and richer, environmental information more accurately, expanding the application range of laser sensors.
  • Various aspects of the present application provide an operation method, a self-moving device, and a storage medium to meet more detailed operation requirements.
  • An embodiment of the present application provides a working method suitable for a self-moving device equipped with a structured light module. The method includes: using the structured light component and the vision sensor in the structured light module to collect, respectively, structured light data and image data of the work area ahead; identifying, based on the image data, the category of the target object in the work area ahead and selecting a target machine behavior mode that matches that category; and, with the assistance of the structured light data, controlling the self-moving device to execute the work task, in the target machine behavior mode, on the target object present in the work area ahead.
  • An embodiment of the present application also provides a self-moving device, including a device body on which one or more memories, one or more processors and a structured light module are arranged; the structured light module includes a structured light component and a vision sensor.
  • The one or more memories store a computer program, and the one or more processors execute the computer program to: use the structured light component and the vision sensor in the structured light module to collect, respectively, structured light data and image data of the work area ahead; identify, based on the image data, the category of the target object in the work area ahead and select a target machine behavior mode that matches that category; and, with the assistance of the structured light data, control the self-moving device to execute the work task, in the target machine behavior mode, on the target object present in the work area ahead.
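  • As a minimal illustration of the pipeline just described, the following Python sketch strings the three steps together. All helper functions, names and the behavior-mode table are hypothetical placeholders introduced for illustration; they are not part of the patent or of any real device API.

```python
# Illustrative sketch only: the helpers and the behavior-mode table are hypothetical.

def classify_objects(image_data: str) -> str:
    """Stand-in for an AI recognition model applied to the visible-light image."""
    return "cable" if "cable" in image_data else "default"

def extract_3d_info(structured_light_data: dict) -> dict:
    """Stand-in for triangulating contour/height/distance from the laser stripes."""
    return {"height_mm": structured_light_data.get("stripe_height_mm", 0.0)}

BEHAVIOR_MODES = {
    "cable": "avoid",       # keep a distance from entangling obstacles
    "carpet": "boost",      # raise suction, skip mopping
    "threshold": "climb",   # attempt to cross the obstacle
    "default": "clean",
}

def run_work_cycle(structured_light_data: dict, image_data: str):
    # 1. Structured light data and image data of the work area ahead are collected
    #    by the structured light component and the vision sensor, respectively.
    # 2. Identify the target object category from the image and pick a behavior mode.
    category = classify_objects(image_data)
    mode = BEHAVIOR_MODES.get(category, BEHAVIOR_MODES["default"])
    # 3. Use the structured light data (object geometry) while executing the task.
    geometry = extract_3d_info(structured_light_data)
    return mode, geometry

print(run_work_cycle({"stripe_height_mm": 12.0}, "cable on floor"))  # ('avoid', {'height_mm': 12.0})
```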
  • An embodiment of the present application also provides a computer-readable storage medium storing computer instructions.
  • When the computer instructions are executed by one or more processors, the one or more processors are caused to perform the steps of the working method provided by the embodiments of the present application for the self-moving device.
  • Fig. 1a is a schematic structural diagram of a structured light module provided by an exemplary embodiment of the present application
  • Figure 1b is a schematic diagram of the working principle of a line laser emitter provided in an exemplary embodiment of the present application
  • Fig. 1c is a schematic structural diagram of another structured light module provided by an exemplary embodiment of the present application.
  • Fig. 1d is a structural schematic diagram of the installation position relationship of each device in a structured light module provided by an exemplary embodiment of the present application;
  • Fig. 1e is a schematic diagram of the relationship between the line laser of a line laser emitter and the field of view angle of the first camera provided by an exemplary embodiment of the present application;
  • Fig. 1f is a schematic structural diagram of another structured light module provided by an exemplary embodiment of the present application.
  • Fig. 1h is a front view of a structured light module provided by an exemplary embodiment of the present application.
  • Fig. 1i is an axonometric view of a structured light module provided by an exemplary embodiment of the present application
  • Figures 1j-1m are respectively an exploded view of a structured light module provided by an exemplary embodiment of the present application.
  • Fig. 1n is a partial diagram of a structured light module provided by an exemplary embodiment of the present application.
  • Figure 1o is a cross-sectional view of Figure 1n;
  • Fig. 1p is a cross-sectional view of a structured light module provided by an exemplary embodiment of the present application
  • Fig. 1q is a rear view of a structured light module provided by an exemplary embodiment of the present application.
  • Fig. 1r is another partial view of a structured light module provided by an exemplary embodiment of the present application.
  • Fig. 1s is another cross-sectional view of a structured light module provided by an exemplary embodiment of the present application
  • Fig. 1t is a schematic diagram of tilting the first camera or line laser emitter in a structured light module provided by an exemplary embodiment of the present application;
  • Fig. 1u is a schematic diagram of detecting a measured object from a mobile device provided by an exemplary embodiment of the present application
  • Fig. 1v is a cross-sectional view of a wave mirror provided by an exemplary embodiment of the present application.
  • Figure 1w is a light intensity distribution diagram of a line laser emitter with a wave mirror provided in an exemplary embodiment of the present application
  • FIG. 1x is a schematic structural diagram of a cylindrical mirror provided in an exemplary embodiment of the present application.
  • Figure 1y is a light intensity distribution diagram of a line laser emitter with a cylindrical mirror provided in an exemplary embodiment of the present application;
  • Fig. 2a is a schematic structural diagram of a mobile device provided by an exemplary embodiment of the present application.
  • Fig. 2b is a schematic structural diagram of a structured light module in a mobile device provided by an exemplary embodiment of the present application;
  • Fig. 2c and Fig. 2d are respectively exploded schematic diagrams of the structured light module and the impact plate provided by the exemplary embodiment of the present application;
  • Fig. 2e is a schematic structural diagram of a striker equipped with a structured light module provided by an exemplary embodiment of the present application;
  • Fig. 2f is a schematic structural diagram of a sweeping robot provided in an exemplary embodiment of the present application.
  • FIG. 1 is a schematic diagram of a scene where a mobile device uses a structured light module to perform operations provided by an exemplary embodiment of the present application;
  • Fig. 2 is a schematic structural diagram of a structured light module provided by an exemplary embodiment of the present application
  • Fig. 3 is a schematic structural diagram of another structured light module provided by an exemplary embodiment of the present application.
  • Fig. 4 is a schematic flow chart of a working method provided by an exemplary embodiment of the present application.
  • Fig. 5 is a schematic diagram of a scene when a sweeping robot is working according to an exemplary embodiment of the present application
  • Fig. 6 is a floor plan of a family environment provided by an exemplary embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of a mobile device provided by an exemplary embodiment of the present application.
  • Reference numerals: Structured light module: 21; First camera: 101; Line laser emitter: 102; Second camera: 103; Module controller: 104; Indicator light: 105; Main controller: 106; Fixing seat: 107; Fixing cover: 108; Fixing plate: 109; Indicator lamp plate: 201; Mounting hole: 202; First window: 232; Second window: 233; First drive circuit: 1001; Second drive circuit: 1002; Third drive circuit: 1003.
  • the embodiment of the present application provides a structured light module.
  • Through the cooperation of the first camera and the line laser emitter, the structured light module can collect a first environment image containing the laser stripes produced when the line laser strikes an object, and through the second camera it can also collect a visible-light image that contains no laser stripes. The first environment image and the second environment image help detect more, and richer, environmental information more accurately and expand the application range of laser sensors.
  • the rich environmental information detected by the structured light module can help improve the accuracy of object recognition. For example, if the structured light module is applied in an obstacle avoidance scene, the success rate of obstacle avoidance can be improved. For another example, if the structured light module is used in obstacle-crossing scenarios, it can improve the success rate of obstacle-crossing. For another example, if the structured light module is used in the creation of the environmental map, the accuracy of the environmental map creation can be improved.
  • Fig. 1a is a schematic structural diagram of a structured light module provided by an exemplary embodiment of the present application.
  • the structured light module includes: a first camera 101 , line laser emitters 102 distributed on both sides of the first camera 101 , and a second camera 103 .
  • the implementation form of the line laser emitter 102 is not limited, and may be any equipment/product form capable of emitting line laser.
  • line laser emitter 102 may be, but is not limited to, a laser tube.
  • the internal or external controller of the structured light module can control the line laser to work, for example, control the line laser emitter 102 to emit the line laser.
  • the line laser emitter 102 emits the laser plane FAB and the laser plane ECD to the outside. After the laser plane reaches the obstacle, a laser stripe will be formed on the surface of the obstacle, that is, the line segment AB and the line segment CD shown in Fig. 1b.
  • the implementation form of the first camera 101 is not limited. Any visual device that can collect a laser image of the environment detected by the line laser emitted by the line laser emitter 102 is applicable to this embodiment of the present application.
  • the first camera 101 may include but not limited to: a laser camera, a 2D camera installed with a filter that only allows line laser light to pass through, and the like.
  • the wavelength of the line laser light emitted by the line laser emitter 102 is not limited. The color of the line laser light will be different with different wavelengths, for example, red laser light, purple laser light, etc. can be used.
  • the line laser may be visible light or invisible light.
  • the first camera 101 may be a camera capable of collecting the line laser emitted by the line laser emitter 102 .
  • The first camera 101 can also be an infrared camera, an ultraviolet camera, a starlight camera, a high-definition camera, a 2D vision camera that passes red laser light, a 2D vision camera that passes purple laser light, and the like.
  • the first camera 101 can collect environmental images within its field of view.
  • the field of view of the first camera 101 includes a vertical field of view, a horizontal field of view, and a diagonal field of view.
  • the viewing angle of the first camera 101 is not limited, and the first camera 101 with a suitable viewing angle can be selected according to application requirements.
  • the horizontal field of view of the first camera 101 is 100.6°; or, the vertical field of view of the first camera 101 is 74.7°; or, the diagonal field of view of the first camera 101 is 133.7°.
  • the line laser transmitter 102 and the first camera 101 can be regarded as a structured light component capable of acquiring 3D information of objects in an environment scene.
  • the line laser emitted by the line laser transmitter 102 is located within the field of view of the first camera 101, and the line laser can help detect the three-dimensional point cloud data, contour, shape, height of the object in the field of view of the first camera 101 And/or width, depth, volume and other information.
  • the environment image captured by the first camera 101 and detected by the line laser is referred to as the first environment image.
  • the angle between the laser stripes formed by the line laser on the surface of the object and the horizontal plane is not limited.
  • it can be parallel or perpendicular to the horizontal plane, and can form any angle with the horizontal plane, which can be determined according to application requirements.
  • the first camera 101 can be controlled by the internal or external controller of the structured light module to work, for example, the internal or external controller of the structured light module can control the exposure frequency, exposure duration, working frequency etc. It should be understood that the controller outside the structured light module refers to the controller of an external device relative to the structured light module.
  • FIG. 1 d is a schematic diagram of the relationship between the line laser light emitted by the line laser emitter 102 and the field of view angle of the first camera 101 .
  • the letter K represents the first camera 101
  • the letters J and L represent the line laser emitters 102 on both sides of the first camera 101;
  • the straight lines KP and KM represent the two boundaries of the horizontal field of view of the first camera 101
  • ∠PKM represents the horizontal field of view of the first camera 101.
  • the straight line JN represents the center line of the line laser emitted by the line laser emitter 102J;
  • the straight line LQ represents the center line of the line laser emitted by the line laser emitter 102L.
  • Based on the first environment image detected by the line laser, the following can be obtained: the distance from the structured light module, or from the device where the structured light module is located, to the object ahead (that is, the depth information of the object ahead);
  • the 3D point cloud data, outline, shape, height and/or width, depth, volume and other information of the object ahead (such as an obstacle);
  • and 3D reconstruction can also be performed.
  • the principle of laser triangulation can be used to calculate the distance between the first camera 101 and the object in front of it through a trigonometric function.
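  • As a rough illustration of that triangulation, the sketch below intersects the camera ray and the laser ray over the known baseline using the law of sines. The specific values are assumptions for illustration (the 30 mm baseline and the 56.3-degree emission angle are borrowed from the Fig. 1e example later in the description; the 90-degree camera angle is hypothetical), not values prescribed by the patent.

```python
import math

def triangulate_depth(baseline_mm: float, cam_angle_deg: float, laser_angle_deg: float) -> float:
    """Perpendicular distance from the baseline to the illuminated point.

    baseline_mm     -- mechanical distance between the first camera and the line laser emitter
    cam_angle_deg   -- interior angle at the camera between the baseline and the ray toward
                       the laser stripe (recovered from the stripe's pixel position)
    laser_angle_deg -- interior angle at the emitter between the baseline and the emitted ray
                       (the emission angle)
    """
    b = math.radians(cam_angle_deg)
    g = math.radians(laser_angle_deg)
    # Law of sines in the camera-emitter-point triangle, then project onto the depth axis.
    return baseline_mm * math.sin(b) * math.sin(g) / math.sin(b + g)

# Assumed illustrative values: 30 mm baseline, 56.3 deg emission angle, camera ray
# perpendicular to the baseline. The result, about 45 mm, matches the distance quoted
# for line IH in the Fig. 1e example.
print(round(triangulate_depth(30.0, 90.0, 56.3), 1))  # 45.0
```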
  • the implementation form of the second camera 103 is not limited. All visual devices that can collect visible light images are applicable to the embodiments of this application. Visible light images can present the color features, texture features, shape features, and spatial relationship features of objects in the environment, and can help identify the type and material of objects.
  • the second environment image collected by the second camera 103 within its field of view is a visible light image.
  • the second camera 103 may include but not limited to: a monocular RGB camera, a binocular RGB camera, and the like. Wherein, the monocular RGB camera includes one RGB camera, the binocular RGB camera includes two RGB cameras, and the RGB camera is a 2D visual camera capable of collecting RGB images.
  • The second camera 103 can collect environment images within its field of view.
  • the field of view of the second camera 103 includes a vertical field of view, a horizontal field of view, and a diagonal field of view.
  • the field of view of the second camera 103 is not limited, and the second camera 103 with a suitable field of view can be selected according to application requirements.
  • the horizontal field of view of the second camera 103 is 148.3°; or, the vertical field of view of the second camera 103 is 125.8°; or, the diagonal field of view of the second camera 103 is 148.3°.
  • The optical filter of the RGB camera does not pass the light that is emitted by the line laser emitter 102 and reflected by objects. Therefore, the RGB camera captures visible-light images that do not contain the laser stripes produced when the line laser strikes an object. It can be understood that the second environment image collected by the second camera 103 within its field of view is a visible-light image that does not include laser stripes.
  • the second camera 103 can be controlled by the internal or external controller of the structured light module to work, for example, the internal or external controller of the structured light module can control the exposure frequency, exposure duration, working frequency etc.
  • the structured light module may further include an indicator light 105 , the on and off state of the indicator light 105 indicates the working state of the second camera 103 .
  • the indicator light 105 when the indicator light 105 is on, it means that the second camera 103 is in a working state.
  • the indicator light 105 is off, indicating that the second camera 103 is in an off state.
  • The controller inside or outside the structured light module can control the indicator light 105; for example, it can control the on/off state of the indicator light 105 based on the working status information of the second camera 103.
  • the second camera 103 and the indicator light 105 can be regarded as visual sensor components in the structured light module.
  • The same controller can be used to control the line laser emitters 102, the first camera 101, the indicator light 105 and the second camera 103, or different controllers can be used to control them respectively; this is not limited.
  • a controller may be provided inside the structured light module, or no controller may be provided inside the structured light module.
  • the controller inside the structured light module is referred to as the module controller 104 .
  • the module controller 104 in the dashed box is an optional component of the structured light module.
  • the embodiment of the present application does not limit the implementation form of the module controller 104, for example, it may be but not limited to a processor such as CPU, GPU or MCU.
  • the embodiment of the present application also does not limit the manner in which the module controller 104 controls the structured light module. Any implementation manner that can realize the function of the structured light module is applicable to the embodiments of this application.
  • a module controller 104 may be provided inside the structured light module.
  • The module controller 104 controls the line laser emitters 102, the first camera 101, the indicator light 105 and the second camera 103, and undertakes the data processing tasks for the image data collected by the first camera 101 and the second camera 103.
  • the structured light module can also perform data interaction with the main controller 106 in the self-mobile device.
  • The module controller 104 in the structured light module can communicate with the main controller 106 through an SPI (Serial Peripheral Interface) interface.
  • The structured light module sends data to the main controller 106 through the SPI interface; the structured light module can therefore act as the master device of the SPI interface, and the main controller 106 as the slave device. If the main controller 106 needs to send data to the structured light module, it can notify the structured light module by pulling the level of an additional IO pin high, and the structured light module receives and parses the data or instruction from the main controller 106 the next time it sends data.
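  • This exchange can be pictured with the toy sketch below. It is a schematic illustration only: the class and function names are hypothetical placeholders, not a real SPI driver API, and the framing of the data is invented for the example.

```python
# Schematic sketch of the module/main-controller exchange described above.
# The structured light module is the SPI master and pushes image frames; the main
# controller raises an extra IO pin when it has a command queued, and the module
# reads and parses that command during its next transfer.

class FakeBus:
    """Stand-in for the SPI bus plus the additional IO pin (hypothetical)."""
    def __init__(self) -> None:
        self.io_pin_high = False    # driven by the main controller
        self.queued_downlink = b""  # data the main controller wants to send

    def spi_transfer(self, payload: bytes) -> bytes:
        # Full-duplex transfer: master sends payload, clocks back the slave's data.
        downlink, self.queued_downlink = self.queued_downlink, b""
        return downlink

def module_send_frame(bus: FakeBus, frame: bytes) -> None:
    downlink = bus.spi_transfer(frame)   # push the environment-image frame
    if bus.io_pin_high and downlink:     # main controller had signalled a command
        print("module received command:", downlink)
        bus.io_pin_high = False

bus = FakeBus()
bus.io_pin_high = True                   # main controller raises the IO pin ...
bus.queued_downlink = b"SET_EXPOSURE"    # ... and queues a command
module_send_frame(bus, b"<first environment image frame>")
```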
  • the structured light module only undertakes the task of image acquisition, does not undertake or undertakes less calculation tasks related to image data, and the main controller 106 undertakes all or most of the calculation tasks related to image data.
  • The party associated with the self-moving device can, according to its own application requirements, deploy a corresponding AI (Artificial Intelligence) algorithm in the main controller 106 to process the visible-light image data collected by the structured light module and obtain the corresponding AI recognition results.
  • AI algorithms include, but are not limited to, the following algorithms: algorithms for identifying information such as types and materials of objects in the environment; algorithms for creating three-dimensional maps; algorithms for avoiding or overcoming obstacles.
  • The main controller 106 is also used to identify the three-dimensional point cloud data, outline, shape, height and/or width, volume, etc. of objects in the environment, and to identify the color features, texture features, shape features, spatial relationship features, etc. of objects in the environment.
  • the module controller 104 can cooperate with the line laser emitter 102, the first camera 101 and the indicator light 105.
  • The module controller 104 may also be in electrical communication with the main controller 106 of the self-moving device.
  • The second camera 103 in the structured light module can also be electrically connected to the main controller 106.
  • The module controller 104 is used to control the exposure of the first camera 101, and to control the line laser emitter 102 to emit a line laser during the exposure period, so that the first camera 101 acquires a first environment image detected by the line laser.
  • the main controller 106 is configured to control the exposure of the second camera 103 so that the second camera 103 collects a second environment image.
  • the main controller 106 is also used to send the working status information of the second camera 103 to the module controller 104; the module controller 104 is also used to control the light on and off of the indicator light 105 according to the working status information of the second camera 103 state.
  • The downtilt angle of the optical axis of the first camera is denoted as α; the vertical field of view of the first camera is denoted as β; the installation height of the first camera is h; the measurement blind-zone distance of the first camera is denoted as d; the range of the structured light module (that is, the detection distance) is denoted as Range; and the distance from the intersection point P between the optical axis of the first camera and the ground to the installation position of the first camera is denoted as L.
  • L is set to half of the range of the structured light module, or to a value near half of the range, so that the center of the image (the image area near the optical axis) is aligned with the central area of the detection range, thereby improving measurement accuracy. Therefore: L ≈ Range/2.
  • The downtilt angle of the optical axis of the line laser emitter is denoted as α; the light output angle of the line laser emitter is denoted as β; the installation height of the line laser emitter is h; the starting distance of the ground spot of the line laser emitter (that is, the blind-zone distance) is denoted as d; the range of the structured light module (that is, the detection distance) is denoted as Range; and the distance from the intersection point P between the optical axis of the line laser emitter and the ground to the installation position of the line laser emitter is denoted as L.
  • L is set to 3/4 of the range of the structured light module, or to a value near 3/4 of the range, so that the strongest light intensity of the line laser emitter irradiates the far end of the range, thereby improving the detection ability of the structured light module for ground objects. Therefore: L ≈ 3·Range/4.
  • The inclination angles of the optical axes of the first camera 101 and of the line laser emitter 102 relative to the horizontal plane parallel to the ground are not limited. Further optionally, the optical axis of the line laser emitter 102 is inclined downward at a certain angle relative to the horizontal plane parallel to the ground, so that the line laser with high light energy irradiates the core image-acquisition area of the first camera 101, which is beneficial to increasing the detection distance of the structured light module.
  • The optical axis of the first camera 101 is inclined downward at a certain angle relative to the horizontal plane parallel to the ground, so that the image-acquisition area with small viewing-angle distortion and high illumination is aligned with the key image-sensing area of the first camera 101, which is beneficial to improving the detection performance of the structured light module.
  • In other words, the optical axis of the first camera 101 is inclined at a certain angle relative to the horizontal plane parallel to the ground.
  • the optical axis of the line laser emitter 102 is inclined downward at a certain angle relative to the horizontal plane parallel to the ground.
  • the optical axis of the first camera 101 is inclined downward at a first angle relative to the horizontal plane parallel to the ground, and the optical axis of the line laser emitter 102 is inclined downward at a second angle relative to the horizontal plane parallel to the ground, and the second angle less than the first angle.
  • the angle range of the first angle is [0, 40] degrees, further optionally, the angle range of the first angle is [11, 12] degrees; correspondingly, the angle range of the second angle is [5, 10] degrees, further optionally, the angle range of the second angle is [7.4, 8.4] degrees.
  • the first angle is 11.5 degrees
  • the second angle is 7.9 degrees.
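  • Treating these placement rules as simple right-triangle geometry, the preferred downtilt angles can be cross-checked against the L ≈ Range/2 and L ≈ 3·Range/4 rules above. The sketch below does this under the assumption of a 47 mm installation height (one of the example mounting heights given later); the implied range is an inference for illustration, not a figure stated in the text.

```python
import math

def ground_hit_distance(install_height_mm: float, downtilt_deg: float) -> float:
    """Horizontal distance at which a downward-tilted optical axis meets the ground
    (simple right-triangle model: tan(downtilt) = height / distance)."""
    return install_height_mm / math.tan(math.radians(downtilt_deg))

h = 47.0  # assumed installation height above the ground

L_camera = ground_hit_distance(h, 11.5)    # preferred camera downtilt (first angle)
L_emitter = ground_hit_distance(h, 7.9)    # preferred emitter downtilt (second angle)
print(f"camera axis meets ground at  ~{L_camera:.0f} mm")   # ~231 mm
print(f"emitter axis meets ground at ~{L_emitter:.0f} mm")  # ~339 mm

# Cross-check against L ~ Range/2 (camera) and L ~ 3*Range/4 (emitter):
# both point to a module range of roughly 450-460 mm under this assumption.
print(f"range implied by camera rule  ~{2 * L_camera:.0f} mm")
print(f"range implied by emitter rule ~{4 * L_emitter / 3:.0f} mm")
```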
  • the embodiment of the present application does not limit the light output angle of the line laser emitter.
  • the angle range of the light output angle of the line laser transmitter is [70, 80] degrees, preferably, the light output angle of the line laser transmitter is 75°.
  • Table 1 shows the test data of the first camera in different test scenarios.
  • the inclination of the optical axis of the first camera means that the optical axis of the first camera is inclined downward at a certain angle to the horizontal plane parallel to the ground
  • the non-inclined optical axis of the first camera means that the optical axis of the first camera is parallel to the ground plane.
  • the ground is parallel to the horizontal plane.
  • Table 1 gives the detection distances in different test scenarios when the line laser emitter on the left side of the first camera emits line laser light, and the detection distances in different test scenarios when the line laser emitter on the right side of the first camera emits line laser light. It can be seen from Table 1 that, compared with leaving the optical axis of the first camera non-inclined, tilting the optical axis of the first camera downward can effectively increase the detection distance of the structured light module.
  • Distance measurement data are compared below for the case where the optical axis of the first camera is parallel to the ground (that is, the optical axis of the first camera is not inclined) and the case where the optical axis of the first camera is tilted downward (that is, the optical axis of the first camera is inclined).
  • L: the distance from the measured object to the first camera in the structured light module;
  • h: the height from the measurement point on the measured object (that is, the measured position point) to the ground.
  • Compared with the solution in which the optical axis of the first camera is parallel to the ground, the solution in which the optical axis of the first camera is tilted downward significantly reduces the distance error and achieves higher measurement accuracy.
  • In Fig. 1u, the height measurement of an object above ground level by a self-moving device is illustrated as an example.
  • The height h measured for an object above ground level is generally a positive value.
  • The height of objects at or below ground level may also be measured, so negative values of h are included in Table 2.
  • the inclination angle of the optical axis of the second camera 103 relative to the horizontal plane parallel to the ground is not limited.
  • the optical axis of the second camera 103 is parallel to the horizontal plane parallel to the ground, that is, the optical axis of the second camera 103 is inclined downward by 0° relative to the horizontal plane parallel to the ground.
  • the optical shaping lens of the line laser emitter 102 is not limited.
  • the optical shaping lens of the line laser emitter 102 may be a wave mirror or a cylindrical mirror.
  • Figure 1v shows a wave mirror.
  • the cross-sectional shape of the wave mirror shown in Figure 1v is circular, but it does not mean that the cross-sectional shape of the wave mirror is limited to a circle, and can also be oval, square, etc.
  • Both the thickness d and the diameter D of the wave mirror are selected according to actual application requirements.
  • the error range of the thickness d is [-0.1, 0.1] mm
  • the error range of the diameter D is [-0.05, 0.05] mm.
  • the thickness d is 2.10 mm and the diameter is 8 mm.
  • Figure 1x illustrates a cylindrical mirror, and the outer diameter ΦD and length L of the cylindrical mirror are selected according to actual application requirements.
  • The error range of the outer diameter ΦD is [0, 0.05] mm
  • the error range of the length L is [-0.1, 0.1] mm.
  • FIG. 1w shows a light intensity distribution diagram of the line laser emitted by the line laser emitter 102 with a wave mirror
  • FIG. 1y shows a light intensity distribution diagram of the line laser emitted by the line laser emitter 102 with a cylindrical mirror.
  • In these figures, the vertical axis is the normalized light intensity, and the horizontal axis is the angle of each line laser ray emitted by the line laser emitter 102 relative to the optical axis, where 0 degrees represents the direction of the optical axis.
  • the light intensity difference between the optical axis of the wave mirror and the two sides of the optical axis is small, the light intensity in the area around the optical axis is stronger (the light intensity of the line laser corresponding to the black line segment in Figure 1w is stronger), and the light intensity in the area far away from the optical axis is weaker (The light intensity of the line laser corresponding to the gray line segment in Figure 1w is weak).
  • the light intensity of the line laser emitted by the line laser emitter and hitting the intersection point P of the horizontal plane parallel to the ground is the strongest.
  • the light intensity of the line laser beams hitting areas other than the intersection point P is relatively weak.
  • When the line laser emitter 102 uses a wave mirror, the line laser rays emitted by the line laser emitter 102 within an angle range of [-30, 30] degrees relative to the optical axis have the strongest light intensity.
  • When the line laser emitter 102 uses a cylindrical mirror, the line laser rays emitted by the line laser emitter 102 within an angle range of [-10, 10] degrees relative to the optical axis have the strongest light intensity.
  • The line laser emitter 102 can therefore tilt its optical axis downward and at the same time use a cylindrical mirror, so that the line laser with the highest light intensity irradiates the key area that the structured light module needs to detect, increasing the image brightness of that key area and thereby further increasing the detection distance of the structured light module.
  • the total number of line laser emitters 102 is not limited, for example, it may be two or more.
  • the number of line laser emitters 102 distributed on each side of the first camera 101 is also not limited, and the number of line laser emitters 102 on each side of the first camera 101 can be one or more;
  • The numbers of line laser emitters 102 on the two sides may be the same or different.
  • In FIG. 1a, one line laser emitter 102 disposed on each side of the first camera 101 is taken as an example for illustration, but it is not limited thereto.
  • two line laser emitters 102 may be arranged on the left side of the first camera 101
  • one line laser emitter 102 may be arranged on the right side of the first camera 101 .
  • two, three or five line laser emitters 102 are arranged on the left and right sides of the first camera 101 .
  • the distribution form of the line laser emitters 102 on both sides of the first camera 101 is not limited, for example, it may be evenly distributed, or non-uniformly distributed, symmetrically distributed, or asymmetrically distributed.
  • Uniform distribution and non-uniform distribution may mean that the line laser emitters 102 distributed on the same side of the first camera 101 are uniformly or non-uniformly distributed; it can also be understood as meaning that the line laser emitters 102 distributed on both sides of the first camera 101 are, as a whole, uniformly or non-uniformly distributed.
  • the symmetrical distribution and the asymmetrical distribution mainly mean that the line laser emitters 102 distributed on both sides of the first camera 101 are distributed symmetrically or asymmetrically as a whole.
  • the symmetry here includes not only the equivalence in quantity, but also the symmetry in the installation position.
  • the structured light module shown in FIG. 1 a there are two line laser emitters 102 , and the two line laser emitters 102 are symmetrically distributed on both sides of the first camera 101 .
  • the installation position relationship between the line laser transmitter 102 and the first camera 101 is not limited, and any installation position relationship in which the line laser transmitter 102 is distributed on both sides of the first camera 101 is applicable to this application.
  • the installation position relationship between the line laser transmitter 102 and the first camera 101 is related to the application scenario of the structured light module. According to the application scenario of the structured light module, the installation position relationship between the line laser emitter 102 and the first camera 101 can be flexibly determined.
  • the installation location relationship here includes the following aspects:
  • the line laser emitter 102 and the first camera 101 can be located at different heights.
  • the line laser emitters 102 on both sides are higher than the first camera 101, or the first camera 101 is higher than the line laser emitters 102 on both sides; or the line laser emitter 102 on one side is higher than the first camera 101, The line laser transmitter 102 on the other side is lower than the first camera 101 .
  • The line laser emitter 102 and the first camera 101 may also be located at the same height; more preferably, they are located at the same height.
  • The structured light module is usually installed on a certain device (such as a robot, a purifier, an unmanned vehicle or another self-moving device); in this case, the line laser emitter 102 and the first camera 101 are at the same distance from the working surface (such as the ground) on which the device is located, for example, both are 47 mm, 50 mm, 10 cm, 30 cm or 50 cm from the working surface.
  • the installation distance refers to the mechanical distance (or referred to as the baseline distance) between the line laser emitter 102 and the first camera 101 .
  • The mechanical distance between the line laser emitter 102 and the first camera 101 can be flexibly set according to the application requirements of the structured light module. Information such as the mechanical distance between the line laser emitter 102 and the first camera 101, the detection distance of the device (such as a robot) where the structured light module is located, and the diameter of the device determines, to a certain extent, the size of the measurement blind zone.
  • For a given device (such as a robot), its diameter is fixed, while the measurement range and the mechanical distance between the line laser emitter 102 and the first camera 101 can be flexibly set according to requirements, which means that the mechanical distance and the blind-zone range are not fixed values.
  • the blind area should be reduced as much as possible.
  • The greater the mechanical distance between the line laser emitter 102 and the first camera 101, the greater the controllable distance range, which is beneficial to better controlling the size of the blind zone.
  • The line laser emitters 102, the indicator light 105, the first camera 101 and the second camera 103 may be installed at the same height or at different heights.
  • the second camera 103 or the indicator light 105 may be distributed on the left side, the right side, the upper side or the lower side of the first camera 101 .
  • The second camera 103 may be located 17 mm (millimeters) to the right of the first camera 101.
  • the indicator light 105 and the second camera 103 are symmetrically arranged on both sides of the first camera 101 .
  • the structured light module is applied to the sweeping robot, for example, it can be installed on the collision plate of the sweeping robot or on the robot body.
  • a reasonable mechanical distance range between the line laser transmitter 102 and the first camera 101 is given as an example below.
  • the mechanical distance between the line laser emitter 102 and the first camera 101 may be greater than 20 mm.
  • the mechanical distance between the line laser emitter 102 and the first camera 101 is greater than 30 mm.
  • The mechanical distance between the line laser emitter 102 and the first camera 101 is greater than 41 mm. It should be noted that the mechanical distance ranges given here are applicable not only to the scenario where the structured light module is applied to a sweeping robot, but also to applications of the structured light module on other devices whose size is close or similar to that of a sweeping robot.
  • the emission angle refers to the angle between the center line of the line laser emitter 102 emitting the line laser and the installation baseline of the line laser emitter 102 after installation.
  • the installation baseline refers to a straight line where the line laser emitter 102 and the first camera 101 are located when the line laser emitter 102 and the first camera 101 are located at the same installation height.
  • the emission angle of the line laser emitter 102 is not limited.
  • the emission angle is related to the detection distance of the equipment (such as a robot) where the structured light module is located, the radius of the equipment, and the mechanical distance between the line laser transmitter 102 and the first camera 101 .
  • When these quantities are fixed, the emission angle of the line laser emitter 102 can be obtained directly through the trigonometric relationship; that is, the emission angle is a fixed value.
  • If the detection distance of the device (such as a robot) and the mechanical distance between the line laser emitter 102 and the first camera 101 are allowed to vary, the emission angle of the line laser emitter 102 may vary within a certain angle range, for example 50-60 degrees, but it is not limited thereto.
  • the emission angle of the line laser emitter 102 is 55.26°.
  • In FIG. 1d, the letter B represents the first camera 101, and the letters A and C represent the line laser emitters 102 located on both sides of the first camera 101; the letter H represents the intersection point of the line laser center lines within the field of view of the first camera 101; the straight lines BD and BE represent the two boundaries of the horizontal field of view of the first camera 101, and ∠DBE represents the horizontal field of view of the first camera 101.
  • the line AG represents the center line of the line laser emitted by the line laser emitter 102A; the line CF represents the center line of the line laser emitted by the line laser emitter 102C.
  • The straight line BH represents the center line of the field of view of the first camera 101; that is, in Fig. 1e, the center lines of the line lasers emitted by the emitters 102 on both sides intersect the center line of the field of view of the first camera 101.
  • The sweeping robot has a radius of 175 mm and a diameter of 350 mm; the line laser emitters 102A and 102C are symmetrically distributed on both sides of the first camera 101B, and the mechanical distance between the line laser emitter 102A or 102C and the first camera 101B is 30 mm; the horizontal field of view ∠DBE of the first camera 101B is 67.4 degrees; when the detection distance of the sweeping robot is 308 mm, the emission angle of the line laser emitter 102A or 102C is 56.3 degrees.
  • The distance between the straight line IH passing through point H and the installation baseline is 45 mm, and the distance between the straight line IH and the tangent line to the edge of the sweeping robot is 35 mm, i.e., the measurement blind zone is 35 mm.
  • the various numerical values shown in FIG. 1e are for illustrative purposes only and are not limited thereto.
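  • The 56.3-degree emission angle quoted above is consistent with the distances read off the figure; the short check below uses only those quoted values (the 10 mm recess of the baseline behind the edge tangent is a derived number, not one stated in the text).

```python
import math

baseline_mm = 30.0           # mechanical distance between emitter 102A (or 102C) and camera 101B
line_IH_to_baseline = 45.0   # distance from line IH (through point H) to the installation baseline
line_IH_to_edge = 35.0       # distance from line IH to the tangent at the robot's edge (blind zone)

# Emission angle between the laser center line (A to H) and the installation baseline.
emission_angle_deg = math.degrees(math.atan2(line_IH_to_baseline, baseline_mm))
print(f"emission angle ~ {emission_angle_deg:.1f} deg")  # ~56.3 deg, as stated for Fig. 1e

# The installation baseline therefore sits about 45 - 35 = 10 mm behind the edge tangent.
print(f"baseline recess behind robot edge ~ {line_IH_to_baseline - line_IH_to_edge:.0f} mm")
```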
  • the embodiment of the present application does not limit the angle between the optical axis of the line laser emitter and the baseline of the structured light module. For ease of understanding, continue to describe the calculation process of the angle between the optical axis of the line laser emitter and the baseline of the structured light module in conjunction with FIG. 1e.
  • The length of the baseline of the structured light module (that is, the mechanical distance between the line laser emitter and the first camera) is denoted as l; the angle between the optical axis of the line laser emitter and the baseline of the structured light module is denoted as θ; the vertical distance from the intersection point of the optical axis of the line laser emitter with the tangent line at the edge of the self-moving device to the baseline is denoted as L; the vertical distance from the center of the first camera to the tangent line at the edge of the self-moving device is denoted as d; the outer contour diameter of the self-moving device is denoted as ΦD; and the range of the structured light module (that is, the detection distance) is denoted as Range.
  • the vertical distance L from the intersection of the optical axis of the line laser emitter and the tangent line from the edge of the mobile device to the baseline is usually set to a value close to the diameter of the outer contour of the mobile device (if the setting is too large, the detection accuracy of obstacles at this position will be low, If it is too small, the effective detection distance of the detection structured light module will be small). Therefore: L ⁇ D.
  • the angle range between the optical axis of the line laser emitter and the baseline of the structured light module is [50, 60] degrees. Further optionally, the angle between the optical axis of the line laser emitter and the baseline of the structured light module is 55.26 degrees.
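As an illustrative aid only (not part of the claimed embodiments), the geometric relationship between the baseline length l, the emission angle θ and the crossing point H of the laser center line with the camera center line can be sketched as below. The assumption that H lies at a forward distance of l·tan(θ) from the baseline, and that the device edge tangent sits a small offset ahead of the baseline, are working assumptions; with l = 30 mm and θ = 56.3° this reproduces roughly the 45 mm and 35 mm figures mentioned for Fig. 1e.

```python
import math

def laser_centerline_crossing(baseline_mm: float, emission_angle_deg: float) -> float:
    """Forward distance from the installation baseline to the point H where the
    line-laser center line crosses the camera center line, assuming the emission
    angle is measured between the laser optical axis and the baseline."""
    return baseline_mm * math.tan(math.radians(emission_angle_deg))

def blind_zone_mm(baseline_mm: float, emission_angle_deg: float, edge_offset_mm: float) -> float:
    """Blind zone in front of the device edge, assuming the edge tangent lies
    edge_offset_mm ahead of the installation baseline (illustrative assumption)."""
    return laser_centerline_crossing(baseline_mm, emission_angle_deg) - edge_offset_mm

# Example values from the Fig. 1e description: 30 mm baseline, 56.3 deg emission angle,
# edge tangent assumed 10 mm ahead of the baseline.
print(round(laser_centerline_crossing(30.0, 56.3), 1))   # ~45.0 mm
print(round(blind_zone_mm(30.0, 56.3, 10.0), 1))         # ~35.0 mm
```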
  • the structured light module further includes a driving circuit.
  • a drive circuit may be electrically connected between the module controller 104 and the line laser emitter 102 , or a drive circuit may be electrically connected between the module controller 104 and the indicator light 105 .
  • the driving circuit can amplify the control signal of the module controller 104 to the line laser emitter 102 , or can amplify the control signal of the module controller 104 to the indicator light 105 .
  • the circuit structure of the driving circuit is not limited, and any circuit structure that can amplify the signal and provide the amplified signal to the line laser transmitter 102 or the indicator light 105 is applicable to the embodiment of the present application.
  • The number of driving circuits is not limited. Different line laser emitters 102 may share one driving circuit, or each line laser emitter 102 may correspond to one driving circuit 100. More preferably, one line laser emitter 102 corresponds to one driving circuit. In Fig. 1f, one line laser emitter 102 corresponding to a first driving circuit 1001, another line laser emitter 102 corresponding to a second driving circuit 1002, and the indicator light 105 corresponding to a third driving circuit 1003 are taken as an example for illustration.
  • The structured light module provided in the embodiment of the present application also includes a supporting structure for carrying the first camera 101, the line laser emitters 102 distributed on both sides of the first camera 101, the indicator light 105, and the second camera 103.
  • the carrying structure may have multiple implementation forms, which are not limited.
  • the carrying structure includes a fixing seat 107, and may further include a fixing cover 108 used in conjunction with the fixing seat 107.
  • the structure of the structured light module with the fixing seat 107 and the fixing cover 108 will be described with reference to FIGS. 1h-1r.
  • Figure 1h- Figure 1r are the front view, isometric view and exploded view of the structured light module respectively. Due to the viewing angle, each view does not show all components, so only some components are marked in Figure 1h- Figure 1r.
  • the structured light module further includes: a fixing seat 107 ; the laser emitter, the indicator light 105 , the first camera 101 and the second camera 103 are assembled on the fixing seat 107 .
  • The fixing seat 107 includes: a main body and ends located on both sides of the main body; the indicator light 105, the first camera 101 and the second camera 103 are assembled on the main body,
  • and the line laser emitters 102 are assembled on the ends; the end face of each end faces a reference plane, so that the center line of the line laser emitter 102 intersects with the center line of the first camera 101 at one point; the reference plane is a plane perpendicular to the end face of the main body or to the tangent of the end face of the main body.
  • Three grooves 203 are opened in the middle of the main body, and the indicator light 105, the first camera 101 and the second camera 103 are installed in the corresponding grooves 203;
  • the module controller 104 can be fixed behind the fixing seat 107 .
  • the structured light module also includes a fixed cover 108 assembled above the fixed seat 107; a cavity is formed between the fixed cover 108 and the fixed seat 107 to accommodate the line laser emitter 102 , the connection line between the first camera 101 and the module controller 104 , and the connection line between the module controller 104 and the second camera 103 and the main controller 106 .
  • the second camera 103 in the structured light module can be connected to the main controller 106 through an FPC (Flexible Printed Circuit, flexible circuit board) connector.
  • Fixing elements include but are not limited to screws, bolts, and buckles, for example.
  • the structured light module further includes a fixing plate 109 assembled on the line laser emitter 102 , or an indicator light plate 201 assembled on the indicator light 105 .
  • the fixing plate 109 or the indicator light plate 201 can be a plate structure of any shape.
  • the first camera 101 is located within the outer edge of the groove 203, that is, the lens is retracted in the groove 203, which can prevent the lens from being scratched or bumped, and is beneficial to protect the lens.
  • the shape of the end surface of the main body is not limited, for example, it may be a plane, or a curved surface concave inward or outward.
  • Depending on the outer contour of the self-moving device, the shape of the end face of the main body also differs.
  • For a self-moving device with a circular or elliptical outer contour, the end surface of the main body can be implemented as an inwardly concave curved surface adapted to the outline of the self-moving device.
  • For a self-moving device with a square or rectangular outer contour, the end face of the main body can be implemented as a plane adapted to the outline of the self-moving device.
  • the self-mobile device with a circular or elliptical outline may be a sweeping robot, a window cleaning robot, etc. with a circular or elliptical outline.
  • the autonomous mobile device with a square or rectangular outline may be a floor sweeping robot, a window cleaning robot, etc. with a square or rectangular outline.
  • When the structured light module is installed on the self-moving device, the radius of the curved surface of the main body is the same as or approximately the same as the radius of the self-moving device.
  • For example, the curved surface radius of the main body can be 170 mm or approximately 170 mm, for example within the range of 170 mm to 172 mm, but is not limited thereto.
  • The emission angle of the line laser emitter 102 in the structured light module is mainly determined by the detection distance that the self-moving device needs to satisfy, the radius of the self-moving device, and the like.
  • Since the end face or end face tangent of the main body of the structured light module is parallel to the installation baseline, the emission angle of the line laser emitter 102 can also be defined as the angle between the center line of the line laser emitter 102 and the end face or end face tangent of the main body.
  • the range of the emission angle of the line laser emitter 102 may be 50-60 degrees, but it is not limited thereto.
  • the detection distance that needs to be satisfied by the self-mobile device refers to a distance range within which it needs to detect environmental information, and mainly refers to a certain distance range in front of the self-mobile device.
  • the structured light module provided by the above embodiments of the present application has a stable structure, small size, fits the appearance of the whole machine, greatly saves space, and can support various types of self-moving devices.
  • The embodiment of the present application also provides a schematic structural diagram of a self-moving device. As shown in Figure 2a, the device includes: a device body 20, on which a main controller 106 and a structured light module 21 are arranged.
  • the main controller 106 is electrically connected to the structured light module 21 .
  • the structured light module 21 includes: a first camera 101 , line laser emitters 102 distributed on both sides of the first camera 101 , and a second camera 103 .
  • the structured light module 21 further includes a module controller 104 , and the module controller 104 is electrically connected to the main controller 106 .
  • The module controller 104 controls the line laser emitter 102 to emit the line laser, controls the first camera 101 to collect the first environment image detected by the line laser while the line laser emitter 102 emits the line laser, and sends the first environment image to the main controller 106;
  • the main controller 106 controls the second camera 103 to collect a second environmental image within its field of view, and performs functional control on the self-mobile device according to the first environmental image and the second environmental image;
  • the first environment image is a laser image containing laser stripes generated by the line laser after encountering an object
  • the second environment image is a visible light image not containing laser stripes.
  • Clearance processing can be performed on the area where the FPC connector 204 is located; that is, no other objects are arranged in the area where the FPC connector 204 is located.
  • The clearance processing can reduce the probability of the FPC colliding with other objects and being damaged when the collision plate 22 of the self-moving device moves.
  • the autonomous mobile device may be any mechanical device capable of highly autonomous spatial movement in its environment, for example, it may be a robot, a purifier, a drone, and the like.
  • robots can include sweeping robots, glass cleaning robots, family escort robots, welcome robots, etc.
  • the shape of the self-mobile device will also be different according to the different implementation forms of the self-mobile device.
  • This embodiment does not limit the implementation form of the self-mobile device.
  • the outer contour shape of the self-moving device may be an irregular shape or some regular shapes.
  • the outer contour shape of the self-mobile device may be a regular shape such as a circle, an ellipse, a square, a triangle, a drop shape, or a D shape. Shapes other than regular shapes are called irregular shapes.
  • the outer contours of humanoid robots, unmanned vehicles, and drones are irregular shapes.
  • the implementation form of the main controller 106 is not limited, for example, it may be but not limited to a processor such as CPU, GPU or MCU.
  • the embodiment of the present application does not limit the specific implementation manner in which the main controller 106 controls the function of the self-mobile device according to the environment image.
  • The main controller 106 can control the self-moving device to implement various functions based on environment awareness according to the first environment image and the second environment image. For example, it can realize object recognition, tracking and classification functions based on visual algorithms; in addition, based on the high accuracy of line laser detection, it can also realize real-time, robust, high-precision positioning and map building functions, and can further provide comprehensive support for motion planning, path navigation, positioning, etc. based on the constructed high-precision environment map.
  • the main controller 106 can also control the movement of the self-mobile device according to the environment image, for example, controlling the self-mobile device to perform actions such as moving forward, retreating, and turning.
  • the structured light module 21 further includes: an indicator light 105 and a driving circuit 100 .
  • Taking the module controller 104 as an MCU as an example, the working principle of the cooperation between the MCU and the main controller 106 is described below.
  • the MCU initializes the first camera 101 through an I2C (Inter Integrated Circuit) interface.
  • the MCU sends a Trig trigger signal to the first camera 101 through the I2C interface to trigger the exposure of the first camera 101.
  • When the first camera 101 starts to expose, it also sends an LED STROBE synchronization signal to the MCU through the I2C interface; after receiving the LED STROBE synchronization signal, the MCU, on the rising edge of the LED STROBE signal, controls the frequency and current of the line laser emitter 102 through the drive circuit 100 and drives the line laser emitter 102 to emit the line laser; on the falling edge of the LED STROBE signal, the MCU turns off the line laser emitter 102.
  • The first camera 101 sends the collected picture data to the MCU through a digital video interface (Digital Video Port, DVP), the MCU processes the data, and the MCU outputs the first environment image to the main controller 106 through an SPI (Serial Peripheral Interface) interface.
  • the MCU may perform some image preprocessing operations such as denoising processing and image enhancement on the image data collected by the first camera 101 .
  • The main controller 106 can also send a control signal through the MIPI (Mobile Industry Processor Interface) interface to control the second camera 103 to collect the second environment image within its field of view, and receives, over the MIPI interface, the second environment image sent by the second camera 103.
  • the main controller 106 can also send the working status information of the second camera 103 to the MCU, so that the MCU can control the indicator light 105 to turn on or off according to the working status information of the second camera 103 and through the driving circuit 100 .
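For illustration only, the exposure and laser synchronization sequence described above can be sketched as a per-frame control loop. The `Camera`, laser drive and SPI objects and their method names are hypothetical stand-ins for the real peripherals, not an implementation defined by this application.

```python
# Minimal sketch of the MCU-side frame synchronization (illustrative assumptions).
class McuFrameLoop:
    def __init__(self, camera, laser_drive, spi_to_main):
        self.camera = camera              # first camera 101, triggered via Trig over I2C
        self.laser_drive = laser_drive    # drive circuit 100 -> line laser emitter 102
        self.spi = spi_to_main            # SPI link to the main controller 106

    def capture_one_frame(self):
        self.camera.trigger_exposure()            # Trig signal starts the exposure
        self.camera.wait_strobe_rising_edge()     # camera raises LED STROBE when exposing
        self.laser_drive.enable()                 # laser on during the exposure window
        self.camera.wait_strobe_falling_edge()    # exposure finished
        self.laser_drive.disable()                # laser off again
        raw = self.camera.read_frame_dvp()        # image data arrives over DVP
        frame = self.preprocess(raw)              # optional denoising / enhancement
        self.spi.send(frame)                      # first environment image to controller 106

    @staticmethod
    def preprocess(raw):
        # Placeholder for the optional image preprocessing step.
        return raw
```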
  • The main controller 106 can use an AI algorithm to identify the first environment image and the second environment image, so as to obtain more object information such as three-dimensional point cloud data, category, texture and material, which is more conducive to the travel control, obstacle avoidance processing and obstacle surmounting processing of the self-moving device in the working environment.
  • the specific position of the structured light module 21 on the device body 20 is not limited.
  • it may be, but not limited to, the front side, rear side, left side, right side, top, middle and bottom of the device body 20 .
  • the structured light module 21 is arranged at a middle position, a top position or a bottom position in the height direction of the device body 20 .
  • The structured light module 21 is arranged on the front side of the device body 20; the front side is the side toward which the device body 20 faces when the self-moving device moves forward.
  • a strike plate 22 is installed on the front side of the device body 20 , and the strike plate 22 is located outside the structured light module 21 .
  • FIGS. 2c and 2d they are exploded schematic diagrams of the structured light module 21 and the impact plate 22 .
  • the self-mobile device is illustrated by taking the cleaning robot as an example, but it is not limited thereto.
  • the structured light module 21 may be installed on the strike plate 22 or may not be installed on the strike plate 22, which is not limited.
  • a window 23 is opened on the collision plate 22 corresponding to the structured light module 21 to expose the first camera 101 , the line laser transmitter 102 , the indicator light 105 and the second camera 103 in the structured light module 21 .
  • three windows are opened on the collision plate 22, namely a first window 231, a second window 232 and a third window 233, wherein the second window 232 is used to expose the first camera 101 , the second camera 103 and the indicator light 105, the first window 231 and the third window 233 are respectively used to expose the corresponding line laser transmitter 102.
  • The structured light module is installed on the collision plate, which can minimize the gap between the first camera, the second camera and the collision plate, and can also reduce the occlusion of the field angles of the first camera and the second camera. A smaller second window 232 can also be used, which improves the aesthetic appearance of the self-moving device, greatly saves space, and can support various types of self-moving devices.
  • a light-transmitting protective plate is arranged on the first window 231 . It should be understood that if the self-mobile device collides with an obstacle, the light-transmitting protective plate on the first window 231 can reduce the probability of the first camera 101 or the second camera 103 being damaged by the collision. In addition, the light-transmitting protective plate can ensure the normal image collection work of the first camera 101 or the second camera 103 .
  • the first window 231 and the light-transmitting protective plate are provided with a sealing ring, which can prevent dust, water mist, etc. from contaminating the lens of the first camera 101 or the lens of the second camera 103 and causing image quality degradation.
  • the sealing ring is a sealing ring made of EVA (Ethylene Vinyl Acetate Copolymer, ethylene-vinyl acetate copolymer).
  • a sealing ring is provided between the line laser emitter 102 and the light-transmitting protection plate to prevent dust, water mist and other stains from contaminating the lens of the line laser emitter 102 to cause spot deformation or power drop.
  • the sealing ring is a sealing ring made of EVA material.
  • a light-transmitting protective plate is arranged on the second window 232 or the third window 233 .
  • The light-transmitting protective plate is a protective plate that transmits the line laser. It should be understood that if the self-moving device collides with an obstacle, the light-transmitting protective plate on the second window 232 or the third window 233 can reduce the probability of the line laser emitter 102 being damaged by the collision.
  • the structured light module 21 is installed on the inner side wall of the impact plate 22 .
  • FIG. 2 d is an exploded schematic diagram of the structured light module 21 and the impact plate 22 .
  • the distance from the center of the structured light module 21 to the working surface where the mobile device is located is in the range of 20-60 mm. In order to reduce the spatial blind area of the self-mobile device and make the field of view sufficiently large, further optionally, the distance from the center of the structured light module 21 to the working surface where the self-mobile device is located is 47mm.
  • the autonomous mobile device in this embodiment may also include some basic components, such as one or more memories, communication components, power supply components, drive components, and so on.
  • one or more memories are mainly used to store computer programs, and the computer programs can be executed by the main controller 106, causing the main controller 106 to control the self-mobile device to perform corresponding tasks.
  • the one or more memories may also be configured to store various other data to support operation on the mobile device. Examples of such data include instructions for any application or method operating on the mobile device, map data of the environment/scene where the mobile device is located, working mode, working parameters and so on.
  • the autonomous mobile device may be any mechanical device capable of highly autonomous spatial movement in its environment, for example, it may be a robot, a purifier, an unmanned vehicle, and the like.
  • the robot may include a sweeping robot, an accompanying robot or a guiding robot.
  • The description of the self-moving device here is applicable to all embodiments of the present application and will not be repeated in subsequent embodiments.
  • the structured light module that can be used by the mobile device will be described first.
  • the mobile device is equipped with a structured light module.
  • the structured light module used in the embodiment of this application generally refers to any structured light module including a structured light component and a visual sensor.
  • the structured light component includes a line laser emitter 102 and a laser camera 101.
  • the line laser emitter 102 is used to emit visible or invisible line laser
  • the laser camera 101 is responsible for collecting the laser image of the environment detected by the line laser. Specifically, after the line laser emitted by the line laser emitter 102 encounters an object in the environment, a laser stripe is formed on the object, and the laser camera 101 collects a laser image including the laser stripe within its field of view.
  • The laser image can be used to detect information such as the 3D point cloud data, outline, height, width, depth and length of objects within the field of view of the laser camera 101.
  • the self-moving device moves on the working surface (such as the ground, desktop and glass surface) according to the forward direction, and emits the line laser outward through the line laser emitter 102. If the line laser encounters an object in the front work area, Laser stripes will be formed on the object, and at this time, the laser camera 101 collects a laser image including the laser stripes.
  • Based on the principle of triangulation distance measurement and the coordinate transformation relationships between the laser camera 101 coordinate system, the device coordinate system of the self-moving device and the world coordinate system, it is not difficult to calculate, for the object corresponding to the laser stripe: the height h of each location point on the object (that is, the distance between the location point and the working surface), the depth s of each location point (that is, the distance from the location point to the self-moving device), the 3D point cloud data of each location point, the width b of the object (the width direction being perpendicular to the forward direction), and the length a of the object (the length direction being parallel to the forward direction).
  • the contour information of the object can be determined by analyzing the 3D point cloud data.
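As a simplified, non-limiting illustration of the quantities just listed, the sketch below derives rough object measurements from stripe points that are assumed to be already transformed into a device coordinate system (x lateral, y forward, z height above the working surface). The axis convention, function name and example values are assumptions for illustration only.

```python
import numpy as np

def object_dimensions(points_device: np.ndarray) -> dict:
    """Derive rough object measurements from 3D stripe points in the device frame
    (x: lateral, y: forward, z: height above the working surface)."""
    x, y, z = points_device[:, 0], points_device[:, 1], points_device[:, 2]
    return {
        "height_h": float(z.max()),            # distance above the working surface
        "depth_s": float(y.min()),             # closest forward distance to the device
        "width_b": float(x.max() - x.min()),   # extent perpendicular to the forward direction
        "length_a": float(y.max() - y.min()),  # extent parallel to the forward direction
    }

# Example: a few stripe points from an object roughly 60 mm high, 200 mm wide, 0.3 m ahead.
pts = np.array([[-0.10, 0.30, 0.06], [0.00, 0.31, 0.06], [0.10, 0.32, 0.05]])
print(object_dimensions(pts))
```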
  • The visual sensor 103 may be a visual camera capable of collecting visible light images, including but not limited to a monocular RGB camera and a binocular RGB camera. Further optionally, the optical filter of the visual sensor 103 does not pass the light that is emitted by the line laser emitter 102 and reflected by an object, so as to ensure that the visual sensor 103 collects a visible light image that does not contain the laser stripes generated by the line laser after encountering an object (such as the visible light image shown in FIG. 1), which ensures the quality of the image data collected by the visual sensor 103.
  • The above-mentioned structured light module can detect the three-dimensional point cloud data, contour, height, width, depth, length and other information of an object through the structured light component, and can perceive the color features, texture features, shape features, spatial relationship features and other information of the object through the visual sensor 103, thereby perceiving richer environmental information, which helps improve the intelligence of the self-moving device.
  • a structured light module mainly includes: a structured light component and a visual component.
  • the structured light component includes a laser camera 101, and line laser emitters 102 distributed on both sides of the laser camera 101.
  • the vision component includes a vision sensor 103 .
  • the structured light component or visual component can be controlled by the internal controller of the structured light module or an external controller.
  • the controller inside the structured light module is referred to as the module controller 104 .
  • the module controller 104 is represented by a dotted box, indicating that the module controller 104 is an optional component.
  • When the structured light module is applied to the autonomous mobile device, all or part of the components in the structured light module can work under the control of the main controller 106 of the autonomous mobile device. For ease of understanding, the description takes the structured light component working under the control of the module controller 104 and the vision component working under the control of the main controller 106 as an example.
  • the line laser emitter 102 can be installed above, below, left or right of the laser camera 101 , as long as the line laser emitted by the line laser emitter 102 is within the field of view of the laser camera 101 .
  • the line laser emitter 102 is installed on the left and right sides of the laser camera 101 as an example for illustration.
  • the laser beam emitted by the line laser emitter 102 hits obstacles or the ground surface to form laser stripes that are horizontal to the ground in front and perpendicular to the direction of movement of the mobile device. This type of installation can be called horizontal installation.
  • Figure 1 shows a schematic diagram of the installation state and application state of the structured light module on the self-mobile device.
  • In the process of the self-moving device moving forward, the structured light module can be controlled to work in a certain way, for example periodically (every 20 ms), to conduct environmental detection and obtain a set of laser image data; each laser image includes the laser stripes formed by the line laser hitting the surface of an object or the ground.
  • One laser stripe contains multiple 3D data, and the 3D data on the laser stripes in a large number of laser images can form 3D point cloud data.
  • the module controller 104 controls the exposure of the laser camera 101 on the one hand, and on the other hand can control the line laser emitter 102 to emit line laser to the outside during the exposure period of the laser camera 101, so that the laser camera 101 collects the light detected by the line laser. to the laser image.
  • the module controller 104 may control the line laser emitters 102 located on both sides of the laser camera 101 to work simultaneously or alternately, which is not limited.
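Purely as an illustrative sketch of one possible scheduling choice (the alternating case, with a 20 ms period as mentioned above), the loop below triggers one exposure per period while alternating between the emitters on the two sides. The `module` object and its methods are hypothetical stand-ins, not an API defined by this application.

```python
import itertools
import time

def detection_loop(module, period_s: float = 0.02, frames: int = 10):
    """Illustrative scheduling loop: every period, trigger one exposure while
    alternating between the line laser emitters on the two sides of the laser camera."""
    for side in itertools.islice(itertools.cycle(("left", "right")), frames):
        start = time.monotonic()
        module.select_emitter(side)       # choose emitter A or C (hypothetical call)
        module.capture_one_frame()        # emitter on only during the exposure window
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, period_s - elapsed))
```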
  • the embodiment of the present application does not limit the implementation form of the module controller 104, for example, it may be but not limited to a processor such as CPU, GPU or MCU.
  • the embodiment of the present application also does not limit the manner in which the module controller 104 controls the structured light module. Any implementation manner that can realize the function of the structured light module is applicable to the embodiments of this application.
  • the module controller 104 can control the exposure frequency, exposure duration, working frequency, etc. of the laser camera 101 .
  • the laser camera 101 collects laser images detected by the line laser while the line laser transmitter 102 emits the line laser under the control of the module controller 104 .
  • The distance from the structured light module, or from the device where the structured light module is located, to the front object (that is, the depth information of the front object) can be calculated, and information of the front object (such as the 3D point cloud data, contour, shape, height and/or width, and volume of obstacles) can be obtained; further, 3D reconstruction can also be performed.
  • the principle of laser triangulation can be used to calculate the distance between the laser camera 101 and the object in front of it through a trigonometric function.
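For illustration, a classic planar triangulation calculation (a generic textbook formulation, not necessarily the exact formulation used in this application) is sketched below: the emitter and the camera are separated by a baseline, the emitter projects at a fixed angle, and the camera observes the lit point along a ray whose angle follows from the stripe's pixel position; the law of sines then gives the distance. All parameter values in the example are assumptions.

```python
import math

def triangulate_depth(baseline_m: float, emit_angle_rad: float, ray_angle_rad: float) -> float:
    """Planar triangulation sketch: emitter and camera separated by baseline_m; the emitter
    projects at emit_angle_rad to the baseline and the camera sees the lit point along a ray
    at ray_angle_rad to the baseline. Returns the forward (perpendicular) distance."""
    apex = math.pi - emit_angle_rad - ray_angle_rad          # angle at the lit point
    camera_to_point = baseline_m * math.sin(emit_angle_rad) / math.sin(apex)
    return camera_to_point * math.sin(ray_angle_rad)         # distance perpendicular to baseline

# Example: 30 mm baseline, emitter at 56.3 deg, stripe seen at 110 deg from the baseline.
print(round(triangulate_depth(0.030, math.radians(56.3), math.radians(110.0)), 4))  # ~0.099 m
```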
  • the implementation form of the line laser emitter 102 is not limited, and may be any equipment/product form capable of emitting line laser.
  • line laser emitter 102 may be, but is not limited to, a laser tube.
  • the wavelength of the line laser light emitted by the line laser emitter 102 is not limited. The color of the line laser light will be different with different wavelengths, such as red laser light, purple laser light, etc.
  • the line laser may be visible light or invisible light.
  • the implementation form of the laser camera 101 is not limited. Any visual device that can collect a laser image of the environment detected by the line laser emitted by the line laser emitter 102 is applicable to this embodiment of the present application.
  • the laser camera 101 may be a camera capable of collecting the line laser emitted by the line laser emitter 102 .
  • The laser camera 101 can also be an infrared camera, an ultraviolet camera, a starlight camera, a high-definition camera, a 2D vision camera with a red laser, or a 2D vision camera with a purple laser.
  • the laser camera 101 can collect laser images within its field of view.
  • the field of view of the laser camera 101 includes a vertical field of view, a horizontal field of view, and a diagonal field of view.
  • the viewing angle of the laser camera 101 is not limited, and a laser camera 101 with a suitable viewing angle can be selected according to application requirements.
  • the horizontal field of view of the laser camera 101 is 100.6°; or, the vertical field of view of the laser camera 101 is 74.7°; or, the diagonal field of view of the laser camera 101 is 133.7°.
  • the angle between the laser stripes formed by the line laser on the surface of the object and the horizontal plane is not limited, for example It can be parallel or perpendicular to the horizontal plane, and can form any angle with the horizontal plane, depending on the application requirements.
  • the implementation form of the vision sensor 103 is not limited. All visual devices that can collect visible light images are applicable to the embodiments of this application. Visible light images can present the color features, texture features, shape features, and spatial relationship features of objects in the environment, and can help identify the type and material of objects.
  • the environmental image collected by the vision sensor 103 within its field of view is a visible light image.
  • the visual sensor 103 may include but not limited to: a monocular RGB camera, a binocular RGB camera, and the like. Wherein, the monocular RGB camera includes one RGB camera, the binocular RGB camera includes two RGB cameras, and the RGB camera is a 2D visual camera capable of collecting RGB images.
  • the laser camera 101 can collect environmental images within its field of view.
  • the field of view of the visual sensor 103 includes a vertical field of view, a horizontal field of view, and a diagonal field of view.
  • the viewing angle of the visual sensor 103 is not limited, and the visual sensor 103 with a suitable viewing angle can be selected according to application requirements.
  • the horizontal viewing angle of the visual sensor 103 is 148.3°; or, the vertical viewing angle of the visual sensor 103 is 125.8°; or, the diagonal viewing angle of the visual sensor 103 is 148.3°.
  • the optical filter of the RGB camera cannot penetrate the reflected light emitted by the line laser emitter 102 and reflected by the object. Therefore, the RGB camera can capture visible light images that do not contain the laser streaks generated by the line laser after encountering the object. It can be understood that the environmental image collected by the vision sensor 103 within its field of view is a visible light image that does not contain laser stripes.
  • the visual sensor 103 works under the control of the main controller 106 .
  • the main controller 106 can control the exposure frequency, exposure duration, working frequency, etc. of the visual sensor 103 .
  • the vision component in the structured light module may further include an indicator light 105 , and the on and off state of the indicator light 105 indicates the working state of the vision sensor 103 .
  • the indicator light 105 lights up, it means that the visual sensor 103 is in a working state.
  • the indicator light 105 is off, indicating that the visual sensor 103 is in an off state.
  • the indicator light 105 works under the control of the module controller 104 .
  • the module controller 104 can interact with the main controller 106 , obtain the working status of the visual sensor 103 sent by the main controller 106 , and control the on-off status of the indicator light 105 based on the working status of the visual sensor 103 .
  • Optionally, the module controller 104 can control the image acquisition work of the structured light component and the vision component, and perform data processing on the laser image data and visible light image data collected by the structured light component and the vision component.
  • Alternatively, the main controller 106 is responsible for performing data processing on the laser image data and visible light image data collected by the structured light component and the vision component.
  • the structured light module sends the laser image data collected by the structured light component to the main controller 106 through the module controller 104, and at the same time, the main controller 106 acquires the visible light image data collected by the vision component.
  • The main controller 106 can analyze the laser image data to obtain the object's three-dimensional point cloud data, outline, shape, height and/or width, volume and so on.
  • the main controller 106 can also analyze the visible light image data to identify information such as color features, texture features, shape features, spatial relationship features, types, and materials of objects.
  • Fig. 4 is a flowchart of an operation method provided by an exemplary embodiment of the present application. This method is applicable to a self-moving device equipped with a structured light module; for the introduction of the structured light module, please refer to the foregoing content. As shown in Fig. 4, the method includes the following steps:
  • control the self-mobile device to perform the operation task on the target object existing in the front operation area according to the behavior mode of the target machine.
  • the structured light module can be used to detect the environmental information of the working area in front.
  • the front work area refers to the range that the self-mobile device can identify along the traveling direction during the operation of the self-mobile device.
  • the environmental information of the front work area will change as the self-mobile device travels.
  • At different positions of the self-moving device, the environmental information of the working area in front of it is different.
  • the structured light component is used to collect the structured light data in the front work area, that is, after the line laser emitter emits the line laser to the front work area, the laser camera collects the laser image data of the environment detected by the line laser.
  • the visual sensor is used to collect image data in the front work area, and the image data is visible light image data.
  • the structured light data and image data in the front work area are acquired, firstly, based on the image data, it is identified whether there is an object in the front work area and the category to which the object belongs.
  • the object category is to classify the objects from the perspective of the impact of the objects on the operation of the self-mobile device.
  • object categories can be roughly divided into: easy to get stuck, easy to wind, easy to get dirty, and movable, etc., but not limited to the above categories.
  • Easy-to-stuck objects refer to objects in the working environment that are likely to cause the self-moving device to be trapped or stuck; easily dirty objects refer to objects in the working environment that tend to cause the area where they are located to become dirty; movable objects refer to movable objects existing in the working environment. These objects may interfere with the normal movement of the self-moving device or with the execution of its tasks; for example, the self-moving device cannot clean the place occupied by a movable object, and some special processing is required.
  • Objects existing in the working environment include, for example: trash cans, charging stands, shoes, bowls, U-shaped chairs, bar stools, sliding door slides, clothes, carpet edges, wires, people, and animals.
  • some objects belong to the category of objects that are prone to getting stuck, some belong to the category of objects that are prone to entanglement, some belong to the category of objects that are prone to dirt, and some belong to the category of objects that are easy to move.
  • objects that are easy to get stuck include but are not limited to: U-shaped chairs, bar stools, and sliding door slides.
  • Objects in the entanglement category include, but are not limited to: clothing, carpet fringing, electrical wires.
  • Objects in the easy-to-dirty category include but are not limited to: trash cans, charging stations, shoes, and bowls.
  • movable objects include but are not limited to: people, animals, and the like.
  • the category of the object existing in the front working area identified based on the image data in the front working area collected by the visual sensor in the structured light module is called the target object category.
  • the target object category may include any one or several of the several object categories listed above, which is not limited.
  • the self-mobile device can use AI (Artificial Intelligence, artificial intelligence) algorithm to perform object recognition on the image data collected by the visual sensor in the structured light module, so as to obtain the target object category existing in the work area in front of the self-mobile device .
  • the AI recognition result includes which object in the working environment the object is, and the category to which the object belongs.
  • object recognition is performed on the image data collected by the visual sensor in the structured light module.
  • it can be: using a pre-trained neural network model to perform object recognition on the image data collected by the visual sensor in the structured light module.
  • a large number of sample object images can be prepared in advance, and the object categories of the objects in the sample object images can be marked, model training can be performed according to the sample object images and their labeling results, and an image recognition model capable of identifying object categories can be obtained.
  • The image recognition model is built into the self-moving device. Afterwards, after the visual sensor in the structured light module collects the image data of the working area in front of the self-moving device, the self-moving device can use the image recognition model to perform object recognition on the image data, thereby obtaining the target object category existing in the working area in front of the self-moving device.
  • the network structure of the image recognition model includes but is not limited to: CNN (Convolutional Neural Networks, convolutional neural network), RNN (Recurrent Neural Network, cyclic neural network) and LSTM (Long Short-Term Memory, long-term short-term memory artificial neural network ).
  • the image recognition model adopted in the embodiment of the present application includes a feature extraction network and a classification neural network.
  • An implementation process for the image recognition model to recognize the object category existing in the working area in front of the self-moving device based on the image data is: input the image data into the feature extraction network, generate at least one candidate frame on the image data, and perform pooling processing on the feature map corresponding to each candidate frame to obtain a first feature vector; further, based on the first feature vector corresponding to each candidate frame, select an effective candidate frame from the at least one candidate frame, where an effective candidate frame refers to a candidate frame whose enclosed image area contains an object; input the image in the effective candidate frame into the classification neural network, and perform feature extraction on the image in the effective candidate frame to obtain a second feature vector; based on the second feature vector, identify the object category to which the object in the effective candidate frame belongs.
  • Optionally, the second feature vector can be matched against the feature vectors corresponding to known objects in a pre-maintained feature library, and the category of the known object whose feature vector in the feature library matches the second feature vector is used as the category to which the object in the effective candidate frame belongs.
  • the association relationship between known objects and their object categories is maintained in the feature library.
  • the known object may refer to an object whose object category has been confirmed.
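As a rough illustration of the feature-library matching step just described (not the actual model or library of this application), the sketch below matches a second feature vector against a small hypothetical library using cosine similarity; the object names, vectors, categories and threshold are all assumptions.

```python
import numpy as np

# Hypothetical pre-maintained feature library: known object -> (feature vector, object category).
FEATURE_LIBRARY = {
    "u_shaped_chair": (np.array([0.9, 0.1, 0.0]), "easy_to_get_stuck"),
    "carpet_edge":    (np.array([0.1, 0.8, 0.2]), "easy_to_entangle"),
    "trash_can":      (np.array([0.0, 0.2, 0.9]), "easy_to_get_dirty"),
}

def classify_by_feature_library(second_feature_vector: np.ndarray, min_similarity: float = 0.8):
    """Return the category of the known object whose feature vector best matches
    the second feature vector of an effective candidate frame."""
    best_name, best_sim = None, -1.0
    for name, (feat, _category) in FEATURE_LIBRARY.items():
        sim = float(np.dot(second_feature_vector, feat) /
                    (np.linalg.norm(second_feature_vector) * np.linalg.norm(feat)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim < min_similarity:
        return None, None          # no sufficiently similar known object
    return best_name, FEATURE_LIBRARY[best_name][1]

print(classify_by_feature_library(np.array([0.85, 0.15, 0.05])))
```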
  • The identified target objects and the target object categories to which they belong can also be updated to the map areas corresponding to the target objects in the environment map.
  • The user can view the environment map on the display screen of the terminal device bound to the self-moving device or on the display screen of the self-moving device, and compare the objects and object categories recorded in the environment map with the objects and object categories that actually exist in the operating environment; if they do not match, the user can update the environment map so that it more accurately reflects the objects that actually exist in the operating environment and the object categories to which they belong, and thus better fits the operating environment. It should be understood that an environment map that better fits the working environment helps the self-moving device perceive the objects existing in the working environment more accurately, which is beneficial to improving the working performance of the self-moving device.
  • the situations where the objects actually existing in the working environment and their object categories do not match the objects and their object categories already recorded in the environmental map include the following:
  • Scenario 1: some objects and their object categories actually exist in the working environment but are not recorded in the environment map. Scenario 2: some objects and their object categories actually existing in the working environment are inconsistent with the information marked in the environment map.
  • an object that actually exists in the working environment but does not appear in the environment map is called a first object.
  • the user can add information such as the first object and its object category to the environment map according to the position information of the first object in the working environment.
  • Objects that actually exist in the working environment but are incorrectly marked in the environment map are called second objects.
  • the user can modify the relevant information of the second object in the environment map to match its real information.
  • Optionally, the self-moving device may also display the known object categories when receiving the user's modification request for a known object category, respond to a first modification operation initiated for the known object category, and obtain the modified object category; a known object category is set by the user on the environment map and/or identified by the self-moving device based on historical image data.
  • the first modifying operation includes at least one of the following: modifying the name of the object category, adjusting objects corresponding to the object category, and deleting known object categories.
  • the object of the object category before modification becomes the object under the object category after modification. For example, modify the object class of shoes from a class that gets dirty to a class that gets tangled.
  • the object category of the adjusted object is changed.
  • the objects under the easy-to-stuck category are changed from U-shaped chairs, bar stools, and sliding door slides to U-shaped chairs and bar stools, which means that the sliding door slides will be removed from the easy-to-stuck category.
  • The user can set the correct object and its object category in the environment map; the object category set by the user in the environment map is taken as a known object category.
  • Afterwards, the target object category existing in the working area in front of the self-moving device can be identified based on the image data collected by the visual sensor and the modified known object category information.
  • a target machine behavior pattern adapted to the class of the target object is selected.
  • the target machine behavior pattern adapted to the target object category is the target machine behavior pattern targetedly selected for the self-mobile device based on the target object category.
  • When the self-moving device executes the operation task on the target object in the front operation area according to the target machine behavior pattern, its own operation ability is less affected by the target object.
  • For example, the target machine behavior mode can be an obstacle avoidance behavior mode or an accelerated passing behavior mode. If the self-moving device encounters an easy-to-stuck object during the execution of a task and the easy-to-stuck object is impassable, the self-moving device avoids it according to the obstacle avoidance behavior mode; if the easy-to-stuck object is passable, the self-moving device quickly passes through the inside of the easy-to-stuck object according to the accelerated passing behavior mode, in order to reduce the probability of being trapped by the easy-to-stuck object.
  • Taking the sweeping robot as an example, if the sweeping robot encounters an impassable easy-to-stuck object such as a U-shaped chair or a bar stool during the cleaning task, the sweeping robot gives up cleaning the area surrounding the U-shaped chair or bar stool and avoids such impassable easy-to-stuck objects according to the obstacle avoidance behavior mode.
  • If the sweeping robot encounters a passable easy-to-stuck object such as a sliding door slide rail during the cleaning task, the sweeping robot gives up cleaning the area surrounding the sliding door slide rail and accelerates through such easy-to-stuck objects according to the accelerated passing behavior mode.
  • the target machine behavior mode can be a deceleration operation behavior mode.
  • the self-mobile device slows down the operation speed during the operation according to the deceleration operation behavior mode, so as to reduce the probability of being entangled by easily entangled objects.
  • the sweeping robot can properly turn off the side brush or roller brush according to the deceleration operation behavior mode, or slow down the speed of the side brush, That is, the cleaning operation is stopped or the cleaning operation speed is slowed down.
  • the sweeping robot is away from easily entangled objects, it will return to the normal cleaning state.
  • the target machine behavior mode can be an enhanced operation behavior mode.
  • the self-mobile device performs enhanced processing on easily dirty objects according to the enhanced operation behavior mode to improve the operation capacity.
  • the sweeping robot will strengthen the cleaning of such surroundings according to the enhanced operation behavior mode.
  • the sweeping robot can speed up the rotation speed of the side brushes and roller brushes, and enhance the suction of the fan.
  • the sweeping robot can also perform repeated cleaning around such objects, or perform multiple rounds of cleaning.
  • the target machine behavior mode may be a voice prompt behavior mode.
  • the voice prompt behavior mode can realize the interaction between the mobile device and the movable object, and prompt the movable object to avoid the area where the mobile device needs to perform the task.
  • the sweeping robot will prompt the person to leave the current position according to the voice prompt behavior mode, or lift up both feet, and let the sweeping robot complete the cleaning task of the area occupied by the human feet.
  • Known object categories and their corresponding machine behavior patterns can be associated and stored in advance, so that the self-moving device can, according to the target object category, query the known object categories and their corresponding machine behavior patterns to obtain the machine behavior pattern corresponding to the target object category as the target machine behavior pattern.
  • the known object categories and their corresponding machine behavior patterns may be set by the mobile device or by the user, without limitation.
  • The machine behavior mode at least includes the behavior parameters and behavior actions required by the self-moving device to perform the job task.
  • the behavior parameters include but are not limited to: the number of operations, the suction force of the fan, the rotation speed of the side brush, the distance value and direction angle from the target object when performing the action, etc.
  • Behavioral actions include speeding up, decelerating, avoiding obstacles, strengthening operations, and voice prompts.
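As a minimal sketch of the stored association between known object categories and machine behavior patterns (see the query described above), the table below maps example categories to example modes and parameters; the category names, mode names and parameter values are illustrative assumptions, not the fixed set of this application.

```python
# Illustrative association of object category -> machine behavior mode and parameters.
BEHAVIOR_TABLE = {
    "easy_to_get_stuck_impassable": {"mode": "obstacle_avoidance", "keep_out_mm": 50},
    "easy_to_get_stuck_passable":   {"mode": "accelerated_pass",   "speed_factor": 1.5},
    "easy_to_entangle":             {"mode": "decelerated_operation",
                                     "side_brush_rpm_factor": 0.5, "roller_brush_on": False},
    "easy_to_get_dirty":            {"mode": "enhanced_operation",
                                     "fan_suction_factor": 1.3, "repeat_passes": 2},
    "movable":                      {"mode": "voice_prompt",
                                     "message": "Please step aside so cleaning can continue."},
}

def select_target_behavior(target_object_category: str) -> dict:
    """Query the stored association to obtain the target machine behavior pattern."""
    default = {"mode": "normal_operation"}
    return BEHAVIOR_TABLE.get(target_object_category, default)

print(select_target_behavior("easy_to_entangle")["mode"])   # decelerated_operation
```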
  • the above method further includes: displaying the machine behavior pattern corresponding to the known object category, responding to the second modification operation initiated for the machine behavior pattern, and obtaining the modified machine behavior pattern;
  • the second modification operation includes at least one of the following: modifying existing behavior parameters, adding new behavior parameters, deleting existing behavior parameters, modifying existing machine action parameters, adding new machine action parameters, and deleting existing machine action parameters .
  • In order to improve the operation performance of the self-moving device, the self-moving device can be controlled to perform the operation task on the target object according to the target machine behavior mode.
  • The structured light data can provide information such as the three-dimensional point cloud data, contour, shape, height, width, depth (that is, the distance between the object and the self-moving device), length, thickness and volume of the object; combining the above structured light data can further improve the operation performance of the self-moving device.
  • The operation method provided by the embodiment of the present application makes full use of the structured light module on the self-moving device to obtain richer environmental data, performs category identification on the different objects in the operation environment, and then executes the job tasks with different machine behavior modes for different categories of objects, so as to execute the job tasks in a more targeted, refined and purposeful manner, which can not only shorten the operation time but also improve the operation ability and the user experience.
  • Taking the cleaning scenario as an example, different machine behavior modes can be used to perform cleaning tasks for different categories of objects, so as to achieve more targeted, refined and purposeful cleaning operations, which can not only shorten the cleaning time but also improve the cleaning ability and the user experience.
  • In order to improve the recognition accuracy of the object category existing in the work area in front of the self-moving device, before selecting the target machine behavior mode adapted to the target object category, the structured light data can also be combined to correct the target object category identified based on the image data. For example, the structured light data can be used to identify whether the height, width, length or volume of the object matches the target object category. For another example, considering that the outlines of objects of the same object category have a certain similarity, the outline of the object can also be identified in combination with the structured light data, and the target object category can be corrected based on the outline information.
  • Optionally, before selecting the target machine behavior mode adapted to the target object category, the above method further includes: identifying the outline of the target object existing in the front working area based on the structured light data, and correcting the target object category according to the target object outline.
  • the three-dimensional point cloud data of the target object can be obtained based on the structured light data first, and the three-dimensional reconstruction of the target object can be performed based on the three-dimensional point cloud data of the target object, And the outline feature extraction is performed on the target object obtained from the three-dimensional reconstruction to obtain the outline of the target object.
  • contour features of objects belonging to any object category may be extracted in advance. If the target object contour matches the contour features of objects belonging to the target object class, no correction is required for the target object class. If the contour of the target object does not match the contour features of objects belonging to the target object category, the object category corresponding to the target object contour is used as a reference object category, and the target object category is corrected according to the reference object category; wherein , different object categories have not exactly the same object outlines.
  • an implementation process of correcting the target object category according to the reference object category is: when the difference between the target object category and the reference object category is less than a set threshold, the The target object category is directly corrected to the reference object category; or, in the case where the difference between the target object category and the reference object category is greater than or equal to the set threshold, it is determined that the transition between the target object category and the reference object category state object category, and correct the target object category to the intermediate state object category.
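A minimal sketch of the correction rule just described, under stated assumptions: the "difference" between two categories is assumed to be some numeric measure, and the intermediate-state naming is purely illustrative; neither is defined by this application.

```python
def correct_category(target_category: str, reference_category: str,
                     category_distance: float, threshold: float = 0.5) -> str:
    """Correct the image-based target object category using the contour-based reference
    category; `category_distance` is an assumed numeric measure of how different they are."""
    if target_category == reference_category:
        return target_category                          # contour confirms the category
    if category_distance < threshold:
        return reference_category                       # small difference: adopt the reference
    return f"intermediate({target_category}|{reference_category})"  # large difference

print(correct_category("easy_to_get_dirty", "easy_to_entangle", 0.3))   # easy_to_entangle
print(correct_category("easy_to_get_dirty", "movable", 0.9))            # intermediate(...)
```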
  • an implementation process of correcting the target object category according to the target object outline is: performing finer-grained division on the target object category according to the target object outline to obtain the target object category subcategories below.
  • Objects in the easy-to-stuck category include not only non-hollow objects such as sliding door slides, but also hollow objects such as U-shaped chairs and bar stools.
  • When the self-moving device encounters a non-hollow object, it can speed up and quickly pass over the non-hollow object to avoid being trapped by it.
  • For a hollow object, the object category can be further refined to identify whether the hollow object is passable.
  • An implementation process of classifying the target object category in a more fine-grained manner according to the outline of the target object to obtain subcategories under the target object category is: in the case where the target object category is the easy-to-stuck category, determine, in combination with the outline of the target object, whether the target object corresponding to the outline is a hollow object; in the case of a hollow object, combine the hollow width of the target object and the body width of the self-moving device to divide the target object category into two subcategories: easy to get stuck and impassable, and easy to get stuck and passable.
  • the target object is a hollow-out object according to the height information of multiple position points on the lower edge of the target object contour close to the working surface (such as the ground, the desktop and the glass surface) and the corresponding lateral distance information, and Whether it belongs to the subcategory of easy card difficult and not passable or easy card difficult and passable subcategory.
  • the target object category is classified as easy to get stuck and passable. If there are no continuous hollow points with a width greater than the width of the fuselage or a height greater than the height of the fuselage, the target object category is classified as easy to get stuck and cannot pass through.
  • The height of the target object is obtained by averaging the heights of the multiple continuous position points.
  • The hollow width refers to the lateral distance corresponding to the multiple continuous position points, and can be calculated from the position coordinates of the first and last of those points, that is, the distance between the first position point and the last position point. Referring to the arched hollow object shown in Figure 5, the circles represent multiple position points on the arched hollow object; the heights of these points above the ground are averaged to obtain the height of the arched hollow object. If this height is greater than the body height, the lateral distance l of the position points is further calculated; if l is greater than the body width, the self-moving device can pass through the interior of the arched hollow object, that is, its object category is classified as easy to get stuck and passable. If l is less than or equal to the body width, or the height of the arched hollow object is less than or equal to the body height, the self-moving device cannot pass through the interior of the arched hollow object, that is, its object category is classified as easy to get stuck and impassable. A minimal sketch of this passability check follows below.
  • With the assistance of the structured light data, an implementation process of controlling the self-moving device, according to the target machine behavior mode, to perform tasks on the target object present in the front work area is: based on the structured light data, identify the position information and/or shape parameters of the target object present in the front work area; then, according to the position information and/or shape parameters of the target object, control the self-moving device to perform tasks on the target object according to the target machine behavior mode.
  • The position information of the target object may be the three-dimensional point cloud data of the target object;
  • the shape parameters include, but are not limited to, information such as outline, height, width, depth, and length.
  • The following describes, case by case, the implementation process of controlling the self-moving device to perform tasks on the target object according to the target machine behavior mode, based on the position information and/or shape parameters of the target object.
  • Case 1: When the target object category is easy to get stuck and impassable, the obstacle avoidance behavior mode is selected as the target machine behavior mode adapted to the target object category.
  • For such a target object, besides its position information (used to judge whether the current distance between the self-moving device and the target object is close to the obstacle avoidance distance), at least the contour parameters among the shape parameters also need to be considered, so as to reduce the risk of the self-moving device being damaged by the contour edges of the target object during obstacle avoidance.
  • Accordingly, an implementation process of controlling the self-moving device to perform tasks on the target object according to the target machine behavior mode is: based on the position information of the target object and the contour parameters among the shape parameters, control the self-moving device to avoid the target object according to the obstacle avoidance behavior mode.
  • For example, if a U-shaped chair or bar stool is classified as easy to get stuck and impassable, the self-moving device starts avoiding it when its current distance to the chair approaches the obstacle avoidance distance, and during the avoidance it continuously monitors whether it touches the contour edge of the U-shaped chair or bar stool.
  • Case 2: When the target object category is easy to get stuck and passable, the accelerated passing behavior mode is selected as the target machine behavior mode adapted to the target object category. If the target object is a non-hollow object, such as a sliding door slide rail, the self-moving device is controlled to quickly pass over the slide rail according to the accelerated passing behavior mode.
  • If the target object is a hollow object, for example a U-shaped chair or a bar stool, then
  • besides the position information of the target object (used to judge whether the current distance between the self-moving device and the target object is close to the obstacle avoidance distance), at least the hollow width and height among the shape parameters also need to be considered, so that the self-moving device travels out through an area of the target object it can pass through, reducing collisions with the target object during the passage.
  • Accordingly, an implementation process of controlling the self-moving device to perform tasks on the target object according to the target machine behavior mode is: based on the position information of the target object and the hollow width and height among the shape parameters, control the self-moving device, according to the accelerated passing behavior mode, to pass through the hollow area of the target object and continue performing the task.
  • the acceleration passing behavior pattern includes: a first indication parameter indicating an acceleration action and a first execution parameter required by the acceleration action, where the first execution parameter includes a direction parameter, a distance parameter and a speed parameter.
  • the first indication parameter mainly indicates whether the action to be performed is an acceleration action.
  • the first execution parameter refers to a parameter required to perform an acceleration action, such as at least one of a direction parameter, a distance parameter, and a speed parameter.
  • the distance parameter may specify how far from the target object the self-moving device should be when it starts the accelerated passing mode, or how far past the target object it should be when it ends the accelerated passing mode.
  • If the target object is a non-hollow object, such as a sliding door slide rail, then, for example,
  • when the self-moving device is 15 cm away from the slide rail, it heads toward the slide rail at a 45-degree angle and accelerates over it at 30 cm per second; after the self-moving device has moved at least 10 cm past the slide rail, it can exit the accelerated passing mode and move according to the normal-speed passing mode. A minimal parameterization of this mode is sketched below.
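The following sketch packages the accelerated passing parameters using the slide-rail figures quoted above (45 degrees, 15 cm, 30 cm/s, 10 cm). The dataclass and function names are illustrative assumptions, not part of the original text.

```python
from dataclasses import dataclass

@dataclass
class AcceleratedPassing:
    """First indication/execution parameters of the accelerated passing mode."""
    approach_angle_deg: float   # direction parameter: heading relative to the object
    start_distance_cm: float    # distance parameter: where the mode is entered
    exit_distance_cm: float     # distance parameter: how far past the object to exit
    speed_cm_s: float           # speed parameter

# Values taken from the sliding-door slide-rail example in the text.
slide_rail_mode = AcceleratedPassing(approach_angle_deg=45.0,
                                     start_distance_cm=15.0,
                                     exit_distance_cm=10.0,
                                     speed_cm_s=30.0)

def accelerated_passing_active(distance_to_object_cm, distance_past_object_cm, mode):
    """True while the accelerated passing mode should remain engaged."""
    if distance_past_object_cm is None:                  # still approaching or crossing
        return distance_to_object_cm <= mode.start_distance_cm
    return distance_past_object_cm < mode.exit_distance_cm
```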
  • If the target object is a hollow object, for example a U-shaped chair or a bar stool, then an implementation process of controlling the self-moving device, according to the accelerated passing mode, to pass through the hollow area of the target object and continue performing tasks is: based on the position information of the target object and the hollow width and height among the shape parameters, combined with the direction parameter, adjust the orientation of the self-moving device so that it faces the hollow area of the target object; then, according to the distance parameter and speed parameter, control the self-moving device to accelerate along its current orientation until it has passed through the hollow area of the target object.
  • The hollow area of the target object that the self-moving device faces refers to a hollow area through which the self-moving device can pass.
  • If the target object is a U-shaped chair or a bar stool, then, for example, when the self-moving device is 15 cm away from the chair, it heads toward the chair at a 45-degree angle and accelerates at 30 cm per second through the passable hollow area of the chair; after the self-moving device has moved at least 10 cm past the chair, it can exit the accelerated passing mode and move according to the normal-speed passing mode.
  • Case 3: When the target object category is easily entangled, the deceleration operation behavior mode is selected as the target machine behavior mode adapted to the target object category. Accordingly, an implementation process of controlling the self-moving device to perform tasks on the target object according to the target machine behavior mode, based on the position information and/or shape parameters of the target object, is: based on the contour edge position among the shape parameters of the target object, control the self-moving device to perform tasks on the target object according to the deceleration operation behavior mode.
  • For example, for easily entangled objects such as clothes, wires, and carpet edges, the contour edge positions of such objects can be identified based on the structured light data; relying on these edge positions, a self-moving device such as a sweeping robot can slow down its operation without becoming entangled, reducing the probability of missing the area around such objects.
  • the deceleration operation behavior pattern includes: a second indication parameter indicating the deceleration operation and a second execution parameter required for the deceleration operation.
  • the second execution parameter includes at least the obstacle avoidance distance and a first side brush speed lower than a speed threshold, where the speed threshold and the first side brush speed are set according to actual application requirements.
  • If the target object is an easily entangled object, such as clothes or a wire, that can only be worked around rather than on top of, the self-moving device can, based on the second indication parameter and the second execution parameter, drive its side brush at the first side brush speed to perform the cleaning task in the surrounding area where the distance to the target object is greater than the obstacle avoidance distance.
  • If the target object is an easily entangled object, such as a carpet, that needs to be worked on not only in its surrounding area but also on its upper surface (that is, on top of the target object), then, correspondingly, an implementation process of controlling the self-moving device to perform tasks on the target object according to the deceleration operation behavior mode, based on the contour edge position among the shape parameters, is: based on the contour edge position of the target object, combined with the obstacle avoidance distance, control the self-moving device to perform tasks in the surrounding area where the distance to the target object is greater than the obstacle avoidance distance; and, when the self-moving device climbs onto the target object to perform tasks, control it, based on the height information of the upper contour edge among the shape parameters and according to the first side brush speed, to drive its side brush to perform the cleaning task on top of the target object.
  • the upper edge of the contour refers to the edge of the contour away from the working surface, which is the highest edge in the contour relative to other edges.
  • Combined with the height information of the upper contour edge, the operation difficulty of the target object can be evaluated, and the target object can be further classified.
  • For example, some carpets are long-pile carpets and some are short-pile carpets.
  • The upper contour edge of a long-pile carpet is higher than that of a short-pile carpet, and
  • long-pile carpets are harder to clean than short-pile carpets. Both short-pile and long-pile carpets require increased fan suction, with long-pile carpets requiring more than short-pile carpets, whereas hard floors do not require strong suction. Therefore, the fan suction of the self-moving device's rolling brush can also be adjusted according to the height information of the upper contour edge, which maintains sufficient cleaning power while preserving the battery life of the self-moving device. In this way, the height information of the upper contour edge allows the operation of the self-moving device to be controlled in a more targeted and purposeful way, for example as sketched below.
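A minimal sketch of choosing a fan suction level from the upper-contour-edge height. The pile-height threshold and the named suction levels are assumptions; the text only fixes the ordering long-pile carpet > short-pile carpet > hard floor.

```python
def select_fan_suction(upper_edge_height_mm, short_pile_max_mm=10.0):
    """Pick a rolling-brush fan suction level from the contour's upper-edge height.

    upper_edge_height_mm : height of the contour's upper edge above the working surface
    short_pile_max_mm    : assumed pile-height boundary between short- and long-pile carpet
    """
    if upper_edge_height_mm <= 0:
        return "hard_floor_suction"       # no raised surface detected, lowest suction
    if upper_edge_height_mm <= short_pile_max_mm:
        return "short_pile_suction"       # increased suction
    return "long_pile_suction"            # strongest suction
```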
  • Case 4: When the target object category is easily soiled, the enhanced operation behavior mode is selected as the target machine behavior mode adapted to the target object category.
  • Accordingly, an implementation process of controlling the self-moving device to perform tasks on the target object according to the target machine behavior mode is: based on the contour edge position among the shape parameters of the target object, control the self-moving device to perform tasks on the target object according to the enhanced operation behavior mode.
  • the enhanced operation behavior mode includes: a third indication parameter indicating the enhanced operation and a third execution parameter required by the enhanced operation, the third execution parameter including at least the number of operations and a second side brush speed greater than the speed threshold;
  • Accordingly, controlling the self-moving device to perform tasks on the target object according to the enhanced operation behavior mode, based on the contour edge position among its shape parameters, includes: based on the contour edge position and according to the number of operations, controlling the self-moving device to perform tasks multiple times in the area around the target object; and, during each task, controlling the device, according to the second side brush speed, to drive its side brush to perform the cleaning task in the area around the target object.
  • the rotational speed of the second side brush is set according to actual application requirements.
  • the second side brush speed may be a relatively high side brush speed, greater than the speed threshold.
  • the self-mobile device executes operation tasks multiple times in the surrounding area where the distance from the target object is greater than the obstacle avoidance distance.
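A minimal sketch of the enhanced operation behavior mode described above. The robot interface (set_side_brush_speed, follow_contour) is an assumption used only to show how the number of operations and the second side brush speed are applied; it is not an API from the original text.

```python
def run_enhanced_operation(robot, target, obstacle_avoidance_distance,
                           number_of_operations, second_side_brush_speed):
    """Repeatedly clean around an easily soiled object, per the enhanced operation mode."""
    robot.set_side_brush_speed(second_side_brush_speed)    # speed above the speed threshold
    for _ in range(number_of_operations):                  # the third execution parameter
        # Circle the object once while keeping at least the obstacle avoidance
        # distance from its contour edge.
        robot.follow_contour(target.contour_edge, offset=obstacle_avoidance_distance)
```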
  • Case 5: When the target object category is movable, the voice prompt behavior mode is selected as the target machine behavior mode adapted to the target object category.
  • Accordingly, an implementation process of controlling the self-moving device to perform tasks on the target object according to the target machine behavior mode is: based on the position information of the target object, control the self-moving device, according to the voice prompt behavior mode, to issue a voice prompt to the target object, prompting it to change its state; then, using the structured light data collected for the target object, identify the latest state of the target object, and, when the latest state meets the requirement of the voice prompt, continue controlling the self-moving device to perform tasks on the target object.
  • When a movable target object blocks the self-moving device from moving forward, the self-moving device can use a voice prompt to ask the movable target object to change its posture so that the self-moving device can continue moving forward.
  • Taking a person as an example of a movable target object: a sweeping robot usually cannot clean the spot where a person is. The sweeping robot can therefore play a voice prompt reminding the user to step aside (when the user is standing) or to lift their feet (when the user is sitting), that is, reminding the user to change their state.
  • When the person is sitting, the image data collected by the visual sensor can only identify the person's approximate position; it cannot judge whether the person's feet are on the ground.
  • After the sweeping robot reminds the user to lift their feet, the recognition result of the image data alone cannot determine whether the feet have actually been lifted, but the structured light component can judge this by comparing whether the obstacle at the person's approximate position has changed before and after the voice prompt. If the feet have been lifted, the sweeping robot passes through and cleans the area at the user's feet; otherwise, the sweeping robot cleans around the user.
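A minimal control-flow sketch of this voice-prompt-and-recheck behavior, comparing structured light snapshots before and after the prompt. All robot interfaces, the wait time, and the occupancy threshold are assumptions added for illustration.

```python
def prompt_and_recheck(robot, person_region):
    """Voice-prompt a seated person and verify the change with structured light data."""
    before = robot.snapshot_structured_light(person_region)   # occupancy before prompt
    robot.play_voice_prompt("Please lift your feet so I can clean here.")
    robot.wait(seconds=5)
    after = robot.snapshot_structured_light(person_region)    # occupancy after prompt

    if obstacle_cleared(before, after):
        robot.clean_region(person_region)    # feet lifted: pass through and clean
    else:
        robot.clean_around(person_region)    # feet still down: go around the person

def obstacle_cleared(before, after, min_cleared_points=50):
    """Simple occupancy comparison: enough previously occupied cells are now free."""
    cleared = sum(1 for b, a in zip(before, after) if b and not a)
    return cleared >= min_cleared_points
```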
  • the home service robot’s work area may be the master bedroom, living room, second bedroom, kitchen, bathroom, balcony and other areas.
  • While traveling within its work area, the home service robot uses the visual sensor in the structured light module (such as an RGB camera) to collect RGB image data of the home environment, and identifies, based on the RGB image data, the category of the target object present in the work area ahead of the home service robot.
  • If it encounters an easily-stuck but passable obstacle such as a sliding door slide rail, the home service robot can collect structured light data of the slide rail; specifically, it controls the line laser emitter to emit line lasers toward the slide rail and uses the laser camera to capture laser images containing the laser stripes formed by the line lasers on the slide rail.
  • Based on the structured light data, the position, length, height, and angle of the sliding door slide rail can be identified more accurately. The home service robot adjusts its body posture according to this information so that it forms an appropriate angle with the slide rail and, when the distance to the slide rail reaches the obstacle avoidance distance, accelerates over the slide rail.
  • An appropriate angle and speed help improve the obstacle-crossing performance of the home service robot.
  • The structured light data can also give the length of the carpet pile, confirming whether a carpet is long-pile or short-pile, which helps the home service robot set an appropriate rolling-brush fan suction (long-pile carpets need increased fan suction; short-pile carpets need less than long-pile carpets, but both need more than hard floors). In this way, cleaning power is maintained while the battery life of the home service robot is preserved.
  • Fig. 7 is a schematic structural diagram of an autonomous mobile device provided by an exemplary embodiment of the present application.
  • the autonomous mobile device includes: a device body 70, on which one or more memories 71, one or more processors 72, and a structured light module 73 are arranged; the structured light module 73 includes: Structured light component 731 and vision component 732 .
  • the structured light component 731 includes at least a laser camera 7311 and a line laser emitter 7312 .
  • the vision component 732 includes at least a vision sensor 7321 .
  • In FIG. 7, the illustration takes, as an example, line laser emitters 7312 distributed on both sides of the laser camera 7311, but the arrangement is not limited thereto.
  • For details of the structured light module 73, reference may be made to the descriptions in the foregoing embodiments, which are not repeated here.
  • The one or more memories 71 are used to store computer programs; the one or more processors 72 are used to execute the computer programs so as to: use the structured light component and the visual sensor in the structured light module to respectively collect structured light data and image data in the front work area; identify, based on the image data, the category of the target object present in the front work area and select the target machine behavior mode adapted to that category; and, with the assistance of the structured light data, control the autonomous mobile device, according to the target machine behavior mode, to perform work tasks on the target object present in the front work area.
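One possible way to organize the control cycle executed by the processors is sketched below. The category names, the category-to-mode mapping, and the module/robot interfaces are assumptions that mirror the steps listed above, not an implementation from the original text.

```python
CATEGORY_TO_MODE = {
    "stuck_impassable": "obstacle_avoidance",
    "stuck_passable":   "accelerated_passing",
    "easily_entangled": "deceleration_operation",
    "easily_soiled":    "enhanced_operation",
    "movable":          "voice_prompt",
}

def work_step(module, robot):
    """One control cycle: sense, classify, pick a behavior mode, act."""
    structured_light = module.collect_structured_light()   # laser stripes in the front work area
    image = module.collect_image()                          # visible-light image of the same area

    category = robot.recognize_category(image)              # image-based recognition
    category = robot.refine_with_contour(category, structured_light)  # optional correction

    mode = CATEGORY_TO_MODE.get(category, "normal")
    robot.execute(mode, structured_light)                   # structured-light-assisted execution
```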
  • the autonomous mobile device of this embodiment may also include some basic components, such as a communication component 74 , a power supply component 75 , a drive component 76 and so on.
  • one or more memories are mainly used to store computer programs, which can be executed by the main controller, causing the main controller to control the autonomous mobile device to perform corresponding tasks.
  • the one or more memories may also be configured to store various other data to support operation on the autonomous mobile device. Examples of such data include instructions for any application or method operating on the autonomous mobile device, map data of the environment/scene in which the autonomous mobile device is located, operating modes, operating parameters, etc.
  • the communication component is configured to facilitate wired or wireless communication between the device on which the communication component resides and other devices.
  • the device where the communication component is located can access a wireless network based on communication standards, such as Wifi, 2G or 3G, 4G, 5G or a combination thereof.
  • the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component may further include a Near Field Communication (NFC) module, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology and the like.
  • the driving assembly may include driving wheels, driving motors, universal wheels and the like.
  • The autonomous mobile device in this embodiment can be implemented as a sweeping robot. In that case, the autonomous mobile device can also include a cleaning component, which may include a cleaning motor, a cleaning brush, a dusting brush, a vacuum fan, and so on.
  • The basic components contained in different autonomous mobile devices, and the composition of those components, differ; the embodiments of the present application give only some examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present application provide a structured light module and a self-moving device. In the embodiments of the present application, the structured light module can, through the cooperation of the first camera and the line laser emitters, collect a first environment image containing the laser stripes produced when the line laser meets an object, and can also collect, through the second camera, a second environment image that is a visible-light image not containing the laser stripes. The first environment image and the second environment image help detect richer environmental information more accurately and expand the application range of laser sensors.

Description

一种结构光模组及自移动设备
交叉引用
本申请引用于2021年08月17日递交的名称为“结构光模组及自移动设备”的第2021109449980号中国专利申请,其通过引用被全部并入本申请。
本申请引用于2021年08月17日递交的名称为“作业方法、自移动设备及存储介质”的第2021109449976号中国专利申请,其通过引用被全部并入本申请。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种结构光模组及自移动设备。
背景技术
随着激光技术的普及,激光传感器的应用被逐步挖掘。其中,障碍物识别及避障是激光传感器比较重要的应用方向。各领域对激光传感器的要求越来越高,现有激光类传感器已经无法满足用户的应用需求,有待提出新的激光传感器结构。
发明内容
本申请的多个方面提供一种结构光模组及自移动设备,用以提供一种新的结构光模组,拓展激光传感器的应用范围。
本申请实施例提供一种结构光模组包括:第一摄像头和分布于第一摄像头两侧的线激光发射器;结构光模组还包括:第二摄像头;其中,线激光发射器负责对外发射线激光,第一摄像头用于在线激光发射器发射线激光期间采集由线激光探测到的第一环境图像,第二摄像头用于采集其视场范围内的第二环境图像;第一环境图像是包含线激光在遇到物体后产生的激光条纹的激光图像,第二环境图像是不包含激光条纹的可见光图像。
本申请实施例还提供一种自移动设备,包括:设备本体,设备本体上设置有主控制器和结构光模组,主控制器与结构光模组电连接;
其中,结构光模组包括:第一摄像头、分布于第一摄像头两侧的线激光发射器、第二摄像头和模组控制器;其中,模组控制器控制线激光发射器对外发射线激光,并控制第一摄像头在线激光发射器发射线激光期间采集由线激光探测到的第一环境图像,以及将第一环境图像发送给主控制器;主控制器控制第二摄像头采集其视场范围内的第二环境图像,并根据第一环境图像和第二环境图像对自移动设备进行功能控制;其中,第一环境图像是包含线激光在遇到物体后产生的激光条纹的激光图像,第二环境图像是不包含激光条纹的可见光图像。
在本申请实施例中,结构光模组既可以通过第一摄像头和线激光发射器相互协作采集包含线激光在遇到物体后产生激光条纹的第一环境图像,还可以通过第二环境图像采集不包含激光条纹的可见光图像,第一环境图像和第二环境图像可以帮助更加准确探测到更多丰富的环境信息,拓展激光传感器的应用范围。
本申请的多个方面提供一种作业方法、自移动设备及存储介质,用以满足更加细致化的作业需求。
本申请实施例提供一种作业方法,适用于带有结构光模组的自移动设备,该方法包括:利用结构光模组中的结构光组件和视觉传感器分别采集前方作业区域中的结构光数据和图像数据;基于图像数据识别前方作业区域中存在的目标物体类别,选择与目标物体类别适配的目标机器行为模式;在结构光数据的辅助下,按照目标机器行为模式控制自移动设备针对前方作业区域中存在的目标物体执行作业任务。
本申请实施例还提供一种自移动设备,包括:设备本体,设备本体上设置有一个或多个存储器、一个或多个处理器以及结构光模组;结构光模组包括:结构光组件和视觉传感器;
一个或多个存储器,用于存储计算机程序;一个或多个处理器,用于执行计算机程序,以用于:利用结构光模组中的结构光组件和视觉传感器分别采集前方作业区域中的结构光数据和图像数据;基于图像数据识别前方作业区域中存在的目标物体类别,选择与目标物体类别适配的目标机器行为模式;在结构光数据的辅助下,按照目标机器行为模式控制自移动设备针对前方作业区域中存在的目标物体执行作业任务。
本申请实施例还提供一种存储有计算机指令的计算机可读存储介质,当计算机指令被一个或多个处理器执行时,致使一个或多个处理器执行本申请实施例提供的自移动设备的作业方法实施例中的步骤。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1a为本申请示例性实施例提供的一种结构光模组的结构示意图;
图1b为本申请示例性实施例提供的一种线激光发射器的工作原理示意图;
图1c为本申请示例性实施例提供的另一种结构光模组的结构示意图;
图1d为本申请示例性实施例提供的一种结构光模组中各器件安装位置关系的结构示意图;
图1e为本申请示例性实施例提供的一种线激光发射器的线激光与第一摄像头视场角的关系示意图;
图1f为本申请示例性实施例提供的另一种结构光模组的结构示意图;
图1h为本申请示例性实施例提供的一种结构光模组的正视图;
图1i为本申请示例性实施例提供的一种结构光模组的轴侧图;
图1j-1m分别为本申请示例性实施例提供的一种结构光模组的一种爆炸图;
图1n为本申请示例性实施例提供的一种结构光模组的一种局部图;
图1o为图1n的剖视图;
图1p为本申请示例性实施例提供的一种结构光模组的一种剖视图;
图1q为本申请示例性实施例提供的一种结构光模组的后视图;
图1r为本申请示例性实施例提供的一种结构光模组的另一种局部图;
图1s为本申请示例性实施例提供的一种结构光模组的另一种剖视图;
图1t为本申请示例性实施例提供的一种结构光模组中的第一摄像头或线激光发射器倾斜的示意图;
图1u为本申请示例性实施例提供的一种自移动设备探测被测物体的示意图;
图1v为本申请示例性实施例提供的一种波浪镜的截面图;
图1w为本申请示例性实施例提供的一种带有波浪镜的线激光发射器的光强分布图;
图1x为本申请示例性实施例提供的一种柱状镜的结构示意图;
图1y为本申请示例性实施例提供的一种带有柱状镜的线激光发射器的光强分布图;
图2a为本申请示例性实施例提供的一种自移动设备的结构示意图;
图2b为本申请示例性实施例提供的一种自移动设备中结构光模组的结构示意图;
图2c和图2d分别本申请示例性实施例提供的结构光模组与撞板的分解示意图;
图2e为本申请示例性实施例提供的安装有结构光模组的撞板的结构示意图;
图2f为本申请示例性实施例提供的一种扫地机器人的结构示意图;
图1为本申请一示例性实施例提供的自移动设备利用结构光模组进行作业的场景示意图;
图2为本申请一示例性实施例提供的一种结构光模组的结构示意图;
图3为本申请一示例性实施例提供的另一种结构光模组的结构示意图;
图4为本申请一示例性实施例提供的一种作业方法的流程示意图;
图5为本申请一示例性实施例提供的一种扫地机器人作业时的场景示意图;
图6为本申请一示例性实施例提供的家庭环境的户型图;
图7为本申请示例性实施例提供的一种自移动设备的结构示意图。
附图标记:
结构光模组:21             第一摄像头:101            线激光发射器:102
第二摄像头:103            模组控制器:104            指示灯:105
主控制器:106              固定座:107                固定盖:108
固定板:109                指示灯灯板:201            安装孔:202
凹槽:203                  FPC连接器:204             设备本体:20
第一窗口:231              第二窗口:232              第二窗口:233
第一驱动电路:1001         第二驱动电路:1002         第三驱动电路:1003
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请具体实施例及相应的附图对本申请技术方案进行清楚、完整地描述。显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
针对现有激光传感器无法满足应用需求的问题,本申请实施例提供一种结构光模组,该结构光模组既可以通过第一摄像头和线激光发射器相互协作采集包含线激光在遇到物体后产生激光条纹的第一环境图像,还可以通过第二环境图像采集不包含激光条纹的可见光图像,第一环境图像和第二环境图像可以帮助更加准确探测到更多丰富的环境信息,拓展激光传感器的应用范围。
应理解,结构光模组探测到多丰富的环境信息,能够帮助提升物体识别准确度。例如,若结构光模组应用于避障场景中,可以提高避障成功率。又例如,若结构光模组应用于越障场景中,可以提高越障成功率。又例如,若结构光模组应用于环境地图创建中,可以提高环境地图创建的准确度。
图1a为本申请示例性实施例提供的一种结构光模组的结构示意图。如图1a所示,该结构光模组包括:第一摄像头101、分布于第一摄像头101两侧的线激光发射器102、以及第二摄像头103。
在本实施例中,并不限定线激光发射器102的实现形态,可以是任何能够发射线激光的设备/产品形态。例如,线激光发射器102可以是但不限于:激光管。在本实施例中,可以由结构光模组内部或外部的控制器控制线激光进行工作,例如控制线激光发射器102对外发射线激光。线激光发射器102发射的线激光遇到环境中的物体之后,在物体上形成激光条纹。如图1b所示,线激光发射器102对外发射激光平面FAB和激光平面ECD,激光平面到达障碍物后会在障碍物表面形成一条激光条纹,即图1b中所示线段AB和线段CD。
在本实施例中,并不限定第一摄像头101的实现形态。凡是可以采集线激光发射器102发射出的线激光所探测到的环境的激光图像的视觉类设备均适用于本申请实施例。例如,第一摄像头101可以包括但不限于:激光摄像头、安装了仅允许线激光穿透的滤光片的2D摄像头等。另外,在本实施例中,也不限定线激光发射器102发射线激光的波长,波长不同,线激光的颜色会不同,例如可以是红色激光、紫色激光等。另外,线激光可以是可见光,也可以是不可见光。相应地,第一摄像头101可以采用能够采集线激光发射器102发射出的线激光的摄像头。与线激光发射器102发射线激光的波长适配,例如,第一摄像头101还可以是红外摄像头、紫外线摄像头、星光摄像机、高清摄像头、加装了透红色激光的2D视觉摄像头、加装了透紫色激光的2D视觉摄像头、以及加装了透紫色激光的2D视觉摄像头等。第一摄像头101可采集其视场角内的环境图像。第一摄像头101的视场角包括垂直视场角、水平视场角、对角视场角。在本实施例中,并不限定第一摄像头101的视场角,可以根据应用需求来选择具有合适视场角的第一摄像头101。可选的,第一摄像头101的水平视场角为100.6°;或者,第一摄像头101的垂直视场角为74.7°;或者,第一摄像头101的对角视场角为133.7°。
在本实施例中,可以将线激光发射器102和第一摄像头101看成是一个能够获取到环境场景中物体的3D信息的结构光组件。具体的,线激光发射器102发射出去的线激光位于第一摄像头101的视场范围内,线激光可帮助探测第一摄像头101视场角内的物体的三维点云数据、轮廓、形状、高度和/或宽度、深度、体积等信息。
为了便于区分和理解,将第一摄像头101采集由线激光探测到的环境图像称作为第一环境图像。在本实施例中,只要线激光发射器102发射出去的线激光位于第一摄像头101的视场范围内即可,至于线激光在物体表面形成的激光条纹与水平面之间的角度不做限定,例如可以平行或垂直于水平面,可以与水平面之间成任意角度,具体可根据应用需求而定。在本实施例中,可以由结构光模组内部或外部的控制器控制第一摄像头101进行工作,例如结构光模组内部或外部的控制器可控制第一摄像头101的曝光 频率、曝光时长、工作频率等。应理解,结构光模组外部的控制器是指相对于结构光模组来说的外部设备的控制器。
对第一摄像头101来说,可在结构光模组内部或外部的控制器的控制下在线激光发射器102发射线激光期间采集由线激光探测到的第一环境图像。如图1d所示为线激光发射器102发射的线激光与第一摄像头101的视场角之间的关系示意图。其中,字母K表示第一摄像头101,字母J和L表示位于第一摄像头101两侧的线激光发射器102;Q表示两侧的线激光发射器102发射出去的线激光在第一摄像头101的视场角内的交点;直线KP和KM表示第一摄像头101的水平视场的两个边界,∠PKM表示第一摄像头101的水平视场角。在图1d中,直线JN表示线激光发射器102J发射线激光的中心线;直线LQ表示线激光发射器102L发射线激光的中心线。
其中,基于第一摄像头101采集到的第一环境图像,可以计算出结构光模组或结构光模组所在设备到前方物体(如障碍物)的距离(也即前方物体的深度信息),还可以计算前方物体(如障碍物)的三维点云数据、轮廓、形状、高度和/或宽度、体积等信息,进一步,还可以进行三维重建等。其中,可以利用激光三角测距法原理,通过三角函数计算第一摄像头101与其前方物体的距离。
在本实施例中,并不限定第二摄像头103的实现形态。凡是可以采集可见光图像的视觉类设备均适用于本申请实施例。可见光图像可以呈现环境中物体的颜色特征、纹理特征、形状特征和空间关系特征等特征,能够帮助识别物体的种类、材质等信息。在本申请实施例中,第二摄像头103采集其视场范围内的第二环境图像是一种可见光图像。其中,第二摄像头103可以包括但不限于:单目RGB摄像头、双目RGB摄像头等。其中,单目RGB摄像头包括一个RGB摄像头,双目RGB摄像头包括两个RGB摄像头,RGB摄像头是能够采集RGB图像的2D视觉摄像头。第一摄像头101可采集其视场角内的环境图像。第二摄像头103的视场角包括垂直视场角、水平视场角、对角视场角。在本实施例中,并不限定第二摄像头103的视场角,可以根据应用需求来选择具有合适视场角的第二摄像头103。可选的,第二摄像头103的水平视场角为148.3°;或者,第二摄像头103的垂直视场角为125.8°;或者,第二摄像头103的对角视场角为148.3°。
应理解,RGB摄像头的滤光片无法穿透线激光发射器102对外发射线激光被物体反射回来的反射光。因此,RGB摄像头可以采集到不包含线激光在遇到物体后产生激光条纹的可见光图像。可以理解的是,第二摄像头103采集其视场范围内的第二环境图像是不包含激光条纹的可见光图像。
在本实施例中,可以由结构光模组内部或外部的控制器控制第二摄像头103进行工作,例如结构光模组内部或外部的控制器可控制第二摄像头103的曝光频率、曝光时长、工作频率等。
进一步可选的,参见图1c,结构光模组还可以包括指示灯105,指示灯105的亮灭状态指示第二摄像头103的工作状态。例如,指示灯105亮表示第二摄像头103处于工作状态。指示灯105熄灭,表示第二摄像头103处于关闭状态。在本实施例中,可以由结构光模组内部或外部的控制器控制指示灯105进行工作,例如结构光模组内部或外部的控制器可基于第二头的工作状态信息控制指示灯105的亮灭状态等。
另外,可以将第二摄像头103和指示灯105看做是结构光模组中的视觉传感器组件。
在本申请实施例中,可以采用同一个控制器控制线激光发射器102、第一摄像头101、指示灯105、第二摄像头103进行工作,可以采用不同的控制器控制线激光发射器102、第一摄像头101、指示灯105、 第二摄像头103进行工作,对此不做限制。
在本申请实施例中,可以在结构光模组内部设置控制器,也可以在结构光模组内部不设置控制器。为了便于理解和区分,将结构光模组内部的控制器称之为模组控制器104。如图1a和图1c,虚线框中的模组控制器104为结构光模组可选的组件。
本申请实施例不限定模组控制器104的实现形态,例如可以是但不限于CPU、GPU或MCU等处理器。本申请实施例也不限定模组控制器104控制结构光模组的方式。凡是可以实现结构光模组功能的实施方式均适用于本申请实施例。
进一步可选的,为了提高结构光模组的智能性,可以在结构光模组内部设置模组控制器104。通过该模组控制器104控制线激光发射器102、第一摄像头101、指示灯105、第二摄像头103进行工作,以及承担对第一摄像头101和第二摄像头103采集的图像数据进行数据加工的任务。
进一步可选的,为了减少结构光模组的数据处理量,提高结构光模组的图像采集效率,结构光模组还可以与自移动设备中的主控制器106进行数据交互。可选的,为了提高通信速度,结构光模组中的模组控制器104可以采用SPI(Serial Peripheral Interface,串行外设接口)接口与主控制器106进行通信。大多数情况下都是结构光模组通过SPI接口向主控制器106发送数据,因此,可以以结构光模组作为SPI接口的主设备,以主控制器106作为SPI接口的从设备。若主控制器106需要向结构光模组发送数据,主控制器106可以通过拉高额外的一个IO引脚的电平通知结构光模组,在下一次发送数据的同时接收主控制器106的数据或指令并解析。
例如,结构光模组只承担图像采集任务,不承担或少承担与图像数据相关的计算任务,将全部或大部分的图像数据相关的计算任务由主控制器106承担。可以理解的,在结构光模组可与自移动设备中的主控制器106进行数据交互的情况下,自移动设备的关联方可以根据自身的应用需求,在主控制器106中部署相应的AI(Artificial Intelligence,人工智能)算法处理结构光模组采集的可见光图像数据,获取相应的AI识别结果。AI算法例如包括但不限于以下算法:识别环境中物体的种类、材质等信息的算法;创建三维立体地图的算法;避障或越障算法。可选的,主控制器106还用于识别环境中物体三维点云数据、轮廓、形状、高度和/或宽度、体积等;识别环境中的物体的颜色特征、纹理特征、形状特征和空间关系特征等。
进一步可选的,参见图1c可知,模组控制器104除了可以与线激光发射器102、第一摄像头101以及指示灯105之外,在结构光模组应用到自移动设备上时,模组控制器104还可以与自移动设备的主控制器106电连接。另外,为了进一步减少结构光模组的数据处理量,结构光模组中的第二摄像头103还可以与主控制电连接。
在结构光模组与主控制器106采用上述交互方案时,模组控制器104用于控制第一摄像头101曝光,并控制线激光发射器102在曝光期间发射线激光,以供第一摄像头101采集由线激光探测到的第一环境图像。主控制器106,用于控制第二摄像头103曝光,以供第二摄像头103采集第二环境图像。主控制器106,还用于向模组控制器104发送第二摄像头103的工作状态信息;模组控制器104,还用于根据第二摄像头103的工作状态信息,控制指示灯105的亮灭状态。
下面结合图1t简要说明下第一摄像头和线激光发射器的光轴下倾角度(也即相对于与地面平行的水平面的倾斜角度)计算流程。
第一摄像头光轴下倾角度计算流程如下:
假设第一摄像头的光轴下倾角度记为θ;第一摄像头的垂直视场角记为β;第一摄像头的安装高度:h;第一摄像头的测量盲区距离记为d;结构光模组的量程(也即探测距离)记为Range;第一摄像头的光轴与地面交点P到第一摄像头的安装位置的垂直距离(也即第一摄像头的光轴与地面的交点P到第一摄像头的距离)记为L。
在设计结构光模组时,通常L设置为结构光模组量程的一半,或者在结构光模组量程一半附近的区域,这样可以使图像中心(靠近光轴的图像区域)对准量程探测的中心区域,从而提高测量精度。因此有:L≈Range/2。
根据第一摄像头的安装高度h和第一摄像头的光轴与地面的交点P到第一摄像头的距离L,可以计算出第一摄像头光轴下倾角度θ=arctan(h/L);当第一摄像头的光轴下倾角度确定后,可以计算得到第一摄像头的测量盲区距离d=h*arcctg(θ+β/2)。
线激光发射器的光轴下倾角度计算流程如下:
假设线激光发射器的光轴下倾角度记为θ;线激光发射器的出光张角记为β;线激光发射器的安装高度记为h;线激光发射器的地面光斑起始距离(也即盲区距离)记为d;结构光模组量程(也即探测距离)记为Range;线激光发射器的光轴与地面交点P到线激光发射器的安装位置的垂直距离(也即线激光发射器的光轴与地面的交点P到线激光发射器的距离)记为L。
在设计结构光模组时,通常L设置为结构光模组量程的3/4,或者在结构光模组量程3/4附近的区域,这样可以使线激光发射器光强最强部分照射到量程范围内偏远端区域,从而提高结构光模组对地面物体的探测能力。因此有:L≈Range*3/4。
根据线激光发射器的安装高度h和线激光发射器的光轴与地面的交点P到线激光发射器的垂直距离L,可以计算出线激光发射器的光轴下倾角度θ=arctan(h/L);当线激光发射器的下倾角度确定后,可以计算得到线激光发射器的地面光斑起始距离d=h*arcctg(θ+β/2)。
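The downtilt and blind-zone relations above (θ = arctan(h/L), with the blind-zone distance obtained from the lower edge of the field of view; reading the text's arcctg as the cotangent of the angle, so that d = h·cot(θ + β/2)) can be sketched as follows. The numeric values are illustrative only and are not the preferred values given in the text.

```python
import math

def downtilt_and_blind_zone(mount_height_mm, target_ground_distance_mm, fov_deg):
    """Compute the optical-axis downtilt angle and the ground blind-zone distance.

    mount_height_mm           : installation height h above the working surface
    target_ground_distance_mm : horizontal distance L to the point where the
                                optical axis should meet the surface
    fov_deg                   : vertical field of view (camera) or fan angle (emitter), beta
    """
    theta = math.atan(mount_height_mm / target_ground_distance_mm)   # downtilt angle
    # The lower FOV edge points (theta + beta/2) below horizontal; where it meets
    # the surface marks the start of ground coverage, i.e. the blind-zone distance.
    half_fov = math.radians(fov_deg) / 2.0
    blind_zone = mount_height_mm / math.tan(theta + half_fov)
    return math.degrees(theta), blind_zone

if __name__ == "__main__":
    range_mm = 308.0                  # hypothetical module range
    h_cam = h_laser = 47.0            # hypothetical mounting height

    # Camera: aim the optical axis at roughly half the module range.
    cam_theta, cam_blind = downtilt_and_blind_zone(h_cam, range_mm / 2.0, 74.7)
    # Line laser emitter: aim at roughly three quarters of the module range.
    las_theta, las_blind = downtilt_and_blind_zone(h_laser, range_mm * 0.75, 75.0)
    print(f"camera downtilt {cam_theta:.1f} deg, blind zone {cam_blind:.1f} mm")
    print(f"laser  downtilt {las_theta:.1f} deg, blind zone {las_blind:.1f} mm")
```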
在本申请实施例中,并不限定第一摄像头101以及线激光发射器102的光轴相对于与地面平行的水平面的倾斜角度。进一步可选的,线激光发射器102光轴相对于与地面平行的水平面向下倾斜一定的角度可以使得光能量高的线激光照射到第一摄像头101的核心图像采集区域,有利于提高结构光模组的探测距离。第一摄像头101的光轴相对于与地面平行的水平面向下倾斜一定的角度,可以将视角畸变小、照度高的图像采集区域对准第一摄像头101的重点图像感应区域,有利于提高结构光模组的探测距离和测量精度。参见图1n、图1o和图1t,第一摄像头101的光轴相对于与地面平行的水平面的倾斜角度具有一定的倾斜角度。参见图1r、图1s、图1t,线激光发射器102的光轴相对于与地面平行的水平面向下倾斜一定的角度。
可选的,第一摄像头101的光轴相对于与地面平行的水平面向下倾斜第一角度,线激光发射器102的光轴相对于与地面平行的水平面向下倾斜第二角度,第二角度小于第一角度。可选地,第一角度的角度范围为[0,40]度,进一步可选的,第一角度的角度范围为[11,12]度;相应地,第二角度的角度范围为[5,10]度,进一步可选地,第二角度的角度范围为[7.4,8.4]度。优选的,为了有效提升结构光模组的探测距离,第一角度为11.5度,第二角度为7.9度。
另外,本申请实施例对线激光发射器的出光张角不做限制。可选的,线激光发射器的出光张角的角 度范围为[70,80]度,优先的,线激光发射器的出光张角为75°。
表1为第一摄像头在不同测试场景下的测试数据。在表1中,第一摄像头的光轴倾斜是指第一摄像头的光轴与地面平行的水平面向下倾斜一定角度,第一摄像头的光轴不倾斜是指第一摄像头的光轴平行于与地面平行的水平面。针对第一摄像头的光轴不倾斜的情形,表1中分别给出了在第一摄像头左侧的线激光传感器发射线激光时在不同测试场景中的探测距离,以及给出了在第一摄像头右侧的线激光传感器发射线激光时在不同测试场景中的探测距离。针对第一摄像头的光轴倾斜的情形,表1中分别给出了在第一摄像头左侧的线激光传感器发射线激光时在不同测试场景中的探测距离,以及给出了在第一摄像头右侧的线激光传感器发射线激光时在不同测试场景中的探测距离。通过表1可知,第一摄像头的光轴倾斜相对于第一摄像头的光轴不倾斜,可以有效提升结构光模组的探测距离。
表1
Figure PCTCN2022105817-appb-000001
为了便于理解,还结合图1u和表2对第一摄像头的光轴与地面平行(也即第一摄像头的光轴不倾斜)和第一摄像头的光轴下倾(也即第一摄像头的光轴倾斜)的测距数据进行对比。参见图1u,假设 被测物体到结构光模组中的第一摄像头的距离记为L,被测物体上测量点(也即测量的位置点)到地面的高度记为h。则下表2中的数据可以看出,第一摄像头的光轴下倾方案相比于第一摄像头的光轴与地面平行的方案,距离误差有明显提升,测量精度更高。在图1u中,以自移动设备对位于地平面上方的物体高度测量为例进行了图示,位于地平面上方的物体测量到的高度h一般为正值,但在实际测量中同时对低于地平面的物体高度进行了测量,故在表2中包括取值为负的高度h。
表2
Figure PCTCN2022105817-appb-000002
在本申请实施例中,并不限定第二摄像头103的光轴相对于与地面平行的水平面的倾斜角度。可选的,第二摄像头103的光轴相对于与地面平行的水平面平行,也即第二摄像头103的光轴相对于与地面平行的水平面向下倾斜0°。
在本申请实施例中,并不限定线激光发射器102的光学整形透镜。例如,线激光发射器102的光学整形透镜可以采用波浪镜或柱状镜。图1v示出了一种波浪镜。图1v所示波浪镜横截面形状为圆形,但并不表示波浪镜横截面形状仅仅限于圆形,还可以是椭圆形、方形等。波浪镜的厚度d和直径D均根据实际应用需求进行选取。可选的,厚度d的误差范围为[-0.1,0.1]毫米,直径D的误差范围为[-0.05,0.05]毫米。可选的,厚度d为2.10毫米,直径为8毫米。
图1x中以一种柱状镜进行了图示,柱状镜的外径ФD和长度L均根据实际应用需求进行选取。可选的,外径ФD的误差范围为[0,0.05]毫米,长度L的误差范围为[-0.1,0.1]毫米。
图1w示出了带波浪镜的线激光发射器102发出的线激光的光强分布图,图1y示出了带柱状镜的线激光发射器102发出的线激光的光强分布图。在图1w和图1y中,纵轴上的各个纵坐标为归一化的光强,横轴上的各个横坐标为线激光发射器102发射出的各条线激光相对光轴的夹角,其中的0度表示光轴方向。
通过图1w和图1y可知,柱状镜光轴处的光强最强,光轴两侧的光强会随着距离光轴原来越远而逐渐减弱,也即柱状镜光轴处与光轴两侧的光强差异大,与光轴的距离越近的区域光强越强(图1y 中黑色线段对应的线激光的光强较强),与光轴的距离越远的区域光强越弱(图1y中灰色线段对应的线激光的光强较弱)。波浪镜光轴处和光轴两侧光强差异小,在光轴周围区域的光强较强(图1w中黑色线段对应的线激光的光强较强),远离光轴的区域光强较弱(图1w中灰色线段对应的线激光的光强较弱)。反映到图1t上,线激光发射器发射出的打到光轴与地面平行的水平面的交点处P周围区域内的线激光的光强最强。相应地,打到交点处P周围之外的其它区域内的线激光的光强较弱。
在本申请的一些可选实施例中,在线激光发射器102采用的是波浪镜时,线激光发射器102发射出的线激光在相对光轴的角度范围为[-30,30]度时,这些线激光的光强最强。在线激光发射器102采用的是柱状镜时,线激光发射器102发射的线激光在相对光轴的角度范围为[-10,10]度时,这些线激光的光强最强。
基于上述,在一些可选的实施例中,线激光发射器102可以使得光轴下倾的同时,还采用柱状镜,以进一步使得光强最大的线激光照射到结构光模组需要探测的重点区域,提升重点区域的图像亮度,从而进一步提升结构光模组的探测距离。
在本申请实施例中,并不限定线激光发射器102的总数量,例如可以是两个或者两个以上。对于分布于第一摄像头101每一侧的线激光发射器102的数量也不做限定,第一摄像头101每一侧的线激光发射器102的数量可以是一个或多个;另外,两侧的线激光发射器102的数量可以相同,也可以不相同。在图1a中,以第一摄像头101两侧各设置一个线激光发射器102为例进行图示,但并不限于此。例如,第一摄像头101的左侧可以设置2个线激光发射器102,第一摄像头101的右侧可以设置1个线激光发射器102。又例如,第一摄像头101的左右侧均设置2个、3个或5个线激光发射器102等。
在本实施例中,也不限定线激光发射器102在第一摄像头101两侧的分布形态,例如可以是均匀分布,也可以是非均匀分布,可以是对称分布,也可以是非对称分布。其中,均匀分布和非均匀分布可以是指分布于第一摄像头101同一侧的线激光发射器102之间可以是均匀分布或非均匀分布,当然,也可以理解为:分布于第一摄像头101两侧的线激光发射器102从整体上来看是均匀分布或非均匀分布。对于对称分布和非对称分布,主要是指分布于第一摄像头101两侧的线激光发射器102从整体上看是对称分布或非对称分布。这里的对称既包括数量上的对等,也包括安装位置上的对称。例如,在图1a所示的结构光模组中,线激光发射器102的数量为两个,且两个线激光发射器102对称分布于第一摄像头101两侧。
在本申请实施例中,也不限定线激光发射器102与第一摄像头101之间的安装位置关系,凡是线激光发射器102分布在第一摄像头101两侧的安装位置关系均适用于本申请实施例。其中,线激光发射器102与第一摄像头101之间的安装位置关系,与结构光模组的应用场景相关。可根据结构光模组的应用场景,灵活确定线激光发射器102与第一摄像头101之间的安装位置关系。这里的安装位置关系包括以下几个方面:
安装高度:在安装高度上,线激光发射器102和第一摄像头101可以位于不同高度。例如,两侧的线激光发射器102高于第一摄像头101,或者,第一摄像头101高于两侧的线激光发射器102;或者一侧的线激光发射器102高于第一摄像头101,另一侧的线激光发射器102低于第一摄像头101。当然,线激光发射器102和第一摄像头101也可以位于同一高度。较为优选的,线激光发射器102和第一摄像头101可以位于同一高度。例如,在实际使用中,结构光模组会被安装在某一设备(例如机器人、净化 器、无人车等自移动设备)上,在该情况下,线激光发射器102和第一摄像头101到设备所在工作面(例如地面)之间的距离相同,例如两者到工作面的距离都是47mm、50mm、10cm、30cm或50cm等。
安装距离:安装距离是指线激光发射器102与第一摄像头101之间的机械距离(或者称为基线距离)。线激光发射器102与第一摄像头101之间的机械距离,可根据结构光模组的应用需求灵活设定。其中,线激光发射器102与第一摄像头101之间的机械距离、结构光模组所在设备(例如机器人)需要满足的探测距离以及该设备的直径等信息可在一定程度上决定测量盲区的大小。对结构光模组所在设备(例如机器人)来说,其直径是固定的,测量范围与线激光发射器102与第一摄像头101之间的机械距离是可以根据需求灵活设定,这意味着机械距离及盲区范围不是固定值。在保证设备测量范围(或性能)的前提下,应该尽量减小盲区范围,然而,线激光发射器102与第一摄像头101之间的机械距离越大,可以控制的距离范围就越大,这有利于更好地控制盲区大小。
在本申请实施例中,安装位置上,激光发射器、指示灯105、第一摄像头101和第二摄像头103可以位于同一高度,也可以位于不同高度。
在本申请实施例中,第二摄像头103或指示灯105可以分布在第一摄像头101的左侧、右侧、上侧或下侧。可选地,第二摄像头103可以分布在第一摄像头101的右侧17mm(毫米)处。进一步可选的,指示灯105与第二摄像头103对称设置在第一摄像头101两侧。
在一些应用场景中,结构光模组应用于扫地机器人上,例如可以安装在扫地机器人的撞板上或机器人本体上。针对扫地机器人来说,下面示例性给出线激光发射器102与第一摄像头101之间比较合理的机械距离范围。例如,线激光发射器102与第一摄像头101之间的机械距离可以大于20mm。进一步可选的,线激光发射器102与第一摄像头101之间的机械距离大于30mm。更进一步,线激光发射器102与第一摄像头101之间的机械距离大于41mm。需要说明的是,这里给出的机械距离的范围,并不仅仅适用于结构光模组应用在扫地机器人这一种场景,也适用于结构光模组在规格尺寸与扫地机器人比较接近或类似的其它设备上的应用。
发射角度:发射角度是指在安装好之后,线激光发射器102发射线激光的中心线与线激光发射器102的安装基线之间的夹角。安装基线是指在线激光发射器102与第一摄像头101位于同一安装高度的情况下,线激光发射器102和第一摄像头101所在的一条直线。在本实施例中,并不限定线激光发射器102的发射角度。该发射角度与结构光模组所在设备(例如机器人)需要满足的探测距离、该设备的半径以及线激光发射器102与第一摄像头101之间的机械距离有关。在结构光模组所在设备(例如机器人)需要满足的探测距离、该设备的半径和线激光发射器102与第一摄像头101之间的机械距离确定的情况下,可直接通过三角函数关系得到线激光发射器102的发射角度,即发射角度是一固定值。
当然,如果需要某个特定的发射角度,可以通过调整结构光模组所在设备(例如机器人)需要满足的探测距离和线激光发射器102与第一摄像头101之间的机械距离来实现。在一些应用场景中,在结构光模组所在设备(例如机器人)需要满足的探测距离和设备的半径确定的情况下,通过调整线激光发射器102与第一摄像头101之间的机械距离,线激光发射器102的发射角度可在一定角度范围内变化,例如可以是50-60度,但不限于此。优选的,线激光发射器102的发射角度为55.26°。
结合图1e所示,以结构光模组在扫地机器人上的应用为例,对上述几个安装位置关系以及相关参数进行示例性图示。在图1d中,字母B表示第一摄像头101,字母A和C表示位于第一摄像头101两 侧的线激光发射器102;H表示两侧的线激光发射器102发射出去的线激光在第一摄像头101的视场角内的交点;直线BD和BE表示第一摄像头101的水平视场的两个边界,∠DBE表示第一摄像头101的水平视场角。在图1c中,直线AG表示线激光发射器102A发射线激光的中心线;直线CF表示线激光发射器102C发射线激光的中心线。另外,在图1e中,直线BH表示第一摄像头101视场角的中心线,即在图1e中,两侧的线激光发射器102发射线激光的中心线与第一摄像头101视场角的中心线相交。
在图1e中,扫地机器人的半径为175mm,直径为350mm;线激光发射器102A和C对称分布在第一摄像头101B的两侧,且线激光发射器102A或C与第一摄像头101B之间的机械距离为30mm;第一摄像头101B的水平视场角∠DBE为67.4度;在扫地机器人的探测距离为308mm的情况下,线激光发射器102A或C的发射角度为56.3度。如图1e所示,过H点的直线IH与安装基线(也即结构光模组基线)之间的距离时45mm,直线IH到扫地机器人边缘切线之间的距离为35mm,这部分区域为视场盲区。图1e所示各种数值仅为示例性说明,并不限于此。
本申请实施例对线激光发射器的光轴与结构光模组基线的夹角不做限制。为了便于理解,继续结合图1e对线激光发射器的光轴与结构光模组基线的夹角的计算流程进行说明。参见图1e,假设结构光模组基线长度(也即线激光发射器与第一摄像头之间的机械距离)记为l;线激光发射器的光轴与结构光模组基线的夹角记为α;线激光发射器的光轴与自移动设备边缘的切线的交点到基线的垂直距离记为L。第一摄像头中心到自移动设备边缘的切线的垂直距离记为d;自移动设备外轮廓直径记为ФD;结构光模组量程(也即探测距离)记为Range;
线激光发射器的光轴与自移动设备边缘的切线的交点到基线的垂直距离L通常设置为与自移动设备外轮廓直径接近的值(设置过大会造成处于此位置的障碍探测精度较低,过小会造成探测结构光模组探测的有效距离小)。因此有:L≈ФD。L确定后,可以得到线激光发射器的光轴与结构光模组基线的夹角α=arctan(L/(d+l))。可选的,线激光发射器的光轴与结构光模组基线的夹角范围为[50,60]度。进一步可选的,线激光发射器的光轴与结构光模组基线的夹角为55.26度。
在本申请的一些实施例中,如图1f所示,结构光模组还包括驱动电路。模组控制器104与线激光发射器102之间可以电连接驱动电路,或者,模组控制器104与指示灯105之间可以电连接驱动电路。驱动电路可以放大模组控制器104对线激光发射器102的控制信号,或者可以放大模组控制器104对指示灯105的控制信号。在本申请实施例中,并不对驱动电路的电路结构进行限定,凡是可以放大信号并可将放大后的信号给到线激光发射器102或指示灯105的电路结构均适用于本申请实施例。
在本申请实施例中,并不限定驱动电路的数量。不同线激光发射器102可以共用一个驱动电路,也可以是一个线激光发射器102对应一个驱动电路100。较为优选的是,一个线激光发射器102对应一个驱动电路。在图1f中,以一个线激光发射器102对应一个第一驱动电路1001,另一个线激光发射器102对应一个第二驱动电路1002以及一个指示灯105对应一个第三驱动电路1003为例进行图示。
为了便于使用,本申请实施例提供的结构光模组除了包括第一摄像头101、分布于第一摄像头101两侧的线激光发射器102、指示灯105、第二摄像头103之外,还包括一些用于承载第一摄像头101、分布于第一摄像头101两侧的线激光发射器102、指示灯105、第二摄像头103的承载结构。承载结构可以有多种实现形式,对此不做限定。
在一些可选实施例中,承载结构包括固定座107,进一步还可以包括与固定座107配合使用的固定 盖108。结合图1h-图1r对带有固定座107以及固定盖108的结构光模组的结构进行说明。其中,图1h-图1r分别是结构光模组的正视图、轴侧图以及爆炸图,由于视角原因,每个视图并未展示全部组件,故而图1h-图1r中仅标记部分组件。如图1h-图1r所示,结构光模组还包括:固定座107;激光发射器、指示灯105、第一摄像头101和第二摄像头103装配在固定座107上。
需要指出的是,将激光发射器、指示灯105、第一摄像头101和第二摄像头103等装配在同一固定座107上,可以提高结构光模组的系统稳定性,降低因为分别装配时结构避免分别装配时结构蠕变造成系统参数变化过大的影响。
进一步可选的,如图1h-图1r所示,固定座107包括:主体部和位于主体部两侧的端部;其中,指示灯105、第一摄像头101和第二摄像头103装配在主体部上,线激光发射器102装配在端部上;其中,端部的端面朝向参考面,以使线激光发射器102的中心线与第一摄像头101的中心线相交于一点;参考面是与主体部的端面或端面切线垂直的平面。
在一可选实施例中,为了方便固定,降低器件对结构光模组外观的影响,如图1h-图1r所示,主体部的中间位置开设有三个凹槽203,主体部的中间位置开设有三个凹槽203,指示灯105、第一摄像头101和第二摄像头103安装于相应的凹槽203内;端部上设有安装孔202,线激光发射器102安装于安装孔202内。
进一步可选的,如图1h-图1r所示,在结构光模组包括模组控制器104时,可以将模组控制器104固设在固定座107的后方。
进一步可选的,如图1h-图1r所示,结构光模组还包括装配于固定座107上方的固定盖108;固定盖108与固定座107之间形成腔体,以容纳线激光发射器102、第一摄像头101与模组控制器104之间的连接线,以及容纳模组控制器104和第二摄像头103与主控制器106之间的连接线。可选的,结构光模组中的第二摄像头103可以通过FPC(Flexible Printed Circuit,柔性电路板)连接器连接至主控制器106上。
其中,固定盖108、模组控制器104以及固定座107之间可采用固定件进行固定。固定件例如包括但不限于螺钉、螺栓以及卡扣。
在一可选实施例中,如图1h-图1r所示,结构光模组还包括装配于线激光发射器102上的固定板109,或者装配于指示灯105上的指示灯灯板201。其中,固定板109或指示灯灯板201可以是任意形状的板状结构。
在一可选实施例中,第一摄像头101位于凹槽203外边缘之内,即镜头内缩在凹槽203内,可防止镜头被刮蹭或磕碰,有利于保护镜头。
在本申请实施例中,并不对主体部端面的形状做限定,例如可以是平面,也可以是向内或向外凹陷的曲面等。根据结构光模组所在设备的不同,主体部端面的形状也有所不同。例如,假设结构光模块应用于外形轮廓为圆形或椭圆形的自移动设备,则主体部的端面可实现为向内凹陷的曲面,该曲面与自移动设备的外形轮廓适配。若结构光模块应用于外形轮廓为方形或长方形的自移动设备,则主体部的端面可实现为平面,该平面与自移动设备的外形轮廓适配。其中,外形轮廓为圆形或椭圆形的自移动设备可以是外形轮廓为圆形或椭圆形的扫地机器人、擦窗机器人等。相应地,外形轮廓为方形或长方形的自移动设备可以是外形轮廓为方形或长方形的扫地机器人、擦窗机器人等。
在一可选实施例中,对于外形轮廓为圆形或椭圆形的自移动设备来说,结构光模组安装于自移动设备上,为了与自移动设备的外观更加契合,最大化利用自移动设备的空间,主体部的曲面半径与自移动设备的半径相同或近似相同。例如,若外形轮廓为圆形的自移动设备,其半径范围为170mm,则结构光模组在应用于该自移动设备时,其主体部的曲面半径可以为170mm或者近似170mm,例如可以在170mm-172mm范围内,但并不限于此。
进一步,在结构光模组应用在外形轮廓为圆形或椭圆形的自移动设备的情况下,结构光模组中线激光发射器102的发射角度主要由自移动设备需要满足的探测距离和自移动设备的半径等确定。在该场景下,结构光模组的主体部的端面或端面切线与安装基线平行,因此线激光发射器102的发射角度也可以定义为:线激光发射器102发射线激光的中心线与主体部的端面或端面切线之间的夹角。在一些应用场景中,在自移动设备的探测距离和半径确定的情况下,线激光发射器102的发射角度的范围可以实现为50-60度,但并不限于此。如图1h-图1r所示,线激光发射器102的数量为两个,且两个线激光发射器102对称分布于第一摄像头101两侧。其中,自移动设备需要满足的探测距离是指其需要探测环境信息的距离范围,主要是指自移动设备前方一定距离范围。
本申请上述实施例提供的结构光模组,结构稳定、尺寸小,契合整机外观,极大地节省了空间,可以支持多种类型的自移动设备。
基于上述结构光模组,本申请实施例还提供一种自移动设备的结构示意图,如图2a所示,该设备包括:设备本体20,设备本体20上设置有主控制器106和结构光模组21。主控制器106与结构光模组21电连接。
其中,结构光模组21包括:第一摄像头101、分布于第一摄像头101两侧的线激光发射器102、第二摄像头103。
进一步可选的,结构光模组21还包括模组控制器104,模组控制器104与主控制器106电连接。其中,模组控制器104控制线激光发射器102对外发射线激光,并控制第一摄像头101在线激光发射器102发射线激光期间采集由线激光探测到的第一环境图像,以及将第一环境图像发送给主控制器106;主控制器106控制第二摄像头103采集其视场范围内的第二环境图像,并根据第一环境图像和第二环境图像对自移动设备进行功能控制;其中,第一环境图像是包含线激光在遇到物体后产生的激光条纹的激光图像,第二环境图像是不包含激光条纹的可见光图像。
进一步可选的,在结构光模组21中的第二摄像头103通过FPC连接器204连接至主控制器106时,可以对FPC连接器204所在区域进行净空处理,净空处理可以理解为在FPC连接器204所在区域不设置其他物体。净空处理可以减少FPC在自移动设备的撞板22活动时与其他物体发生碰撞损坏的概率。
本申请实施例中,自移动设备可以是任何能够在其所在环境中高度自主地进行空间移动的机械设备,例如,可以是机器人、净化器、无人机等。其中,机器人可以包括扫地机器人、擦玻璃机器人、家庭陪护机器人、迎宾机器人等。
当然,根据自移动设备实现形态的不同,自移动设备的形状也会有所不同。本实施例并不限定自移动设备的实现形态。以自移动设备的外轮廓形状为例,自移动设备的外轮廓形状可以是不规则形状,也可以是一些规则形状。例如,自移动设备的外轮廓形状可以是圆形、椭圆形、方形、三角形、水滴形或 D形等规则形状。规则形状之外的称为不规则形状,例如人形机器人的外轮廓、无人车的外轮廓以及无人机的外轮廓等属于不规则形状。
在本申请实施例中,并不限定主控制器106的实现形态,例如可以是但不限于CPU、GPU或MCU等处理器。本申请实施例并不限定主控制器106根据环境图像对自移动设备进行功能控制的具体实施方式。例如,主控制器106可以根据第一环境图像和第二环境地图控制自移动设备实现各种基于环境感知的功能。例如,可以实现视觉算法上的物体识别、跟踪与分类等功能;另外,基于线激光检测精度较高的优势,还可以实现实时性强、鲁棒性强、精度高的定位和构建地图等功能,进而还可以基于构建出的高精度的环境地图对运动规划、路径导航、定位等提供全方位的支持。当然,主控制器106还可以根据环境图像对自移动设备进行行进控制,例如控制自移动设备执行继续前进、后退、拐弯等动作等。
进一步,如图2b所示,结构光模组21还包括:指示灯105以及驱动电路100。下面以模组控制器104为MCU为例,对MCU与主控制器106配合工作的原理进行说明。如图2b所示,结构光模组21通电后,MCU通过I2C(Inter Integrated Circuit)接口初始化第一摄像头101。第一摄像头101初始化完成后,MCU通过I2C接口向第一摄像头101发送Trig触发信号以触发第一摄像头101曝光,第一摄像头101开始曝光时,还通过I2C接口向MCU发送LDE STROBE同步信号,MCU接收到LDE STROBE同步信号之后,在LED STROBE信号的上升沿,通过驱动电路100控制线激光发射器102的频率和电流,驱动线激光发射器102发射线激光,在LED STROBE信号的下降沿,MCU关闭线激光发射器102。曝光完成后,第一摄像头101通过数字视频接口(Digital Video Port,DVP)向MCU发送采集的图片数据并由MCU进行处理,并通过SPI(Serial Peripheral Interface,串行外设接口)接口向主控制器106输出第一环境图像。可选的,MCU可以对第一摄像头101采集的图像数据进行一些去噪处理、图像增强等图像预处理操作。另外,主控制器106还可以通过MIPI(Mobile Industry Processor Interface,移动产业处理器接口)接口发送控制信号,以控制第二摄像头103采集其视野范围内的第二环境图像,并接收第二摄像头103通过MIPI接口发送的第二环境图像。另外,主控制器106还可以向MCU发送第二摄像头103的工作状态信息,以供MCU根据第二摄像头103的工作状态信息并通过驱动电路100控制指示灯105亮起或熄灭。主控制器106获取到第一环境图像和第二环境图像之后,可以采用AI算法对第一环境图像和第二环境图像进行识别,以识别作业环境中物体的三维点云数据、类别、纹理和材质等更多物体信息,更加有利于自移动设备在作业环境中的行进控制、避障处理、越障处理等。
在本申请实施例中,并不限定结构光模组21在设备本体20的具体位置。例如可以是但不限于设备本体20的前侧、后侧、左侧、右侧、顶部、中部以及底部等等。进一步,结构光模组21设置在设备本体20高度方向上的中部位置、顶部位置或底部位置。
在一可选实施例中,自移动设备向前移动执行作业任务,为了更好的探测前方的环境信息,结构光模组21设置于设备本体20的前侧;前侧是自移动设备向前移动过程中设备本体20朝向的一侧。
在又一可选实施例中,为了保护结构光模组21不受外力的破坏,设备本体20的前侧还安装有撞板22,撞板22位于结构光模组21外侧。如图2c和2d所示,为结构光模组21与撞板22的分解示意图。在图2c和2d中,以扫地机器人为例对自移动设备进行图示,但并不限于此。结构光模组21可以安装在撞板22上;也可以不安装在撞板22上,对此不做限定。撞板22上对应结构光模组21的区域开设有窗口23,以露出结构光模组21中的第一摄像头101、线激光发射器102、指示灯105、第二摄像头103。 进一步可选的,如图2c所示,撞板22上开设有三个窗口,分别为第一窗口231、第二窗口232以及第三窗口233,其中,第二窗口232用于露出第一摄像头101、第二摄像头103以及指示灯105,第一窗口231和第三窗口233分别用于露出相应的线激光发射器102。
另外,结构光模组安装至撞板上,能够最大程度地缩小第一摄像头、第二摄像头和撞板之间的间隙,也能够减少一摄像头、第二摄像头视场角的遮挡,还可以使用更小的第二窗口232,提升自移动设备的外形美观性,极大地节省了空间,可以支持多种类型的自移动设备。
进一步可选的,为了保护第一摄像头101或第二摄像头103的安全性,第一窗口231上设置有透光保护板。应理解,若自移动设备与障碍物发生碰撞时,第一窗口231上的有透光保护板可以减少第一摄像头101或第二摄像头103被碰撞损坏的概率。另外,透光保护板可以保证第一摄像头101或第二摄像头103进行正常的图像采集工作。
进一步可选的,第一窗口231与透光保护板设置密封圈,密封圈可以防止灰尘水雾等沾染第一摄像头101的镜头或第二摄像头103的镜头造成图像质量下降。可选的,密封圈为EVA(Ethylene Vinyl Acetate Copolymer,乙烯-乙酸乙烯共聚物)材质的密封圈。
进一步可选的,线激光发射器102与透光保护板之间设置密封圈,用于防止灰尘水雾等污渍沾染线激光发射器102的镜片造成光斑变形或功率下降。可选的,密封圈为EVA材质的密封圈。
进一步可选的,为了保护线激光的安全性,第二窗口232或第三窗口233上设置有透光保护板。可选的,透光保护板为透线激光的保护板。应理解,若自移动设备与障碍物发生碰撞时,第二窗口232或第三窗口233上的透光保护板可以减少线激光被碰撞损坏的概率。
在又一可选实施例中,结构光模组21安装在撞板22的内侧壁上。图2d所示,为结构光模组21与撞板22的分解示意图。
在又一可选实施例中,结构光模组21的中心到自移动设备所在工作面的距离范围为20-60mm。为了减小自移动设备的空间盲区,使视场角足够大,进一步可选的,结构光模组21的中心到自移动设备所在工作面的距离为47mm。
进一步,除了上述提到的各种组件,本实施例的自移动设备还可以包括一些基本组件,例如一个或多个存储器、通信组件、电源组件、驱动组件等等。
其中,一个或多个存储器主要用于存储计算机程序,该计算机程序可被主控制器106执行,致使主控制器106控制自移动设备执行相应任务。除了存储计算机程序之外,一个或多个存储器还可被配置为存储其它各种数据以支持在自移动设备上的操作。这些数据的示例包括用于在自移动设备上操作的任何应用程序或方法的指令,自移动设备所在环境/场景的地图数据,工作模式,工作参数等等。
进一步,除了上述提到的各种组件,本实施例的自主移动设备还可以包括一些基本组件,例如一个或多个存储器、通信组件、电源组件、驱动组件等等。
现有扫地机器人不能满足人们在家庭环境中细致化的清扫需求,而且对于复杂的、精细的家庭环境,现有扫地机器人也没有很有针对性地,因地制宜地执行清洁任务,导致清洁时间长,效率低,用户的使用体验不佳。针对该技术问题,在本申请实施例中,充分利用并结合自移动设备上的结构光模组获得更为丰富的环境数据,对作业环境中的不同物体进行类型识别,进而针对不同类型的物体采用不同的机器 行为模式执行作业任务,做到更有针对性地、精细化、有目的性的执行作业任务,不仅能缩短作业时间,还能提高作业能力,改善用户的使用体验。特别是针对扫地机器人,针对不同类型的物体可以采用不同的机器行为模式执行清洁任务,做到更有针对性地、精细化、有目的性的清洁作业,不仅能缩短清洁时间,还能提高清洁能力,改善用户的使用体验。
在此说明,本申请实施例提供的各种方法可由自移动设备实施。在本申请实施例中,自移动设备可以是任何能够在其所在环境中高度自主地进行空间移动的机械设备,例如,可以是机器人、净化器、无人驾驶车辆等。其中,机器人可以包括扫地机器人、陪护机器人或引导机器人等。这里对“自移动设备”进行的解释说明适用于本申请所有实施例,在后续各实施例中不再做重复性说明。
在对本申请实施例提供的各种方法进行详细说明之前,先对自移动设备可以采用的结构光模组进行说明。在本申请各实施例中,自移动设备安装有结构光模组。本申请实施例使用的结构光模组泛指任何包括结构光组件和视觉传感器的结构光模组。
其中,结构光组件包括线激光发射器102和激光摄像头101,线激光发射器102用于向外发射可见或不可见的线激光,激光摄像头101负责采集线激光探测到环境的激光图像。具体的,线激光发射器102发射的线激光遇到环境中的物体之后,在物体上形成激光条纹,激光摄像头101采集其视场范围内包括激光条纹的激光图像。利用三角法测距原理、激光条纹的激光图像在激光图像中的位置,激光摄像头101坐标系、自移动设备的设备坐标系和世界坐标系之间的坐标变换关系,可以从激光图像用于探测激光摄像头101视场角内的物体的三维点云数据、轮廓、高度、宽度、深度、长度等信息。
在图1中,自移动设备按照前进方向在工作面(诸如地面、桌面以及玻璃面)上移动,通过线激光发射器102向外发射线激光,线激光若遇到前方作业区域中的物体,会在物体上形成激光条纹,此时,激光摄像头101采集到包括激光条纹的激光图像。根据激光条纹在激光图像中的位置、三角法测距原理、激光摄像头101坐标系、自移动设备的设备坐标系和世界坐标系之间的坐标变换关系,不难计算出激光条纹所对应的物体上的各个位置点的高度h(也即物体上的位置点与工作面之间的距离)、各个位置点的深度s(也即物体上的位置点距离自移动设备的距离)、各个位置点的三维点云数据、物体的宽度b(宽度方向与前进方向垂直)以及物体的长度a(长度方向与前进方向平行)。在获取到物体上多个位置点的三维点云数据之后,通过分析三维点云数据可以确定物体的轮廓信息。
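The stripe-to-3D computation described above is, in one common formulation, a ray/plane intersection between each stripe pixel's viewing ray and the calibrated laser light plane, followed by a rigid transform into the device frame, from which height, depth, width, and the point cloud can be read off. The sketch below assumes a pinhole camera model and a known plane equation; the function names and coordinate frames are assumptions, not the text's own notation.

```python
import numpy as np

def stripe_pixels_to_points(pixels, K, plane_n, plane_d):
    """Back-project laser-stripe pixels to 3D points in the camera frame.

    pixels           : (N, 2) array of (u, v) stripe coordinates in the laser image
    K                : 3x3 camera intrinsic matrix
    plane_n, plane_d : the laser light plane in camera coordinates, n . X = d
                       (assumed known from calibration of the line laser emitter)
    Each pixel defines a viewing ray X = t * r; intersecting it with the laser
    plane gives t = d / (n . r), the classic laser-triangulation step.
    """
    K_inv = np.linalg.inv(K)
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    rays = (K_inv @ uv1.T).T                 # one ray direction per pixel
    t = plane_d / (rays @ plane_n)           # ray/plane intersection parameter
    return rays * t[:, None]                 # (N, 3) points in the camera frame

def to_device_frame(points_cam, R_dc, t_dc):
    """Transform camera-frame points into the device (robot) frame."""
    return points_cam @ R_dc.T + t_dc

# In the device frame, the quantities used in the text follow directly:
# depth s = forward coordinate, height h = vertical coordinate above the floor,
# width b = lateral extent of a stripe, and accumulated stripes form the point cloud.
```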
其中,视觉传感器103可以是能采集可见光图像的视觉摄像头,例如包括但不限于单目RGB摄像头和双目RGB摄像头等。进一步可选的,视觉传感器103的滤光片无法穿透线激光发射器102对外发射线激光被物体反射回来的反射光,以保证视觉传感器103可以采集到不包含线激光在遇到物体后产生激光条纹的可见光图像,如图1所示的可见光图像,进而保证视觉传感器103所采集的图像数据的质量。
值得注意的是,上述结构光模组可通过结构光组件探测到物体的三维点云数据、轮廓、高度、宽度、深度、长度等信息;通过视觉传感器103可以感知到物体的颜色特征、纹理特征、形状特征和空间关系特征等信息,进而感知更为丰富的环境信息,有利于帮助提升自移动设备的智能化程度。
下面结合图2-图3,对本申请实施例可以采用的几种结构光模组的结构以及工作原理进行简单说明。本领域技术人员应该理解下述列举的结构光模组仅为示例性说明,本申请实施例可以采用的结构光模组并不限于这几种。
如图2至3所示,一种结构光模组主要包括:结构光组件和视觉组件。其中,结构光组件包括激光 摄像头101、分布于激光摄像头101两侧的线激光发射器102。视觉组件包括视觉传感器103。其中,结构光组件或视觉组件可由结构光模组内部的控制器或外部的控制器进行控制。为了便于理解将结构光模组内部的控制器称之为模组控制器104。图2中模组控制器104以虚线框表示,说明模组控制器104为可选组件。当结构光模组应用到自主移动设备上后,结构光模组中的全部或部分组件可在自主移动设备的主控制器106的控制下进行工作。为了便于理解,以结构光组件在模组控制器104的控制下工作和视觉组件在主控制器106的控制下进行工作为例进行说明。
其中,线激光发射器102安装于激光摄像头101上方、下方、左侧或右侧均可以,只要线激光发射器102发射的线激光位于激光摄像头101的视场范围内即可。在图2和图3中,以线激光发射器102安装于激光摄像头101左右两侧为例进行图示。如图1所示,在结构光模组中,线激光发射器102发射出的激光面打在障碍物或地面表面形成的激光条纹在前方水平于地面、垂直于自移动设备前进方向。可将这种安装方式称为水平安装。图1所示为结构光模组在自移动设备上的安装状态以及应用状态示意图。
如图1所示,自移动设备向前行进过程中,可控制结构光模组按照一定的方式工作,例如周期性(每20ms)进行一次环境探测,从而得到一组激光图像数据,每个激光图像数据中包含线激光打到物体表面或地面上形成的激光条纹,一条激光条纹包含多个三维数据,大量激光图像中的激光条纹上的三维数据可形成三维点云数据。
可选地,模组控制器104一方面对激光摄像头101进行曝光控制,另一方面可控制线激光发射器102在激光摄像头101曝光期间对外发射线激光,以便于激光摄像头101采集由线激光探测到的激光图像。其中,模组控制器104可以控制位于激光摄像头101两侧的线激光发射器102同时工作,或者交替工作,对此不做限定。本申请实施例并不限制模组控制器104的实现形态,例如可以是但不限于CPU、GPU或MCU等处理器。本申请实施例也不限定模组控制器104控制结构光模组的方式。凡是可以实现结构光模组功能的实施方式均适用于本申请实施例。
具体的,模组控制器104可以控制激光摄像头101的曝光频率、曝光时长、工作频率等。激光摄像头101在模组控制器104的控制下在线激光发射器102发射线激光期间采集由线激光探测到的激光图像。基于激光摄像头101采集到的激光图像,可以计算出结构光模组或结构光模组所在设备到前方物体(如障碍物)的距离(也即前方物体的深度信息),还可以计算前方物体(如障碍物)的三维点云数据、轮廓、形状、高度和/或宽度、体积等信息,进一步,还可以进行三维重建等。其中,可以利用激光三角测距法原理,通过三角函数计算激光摄像头101与其前方物体的距离。
在本实施例中,并不限定线激光发射器102的实现形态,可以是任何能够发射线激光的设备/产品形态。例如,线激光发射器102可以是但不限于:激光管。在本实施例中,也不限定线激光发射器102发射线激光的波长,波长不同,线激光的颜色会不同,例如可以是红色激光、紫色激光等。另外,线激光可以是可见光,也可以是不可见光。
在本实施例中,并不限定激光摄像头101的实现形态。凡是可以采集线激光发射器102发射出的线激光所探测到的环境的激光图像的视觉类设备均适用于本申请实施例。例如,激光摄像头101可以采用能够采集线激光发射器102发射出的线激光的摄像头。与线激光发射器102发射线激光的波长适配,例如,激光摄像头101还可以是红外摄像头、紫外线摄像头、星光摄像机、高清摄像头、加装了透红色激 光的2D视觉摄像头、加装了透紫色激光的2D视觉摄像头、以及加装了透紫色激光的2D视觉摄像头等。激光摄像头101可采集其视场角内的激光图像。激光摄像头101的视场角包括垂直视场角、水平视场角、对角视场角。在本实施例中,并不限定激光摄像头101的视场角,可以根据应用需求来选择具有合适视场角的激光摄像头101。可选的,激光摄像头101的水平视场角为100.6°;或者,激光摄像头101的垂直视场角为74.7°;或者,激光摄像头101的对角视场角为133.7°。
在本实施例中,只要线激光发射器102发射出去的线激光位于激光摄像头101的视场范围内即可,至于线激光在物体表面形成的激光条纹与水平面之间的角度不做限定,例如可以平行或垂直于水平面,可以与水平面之间成任意角度,具体可根据应用需求而定。
在本实施例中,并不限定视觉传感器103的实现形态。凡是可以采集可见光图像的视觉类设备均适用于本申请实施例。可见光图像可以呈现环境中物体的颜色特征、纹理特征、形状特征和空间关系特征等特征,能够帮助识别物体的种类、材质等信息。在本申请实施例中,视觉传感器103采集其视场范围内的环境图像是一种可见光图像。其中,视觉传感器103可以包括但不限于:单目RGB摄像头、双目RGB摄像头等。其中,单目RGB摄像头包括一个RGB摄像头,双目RGB摄像头包括两个RGB摄像头,RGB摄像头是能够采集RGB图像的2D视觉摄像头。激光摄像头101可采集其视场角内的环境图像。视觉传感器103的视场角包括垂直视场角、水平视场角、对角视场角。在本实施例中,并不限定视觉传感器103的视场角,可以根据应用需求来选择具有合适视场角的视觉传感器103。可选的,视觉传感器103的水平视场角为148.3°;或者,视觉传感器103的垂直视场角为125.8°;或者,视觉传感器103的对角视场角为148.3°。
应理解,RGB摄像头的滤光片无法穿透线激光发射器102对外发射线激光被物体反射回来的反射光。因此,RGB摄像头可以采集到不包含线激光在遇到物体后产生激光条纹的可见光图像。可以理解的是,视觉传感器103采集其视场范围内的环境图像是不包含激光条纹的可见光图像。
可选的,视觉传感器103在主控制器106的控制下进行工作。例如,主控制器106可以控制视觉传感器103的曝光频率、曝光时长、工作频率等。
进一步可选的,参见图2和图3,结构光模组中的视觉组件还可以包括指示灯105,指示灯105的亮灭状态指示视觉传感器103的工作状态。例如,指示灯105亮起表示视觉传感器103处于工作状态。指示灯105熄灭,表示视觉传感器103处于关闭状态。可选的,指示灯105在模组控制器104的控制下进行工作。模组控制器104可与主控制器106进行交互,获取主控制器106发送的视觉传感器103的工作状态,并基于视觉传感器103的工作状态控制指示灯105的亮灭状态。
进一步可选的,为了提高结构光模组的智能性,可以由模组控制器104控制结构光组件和视觉组件的图像采集工作,并承担对结构光组件和视觉组件的所采集的激光图像数据和可见光图像数据进行数据处理工作。
进一步可选的,为了减少结构光模组的数据处理量,提高结构光模组的图像采集效率,由主控制器106承担对结构光组件和视觉组件的所采集的激光图像数据和可见光图像数据进行数据处理工作。在这种情形下,结构光模组通过模组控制器104向主控制器106发送结构光组件采集的激光图像数据,同时,主控制器106获取视觉组件采集的可见光图像数据。其中,主控制器106可以对激光图像数据进行分析,物体三维点云数据、轮廓、形状、高度和/或宽度、体积等。主控制器106还可以对可见光图像数据进 行分析,识别物体的颜色特征、纹理特征、形状特征、空间关系特征、种类、材质等信息。
以下结合附图,详细说明本申请各实施例提供的技术方案。
图4为本申请一示例性实施例提供的作业方法的流程图。该方法适用于自移动设备,自移动设备安装有结构光模组。关于结构光模组的介绍请参见前述内容。如图1所示,该方法包括以下步骤:
401、利用结构光模组中的结构光组件和视觉传感器分别采集前方作业区域中的结构光数据和图像数据。
402、基于图像数据识别前方作业区域中存在的目标物体类别,选择与目标物体类别适配的目标机器行为模式。
403、在结构光数据的辅助下,按照目标机器行为模式控制自移动设备针对前方作业区域中存在的目标物体执行作业任务。
在本申请实施例中,自移动设备作业过程中,可以利用结构光模组探测前方作业区域的环境信息。其中,前方作业区域是指在自移动设备作业过程中,自移动设备沿行进方向能够识别的范围,该前方作业区域的环境信息会随着自移动设备的行进随之变化,自移动设备在不同作业区域内,其前方作业区域的环境信息不同。具体的,利用结构光组件采集前方作业区域中的结构光数据,也即在线激光发射器向前方作业区域发射线激光之后,由激光摄像头采集线激光探测到环境的激光图像数据。同时,利用视觉传感器采集前方作业区域中的图像数据,该图像数据为可见光图像数据。
在获取到前方作业区域中的结构光数据和图像数据之后,首先基于图像数据识别前方作业区域中是否存在物体以及物体所属的类别。
值得注意的是,在本申请实施例中,物体类别是从物体对自移动设备作业影响的角度对物体进行分类。例如,物体类别大致可分为:易卡困、易缠绕、易脏以及可移动等,但并不限于以上类别。应理解,易卡困物体是指作业环境中存在的一些容易致使自移动设备被困住和卡死的物体;易缠绕物体是指作业环境中存在的一些容易缠绕自移动设备的物体;易脏污物体是指作业环境中存在的容易导致其所在区域比较脏污的一些物体;可移动物体是指作业环境中存在的一些可移动的物体,这些物体一方面可能会干扰自移动设备的正常行进,另一方面会干扰自移动设备执行作业任务,例如被可移动物体占据的地方自移动设备将无法打扫,需要采取一些特殊处理方式。
以家庭环境为例,假设家庭环境中存在以下物体:垃圾桶、充电座、鞋子、碗盆、U型椅、吧台椅、移门滑轨、衣服、地毯边缘、电线、人以及动物等。在这些物体中,有些物体属于易卡困类别的物体,有些属于易缠绕类别的物体,有些属于易脏污类别的物体,还有些属于可移动的物体。例如,易卡困类别的物体包括但不限于:U型椅、吧台椅、移门滑轨。易缠绕类别的物体包括但不限于:衣服、地毯边缘、电线。易脏污类别的物体包括但不限于:垃圾桶、充电座、鞋子、碗盆。又例如,可移动的物体包括但不限于:人、动物等。
在本申请实施例中,将基于结构光模组中视觉传感器采集到的前方作业区域中的图像数据识别到的前方作业区域中存在的物体所属的类别称为目标物体类别。目标物体类别可以包括上述列举的几种物体类别中的任一种或几种,对此不做限定。另外,在本申请实施例中,对基于结构光模组中视觉传感器采集到的图像数据识别前方作业区域中存在的目标物体类别的方式不做限定。下面对识别目标物体类别的实施方式进行举例说明:
可选地,自移动设备可以利用AI(Artificial Intelligence,人工智能)算法,对结构光模组中视觉传感器采集到的图像数据进行物体识别,从而得到自移动设备前方作业区域中存在的目标物体类别。可选的,AI识别结果包括物体是作业环境中的哪个物体,以及物体所属类别。其中,基于AI算法对结构光模组中视觉传感器采集到的图像数据进行物体识别,具体可以是:采用预先训练的神经网络模型对结构光模组中视觉传感器采集到的图像数据进行物体识别。具体地,可以预先准备大量的样本物体图像,并标注样本物体图像中物体所属的物体类别,根据样本物体图像及其标注结果进行模型训练,获取能够识别物体类别的图像识别模型,将该图像识别模型内置到自移动设备内。之后,在结构光模组中的视觉传感器采集到自移动设备前方作业区域内的图像数据之后,自移动设备可利用该图像识别模型对图像数据进行物体识别,从而得到自移动设备前方作业区域中存在的目标物体类别。其中,图像识别模型的网络结构包括但不限于:CNN(Convolutional Neural Networks,卷积神经网络)、RNN(Recurrent Neural Network,循环神经网络)以及LSTM(Long Short-Term Memory,长短期记忆人工神经网络)。
进一步可选的,本申请实施例采用的图像识别模型包括特征提取网络和分类神经网络。图像识别模型基于图像数据识别自移动设备前方作业区域中存在的物体类别的一种实施过程是:将图像数据输入特征提取网络中,在图像数据上生成至少一个候选框,对每个候选框对应的特征映射图进行池化处理以得到第一特征向量;进一步,基于每个候选框对应的第一特征向量从至少一个候选框中选定有效候选框;有效候选框是指圈定图像区域中包含有物体的候选框;将有效候选框中的图像输入到分类神经网络中,对有效候选框中的图像进行特征提取以得到第二特征向量;基于第二特征向量识别出有效候选框中物体所属的物体类别。例如,可以将第二特征向量与预先维护的特征库中已知物体对应的特征向量进行匹配,将特征库中被第二特征向量匹配中的特征向量对应的已知物体所属的类别,作为有效候选框中物体所属的类别。
可选的,特征库中维护了已知物体及其物体类别的关联关系。其中,已知物体可以指其所属物体类别已经确认的物体。
在本申请的上述或下述实施例中,在自移动设备识别出前方作业区域中存在的目标物体及其所属的目标物体类别之后,还可以将识别出的目标物体及其所属的目标物体类别更新到环境地图中与该目标物体对应的地图区域中。
进一步可选的,在本申请实施例中,作业环境中可能存在一些未被AI算法识别到的物体及其所属的物体类别,针对这些物体及其所属的物体类别,允许用户将这些物体及其所属的物体类别等信息添加至环境地图中。具体的,用户可以在与自移动设备绑定的终端设备的显示屏幕上或在自移动设备的显示屏幕上查看环境地图,并将环境地图中已经存在的物体及其所属物体类别与自移动设备所在作业环境中实际存在的物体及其所属物体类别进行比对;若作业环境中实际存在的物体及其所属的物体类别与环境地图中已经记载的物体及其所属物体类别不符合,用户可以更新环境地图,以便环境地图更加准确反映作业环境中实际存在的物体及其所属物体类别的信息,以使环境地图与作业环境更加契合。应理解,与作业环境更加契合的环境地图能够帮助提高自移动设备更加准确感知作业环境中存在物体的信息,有利于提高移动设备的作业性能。
其中,作业环境中实际存在的物体及其所属的物体类别与环境地图中已经记载的物体及其所属物体类别不符合的情形包括以下几种:
情形之一:作业环境中实际存在一些物体及其所属的物体类别并未出现在环境地图中;
情形之二:作业环境中实际存在一些物体及其所属物体类别与环境地图中标记的信息不一致。
为了便于理解和区分,将作业环境中实际存在但未出现在环境地图中物体称为第一物体。用户可以根据第一物体在作业环境中的位置信息,在环境地图中增加该第一物体及其所属物体类别等信息。
为了便于理解和区分,将作业环境中真实存在但在环境地图中标记有误的物体称为第二物体。用户可以在环境地图中将第二物体的相关信息进行修改,以与其真实信息相契合。
进一步的,为了满足用户对已知物体类别的修改需求,自移动设备还可以在接收到用户针对已知物体类别的修改请求时,展示已知物体类别,以及响应针对已知物体类别发起的第一修改操作,获取修改后的物体类别;已知物体类别是用户在环境地图上设定的和/或自移动设备基于历史图像数据识别到的。其中,第一修改操作包括以下至少一种:修改物体类别的名称、调整物体类别对应的物体、以及删除已知物体类别。
其中,针对修改物体类别的名称的修改操作,则修改前该物体类别的物体变成修改后物体类别下的物体。例如,将鞋子的物体类别从易脏污类别修改为易缠绕类别。
其中,针对调整物体类别对应的物体的修改操作,被调整的物体的物体类别发生改变。例如,易卡困类别下的物体从U型椅、吧台椅、移门滑轨变为U型椅和吧台椅,也即将移门滑轨从易卡困类别中移除。
其中,针对删除已知物体类别的修改操作,则被删除的已知物体类别下的物体后续将不会被识别为被删除的已知物体类别。
其中,在发生作业环境中实际存在的物体及其所属的物体类别与环境地图中已经记载的物体及其所属物体类别不符合的情形时,用户可以在环境地图中设定正确的物体及其所属物体类别,此时,由用户在环境地图中设定物体类别为已知物体类别。
需要指的是,若自移动设备获知已知物体类别信息被修改了,则后续可以结合修改后的已知物体类别信息并基于视觉传感器采集的图像数据识别自移动设备前方作业区域中的目标物体类别。
在基于图像数据识别前方作业区域中存在的目标物体类别之后,选择与目标物体类别适配的目标机器行为模式。应理解,与目标物体类别适配的目标机器行为模式是基于目标物体类别有针对性地为自移动设备选择的目标机器行为模式。自移动设备按照目标机器行为模式针对前方作业区域中存在的目标物体执行作业任务时,自身作业能力受到目标物体的影响较小。下面分情况介绍与目标物体类别适配的目标机器行为模式:
情况1:针对易卡困物体,目标机器行为模式可以是避障行为模式或加速通行行为模式。假如自移动设备在执行作业任务中遇到易卡困物体,若易卡困物体不可通行,则自移动设备按照避障行为模式避开易卡困物体;若易卡困物体可通行,则自移动设备按照加速通行行为模式快速从易卡困物体内部通行;以减少被易卡困物体困住卡死的概率。
以扫地机器人为例,若扫地机器人在执行清扫任务过程中遇到诸如U型椅或吧台椅等不可通行的易卡困物体,则扫地机器人放弃清扫U型椅或吧台椅的周边区域,并按照避障行为模式避开U型椅或吧台椅等不可通行的易卡困物体。
若扫地机器人在执行清扫任务过程中遇到诸如移门滑轨等可通行的易卡困物体,则扫地机器人放弃清扫移门滑轨的周边区域,并按照加速通行行为模式加速通过诸如移门滑轨等易卡困物体。
情况2:针对易缠绕物体,目标机器行为模式可以是减速作业行为模式。此时,自移动设备按照减速作业行为模式在作业过程中放缓作业速度,以减少被易缠绕物体缠绕的概率。
例如,若扫地机器人在执行清扫任务过程中遇得诸如衣服、地毯边缘、电线等易缠绕物体,扫地机器人按照减速作业行为模式可以适当地关闭边刷或滚刷,或者减慢边刷的转速,也即停止清扫作业或减慢清扫作业速度。当扫地机器人远离易缠绕物体后,再将恢复至正常清扫状态。
情况3:针对易脏污物体,目标机器行为模式可以是加强作业行为模式。此时,自移动设备按照加强作业行为模式提高作业能力对易脏污物体进行加强处理。
例如,若扫地机器人在执行清扫任务过程中遇到诸如垃圾桶、充电座、鞋子以及碗盆等易脏污物体,扫地机器人按照加强作业行为模式加强对于这类周围的清洁力度。实际应用中,扫地机器人可以加快边刷和滚刷的转速,以及加强风机的吸力。同时,扫地机器人也可以在这类物体的周围执行多次重复式的清扫,或执行多次绕圈清洁。
情况4:针对可移动物体,目标机器行为模式可以是语音提示行为模式。其中,语音提示行为模式可以实现自移动设备与可移动物体的交互,提示可移动物体避让自移动设备需要执行作业任务的区域。
例如,若扫地机器人在执行清扫任务过程中遇到人,则扫地机器人按照语音提示行为模式语音提示人离开当前位置,或者抬起双脚,让扫地机器人完成对人脚所占区域的清洁任务。
在本申请的上述或下述实施例中,为了准确且快速地选择与目标物体类别适配的目标机器行为模式,可以预先关联存储已知物体类别及其对应的机器行为模式,以便自移动设备可以根据目标物体类别,查询已知物体类别及其对应的机器行为模式,以得到与目标物体类别对应的机器行为模式,作为目标机器行为模式。
在本申请的上述或下述实施例中,已知物体类别及其对应的机器行为模式可以由自移动设备设定,也可以由用户设定,对此不做限制。
进一步可选的,还可以支持用户根据实际应用需求修改已知物体类别对应的机器行为模式。例如,对机器行为模式至少包括自移动设备执行作业任务所需的行为参数和行为动作的进行修改。其中,行为参数包括但不限于:作业次数、风机吸力大小、边刷转速、执行动作时相对目标物体的距离值和方向角度等。行为动作例如加速通行动作、减速作业、避障动作、加强作业动作以及语音提示动作等。
于是,在本申请的上述或下述实施例中,上述方法还包括:展示已知物体类别对应的机器行为模式,响应针对机器行为模式发起的第二修改操作,获取修改后的机器行为模式;其中,第二修改操作包括以下至少一种:修改已有行为参数、增加新的行为参数、删除已有行为参数、修改已有机器动作参数、增加新的机器动作参数以及删除已有机器动作参数。
在自移动设备选择与目标物体类别适配的目标机器行为模式,为了提高自移动设备的作业性能,可以在结构光数据的辅助下,按照目标机器行为模式控制自移动设备针对前方作业区域中存在的目标物体执行作业任务。例如,结构光数据可探测到物体的三维点云数据、轮廓、形状、高度、宽度、深度(也即物体距自移动设备的距离)、长度、厚度以及体积等信息,结合上述结构光数据,可以提高自移动设备作业性能。
本申请实施例提供的作业方法,充分利用并结合自移动设备上的结构光模组获得更为丰富的环境数据,对作业环境中的不同物体进行类型识别,进而针对不同类型的物体采用不同的机器行为模式执行作业任务,做到更有针对性地、精细化、有目的性的执行作业任务,不仅能缩短作业时间,还能提高作业能力,改善用户的使用体验。特别是针对扫地机器人,针对不同类型的物体可以采用不同的机器行为模式执行清洁任务,做到更有针对性地、精细化、有目的性的清洁作业,不仅能缩短清洁时间,还能提高清洁能力,改善用户的使用体验。
在本申请的上述或下述实施例中,为了提高自移动设备前方作业区域中存在的物体类别的识别准确度,在选择与目标物体类别适配的目标机器行为模式之前,还可以结合结构光数据对基于图像数据识别出的目标物体类别进行修正。例如,结合结构光数据识别物体的高度、宽度、长度或体积是否与目标物体类别匹配。又例如,考虑到同一物体类别的物体的轮廓具体相似性,还可以结合结构光数据识别物体的轮廓,基于轮廓信息对目标物体类别进行修正。
于是,在本申请的上述或下述实施例中,在选择与目标物体类别适配的目标机器行为模式之前,上述方法还包括:基于结构光数据识别前方作业区域中存在的目标物体轮廓;根据目标物体轮廓对目标物体类别进行修正。
示例性的,基于结构光数据识别前方作业区域中存在的目标物体轮廓时,可以先基于结构光数据获取目标物体的三维点云数据,基于目标物体的三维点云数据对目标物体进行三维重建,并对三维重建得到的目标物体进行轮廓特征提取,以获取目标物体轮廓。
在本申请的一些可选实施例中,可以预先提取属于任一物体类别下的物体的轮廓特征。若目标物体轮廓与属于目标物体类别下的物体的轮廓特征相匹配,则不需要对目标物体类别进行修正。若目标物体轮廓与属于目标物体类别下的物体的轮廓特征不匹配,则将所述目标物体轮廓对应的物体类别作为参考物体类别,根据所述参考物体类别对所述目标物体类别进行修正;其中,不同物体类别具有不完全相同的物体轮廓。在本申请一些可选的实施例中,根据参考物体类别对目标物体类别进行修正的一种实施过程是:在目标物体类别与参考物体类别之间的差异度小于设定阈值的情况下,将目标物体类别直接修正为参考物体类别;或者,在目标物体类别与参考物体类别之间的差异度大于或等于设定阈值的情况下,确定可以在目标物体类别和参考物体类别之间过度的中间态物体类别,将目标物体类别修正为中间态物体类别。
在本申请一些可选的实施例中,根据目标物体轮廓对所述目标物体类别进行修正的一种实施过程是:根据目标物体轮廓对目标物体类别进行更细粒度的划分,以得到目标物体类别下的子类别。例如,针对易卡困类别,既有诸如移门滑轨这样的非镂空型物体,也有诸如U型椅和吧台椅等这样的镂空型物体。自移动设备遇到非镂空型物体时,可以加速从非镂空型物体上快速通过,以避免被非镂空型物体困住卡死。而自移动设备遇到镂空型物体时,可以对镂空型物体的物体类别进行细化,识别该镂空型物体是否可以通行。
于是,在本申请一些可选的实施例中,根据目标物体轮廓对目标物体类别进行更细粒度的划分,以得到目标物体类别下的子类别的一种实施过程是:在目标物体类别为易卡困类别情况下,结合目标物体轮廓,确定目标物体轮廓对应目标物体是否为镂空型物体;在目标物体为镂空型物体的情况下,结合目标物体的镂空宽度和自移动设备的机身宽度,将目标物体类别划分为易卡困且不可通过和易卡困且可通 过两种子类别。
示例性的,可以根据目标物体轮廓中贴近工作面(例如地面、桌面以及玻璃面)的下边缘上多个位置点的高度信息以及对应的横向距离信息,识别目标物体是否为镂空型物体,以及是属于易卡困且不可通过子类别还是属于易卡困且可通过子类别。
进一步可选的,首先识别目标物体轮廓中贴近工作面(例如地面、桌面以及玻璃面)的下边缘上是否存在高于工作面的多个位置点。若存在高于工作面的多个位置点,确定目标物体为镂空型物体。在目标物体为镂空型物时,识别目标物体的下边缘上是否存在连续的镂空宽度大于机身宽度高度大于机身高度的多个位置点,若存在连续的镂空宽度大于机身宽度且高度大于机身高度的多个位置点,则目标物体类别划分为易卡困且可通过。若不存在连续的镂空宽度大于机身宽度或高度大于机身高度的多个位置点,则目标物体类别划分为易卡困且不可通过。
其中,目标物体高度根据连续的多个位置点的高度进行求平均值计算获取。镂空宽度是指连续的多个位置点对应的横向距离信息,可以通过连续的多个位置点中的第一个位置点和最后一个位置点的位置坐标计算得到,也即第位置点与最后一个位置点之间的距离信息。参见图5所示的圆拱形镂空型物体,圆圈代表的是圆拱形镂空型物体上的多个位置点,对多个位置点距离地面的高度进行求平均值,作为圆拱形镂空型物体的高度,若圆拱形镂空型物体的高度大于机身高度,再进一步计算多个位置点的横向距离信息l,若横向距离信息l大于机身宽度,则自移动设备可以从圆拱形镂空型物体的内部穿行,也即圆拱形镂空型物体的物体类别划分为易卡困且可通过。若横向距离信息l小于等于机身宽度,或圆拱形镂空型物体的高度小于等于机身高度,则自移动设备不可以从圆拱形镂空型物体的内部穿行,也即圆拱形镂空型物体的物体类别划分为易卡困且不可通过。
在本申请的上述或下述实施例中,在结构光数据的辅助下,按照目标机器行为模式控制自移动设备针对前方作业区域中存在的目标物体执行作业任务的一种实施过程是:基于结构光数据,识别前方作业区域中存在的目标物体的位置信息和/或外形参数;根据目标物体的位置信息和/或外形参数,按照目标机器行为模式控制自移动设备针对目标物体执行作业任务。
其中,目标物体的位置信息可以是该目标物体的三维点云数据,外形参数包括但不限于轮廓、高度、宽度、深度以及长度等信息。
下面分情况介绍根据目标物体的位置信息和/或外形参数,按照目标机器行为模式控制自移动设备针对目标物体执行作业任务的实施过程。
情形1:在目标物体类别为易卡困且不可通过的情况下,选择避障行为模式作为与目标物体类别适配的目标机器行为模式。针对易卡困且不可通过的目标物体,除了需要考虑目标物体的位置信息以判断自移动设备当前与目标物体的距离是否接近避障距离,还需要至少考虑外形参数中的轮廓参数,以减少自移动设备在避障过程中被目标物体的轮廓边缘损伤。相应地,根据目标物体的位置信息和/或外形参数,按照目标机器行为模式控制自移动设备针对目标物体执行作业任务的一种实施过程是:基于目标物体的位置信息和外形参数中的轮廓参数,按照避障行为模式控制自移动设备针对目标物体进行避障。
例如,若将U型椅或吧台椅划分为易卡困且不可通过的目标物体,则自移动设备当前与U型椅或吧台椅的距离接近避障距离时,开始进行避障,并在避障过程中实施监测自身是否触碰到U型椅或吧台椅的轮廓边缘。
情形2:在目标物体类别为易卡困且可通过的情况下,选择加速通行行为模式作为与目标物体类别适配的目标机器行为模式。
若目标物体为非镂空物体,例如移门滑轨,则控制自移动设备按照加速通行行为模式快速越过移门滑轨。
若目标物体为镂空物体,例如,U型椅或吧台椅。除了需要考虑目标物体的位置信息以判断自移动设备当前与目标物体的距离是否接近避障距离,还需要至少考虑外形参数中的镂空宽度和高度,以使得自移动设备从目标物体可以穿行的区域穿行出去,以减少自移动设备在穿行过程中与目标物体相撞。相应地,根据目标物体的位置信息和/或外形参数,按照目标机器行为模式控制自移动设备针对目标物体执行作业任务的一种实施过程是:基于目标物体的位置信息和外形参数中的镂空宽度和高度,按照加速通行行为模式控制自移动设备穿过目标物体的镂空区域以继续执行作业任务。
在本申请一些可选的实施例中，加速通行行为模式包括：指示加速动作的第一指示参数和加速动作所需的第一执行参数，第一执行参数包括方向参数、距离参数和速度参数。其中，第一指示参数主要指示要执行的行为动作是否为加速动作。第一执行参数是指执行加速动作所需的参数，例如方向参数、距离参数和速度参数中至少一种。其中，距离参数可以包括在自移动设备与目标物体相距多远时开始启动加速通行模式，或者在自移动设备与目标物体相距多远时结束加速通行模式。
若目标物体为非镂空物体，例如移门滑轨，则在自移动设备距离移门滑轨15㎝时，自移动设备朝着与移门滑轨呈45度夹角的方向，以每秒30厘米的速度加速通过移门滑轨，以及在自移动设备离开移门滑轨至少10㎝之后，可以退出加速通行模式，按照正常速度通行模式移动。
若目标物体为镂空物体，例如U型椅或吧台椅，相应地，基于目标物体的位置信息和外形参数中的镂空宽度和高度，按照加速通行模式控制自移动设备穿过目标物体的镂空区域以继续执行作业任务的一种实施过程是：基于目标物体的位置信息和外形参数中的镂空宽度和高度，结合方向参数，调整自移动设备的朝向，以使自移动设备朝向目标物体的镂空区域；按照距离参数和速度参数，控制自移动设备沿着当前朝向加速直至穿过目标物体的镂空区域为止。
应理解，自移动设备所朝向的目标物体的镂空区域是指自移动设备可穿行的镂空区域。若目标物体为U型椅或吧台椅，则在自移动设备距离U型椅或吧台椅15㎝时，自移动设备朝着与U型椅或吧台椅呈45度夹角的方向，以每秒30厘米的速度加速通过U型椅或吧台椅的可穿行的镂空区域，以及在自移动设备离开U型椅或吧台椅至少10㎝之后，可以退出加速通行模式，按照正常速度通行模式移动。
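下面用一个Python草图示意加速通行行为模式的第一执行参数及进入/退出该模式的判断逻辑。草图中的数值取自上文示例（15㎝、10㎝、45度、每秒30厘米），字段名与函数名为说明而假设：

```python
from dataclasses import dataclass

@dataclass
class FastPassParams:
    """加速通行行为模式的第一执行参数（数值为上文示例取值，字段名为假设）。"""
    approach_angle_deg: float = 45.0   # 方向参数：与目标物体形成的夹角
    start_dist_m: float = 0.15         # 距离参数：距目标物体 15cm 时启动加速通行
    exit_dist_m: float = 0.10          # 距离参数：越过目标物体 10cm 后退出加速通行
    speed_mps: float = 0.30            # 速度参数：每秒 30 厘米

def fast_pass_state(dist_to_obj_m: float, passed_dist_m: float,
                    p: FastPassParams) -> str:
    """根据与目标物体的距离决定是否进入/退出加速通行模式。"""
    if passed_dist_m >= p.exit_dist_m:
        return "normal"                # 已越过目标物体足够远，恢复正常速度通行模式
    if dist_to_obj_m <= p.start_dist_m:
        return "fast_pass"             # 进入加速通行：按 45 度方向、0.3m/s 加速穿行
    return "approach"                  # 尚未到达启动距离，继续正常接近

# 用法示例：距目标物体 0.12m、尚未越过目标物体，输出 "fast_pass"
print(fast_pass_state(dist_to_obj_m=0.12, passed_dist_m=0.0, p=FastPassParams()))
```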
情形3：在目标物体类别为易缠绕的情况下，选择减速作业行为模式作为与目标物体类别适配的目标机器行为模式。相应地，根据目标物体的位置信息和/或外形参数，按照目标机器行为模式控制自移动设备针对目标物体执行作业任务的一种实施过程是：基于目标物体的外形参数中的轮廓边缘位置，按照减速作业行为模式控制自移动设备针对目标物体执行作业任务。
例如，对于诸如衣服、电线、地毯等易缠绕物体，基于结构光数据可以识别出这类物体的轮廓边缘位置，基于这类物体的轮廓边缘位置，可以使得诸如扫地机器人等自移动设备在不缠绕上述易缠绕物体的前提下减速作业，以降低这类物体周围区域发生漏扫的概率。
在本申请一些可选的实施例中，减速作业行为模式包括：指示减速作业的第二指示参数和减速作业所需的第二执行参数，第二执行参数至少包括避障距离以及小于转速阈值的第一边刷转速。其中，转速阈值、第一边刷转速根据实际应用需求设置。
若目标物体为诸如衣服或电线等只能在其周边区域作业、不能在其上表面作业的易缠绕物体，则自移动设备可以基于第二指示参数和第二执行参数，在距离目标物体大于避障距离的周围区域内，按照第一边刷转速驱动其边刷执行清洁任务。
若目标物体为诸如地毯等不仅需要在其周边区域作业,还需要在其上表面作业(也即在目标物体的上方作业)的易缠绕物体,相应地,基于目标物体的外形参数中的轮廓边缘位置,按照减速作业行为模式控制自移动设备针对目标物体执行作业任务的一种实施过程是:
基于目标物体的轮廓边缘位置,结合避障距离,控制自移动设备在距离目标物体大于避障距离的周围区域内执行作业任务;以及在自移动设备爬升至目标物体上方执行作业任务时,基于外形参数中的轮廓上边缘的高度信息,根据第一边刷转速控制自移动设备驱动其边刷在目标物体上方执行清洁任务。其中,轮廓上边缘是指轮廓中远离工作面的边缘,是轮廓中相对其他边缘最高的边缘。
结合轮廓上边缘的高度信息，可以评估目标物体的作业难度，也可以对目标物体进一步进行分类。以地毯为例，有的地毯为长毛地毯，有的地毯为短毛地毯。长毛地毯的轮廓上边缘的高度高于短毛地毯的轮廓上边缘的高度，长毛地毯也相对短毛地毯更难清洁。清洁短毛地毯和长毛地毯都需要增大风机的吸力，且长毛地毯需要的风机吸力比短毛地毯更大，而在硬质地板上则不需要很大的风机吸力。因此，还可以根据轮廓上边缘的高度信息调整自移动设备风机的吸力，在保证一定清洁力度的前提下兼顾自移动设备的续航能力。也即，结合轮廓上边缘的高度信息，可以进一步有针对性、有目的地控制自移动设备作业。
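下面给出一个根据工作面类型与地毯轮廓上边缘高度（毯毛长度）选择风机吸力的示意性Python草图。吸力档位的划分与毯毛长度阈值均为说明而假设的示例，并非本申请限定的取值：

```python
def select_fan_suction(surface: str, pile_height_m: float = 0.0) -> int:
    """根据工作面类型与地毯轮廓上边缘高度选择风机吸力档位（档位划分为假设示例）。

    硬质地板吸力最小；短毛地毯需要增大吸力；长毛地毯需要的吸力最大。
    """
    if surface == "hard_floor":
        return 1                       # 硬质地板：不需要很大的风机吸力
    if surface == "carpet":
        LONG_PILE_THRESHOLD = 0.015    # 假设：毯毛高度大于 1.5cm 视为长毛地毯
        return 3 if pile_height_m > LONG_PILE_THRESHOLD else 2
    return 1

# 用法示例：轮廓上边缘高度 0.02m 的地毯被判为长毛地毯，选择最大吸力档位
print(select_fan_suction("carpet", pile_height_m=0.02))
```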
情形4：在目标物体类别为易脏污的情况下，选择加强作业行为模式作为与目标物体类别适配的目标机器行为模式。相应地，根据目标物体的位置信息和/或外形参数，按照目标机器行为模式控制自移动设备针对目标物体执行作业任务的一种实施过程是：基于目标物体的外形参数中的轮廓边缘位置，按照加强作业行为模式控制自移动设备针对目标物体执行作业任务。
在本申请一些可选的实施例中,加强作业行为模式包括:指示加强作业的第三指示参数和加强作业所需的第三执行参数,第三执行参数至少包括作业次数以及大于转速阈值的第二边刷转速;
相应地,基于目标物体的外形参数中的轮廓边缘位置,按照加强作业行为模式控制自移动设备针对目标物体执行作业任务,包括:
基于目标物体的外形参数中的轮廓边缘位置,根据作业次数控制自移动设备在目标物体周围区域多次执行作业任务;以及在每次执行作业任务过程中,根据第二边刷转速,控制自移动设备驱动其边刷在目标物体周围区域执行清洁任务。其中,第二边刷转速根据实际应用需求设置。第二边刷转速可以是大于转速阈值的较大的边刷转速。
应理解,自移动设备在距离目标物体大于避障距离的周围区域内多次执行作业任务。
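下面用一个简短的Python草图示意加强作业行为模式中“按作业次数多次作业、第二边刷转速大于转速阈值”的参数组织方式，其中的转速阈值与转速数值均为假设示例：

```python
def enhanced_cleaning_plan(work_times: int, side_brush_rpm: int,
                           rpm_threshold: int = 900) -> list:
    """按加强作业行为模式生成多轮清洁计划；第二边刷转速应大于转速阈值。"""
    assert side_brush_rpm > rpm_threshold, "加强作业要求第二边刷转速大于转速阈值"
    return [{"round": i + 1,
             "side_brush_rpm": side_brush_rpm,
             "area": "目标物体周围（距离大于避障距离的区域）"}
            for i in range(work_times)]

# 用法示例：在易脏污物体周围作业 2 次，边刷转速 1500rpm（假设数值）
for step in enhanced_cleaning_plan(work_times=2, side_brush_rpm=1500):
    print(step)
```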
情形5：在目标物体类别为可移动的情况下，选择语音提示行为模式作为与目标物体类别适配的目标机器行为模式。相应地，根据目标物体的位置信息和/或外形参数，按照目标机器行为模式控制自移动设备针对目标物体执行作业任务的一种实施过程是：基于目标物体的位置信息，按照语音提示行为模式控制自移动设备针对目标物体发出语音提示信息，以提示目标物体改变其状态；以及结合针对目标物体采集到的结构光数据，识别目标物体的最新状态，并在最新状态符合语音提示要求的情况下，继续控制自移动设备针对目标物体执行作业任务。
在可移动的目标物体阻挡自移动设备无法继续向前移动时,自移动设备可以语音提示可移动的目标物体改变其姿态,使得自移动设备可以继续向前移动。
以可移动的目标物体是人为例，人所在的位置，扫地机器人通常无法打扫。所以，扫地机器人可以通过播放语音提示，提醒用户避让（用户站立时），或者提醒用户抬起双脚（用户坐着时），也即提醒用户改变其状态。由于人在坐着状态时，通过视觉传感器采集的图像数据仅仅可以识别出人的大概位置，但无法判断人的脚是否放在地面上；当扫地机器人语音提醒用户抬起双脚后，仅靠图像数据的识别结果无法判断人脚是否已经抬起，而结构光组件可以通过比对提示语音前后该人的大概位置处是否有障碍物的改变，来判断用户的双脚是否抬起。如已抬起，则扫地机器人穿过用户所在区域去清洁；否则，扫地机器人绕开用户清洁。
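下面给出“比对提示语音前后该人大概位置处的结构光数据变化，判断双脚是否抬起”的一个示意性Python草图。该草图为假设性示例，判定所用的半径与点数阈值均为演示取值：

```python
def feet_lifted(points_before, points_after, person_xy,
                radius: float = 0.4, min_removed: int = 5) -> bool:
    """比对语音提示前后该人大概位置附近的结构光障碍物点数量变化，判断双脚是否抬起。"""
    def count_near(points):
        px, py = person_xy
        return sum(1 for x, y, _z in points
                   if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2)
    # 提示后该区域的障碍物点明显减少，则认为双脚已抬起
    return count_near(points_before) - count_near(points_after) >= min_removed

# 用法示例：提示前该位置附近存在障碍物点，提示后这些点消失，判定双脚已抬起
before = [(0.1 * i, 0.0, 0.02) for i in range(40)]          # 提示前的结构光点（演示数据）
after = [p for p in before if abs(p[0] - 1.0) > 0.4]        # 提示后该人位置附近的点消失
print(feet_lifted(before, after, person_xy=(1.0, 0.0)))     # 输出 True，可穿过该区域清洁
```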
为了便于理解，以自主移动设备为家庭服务机器人为例，结合家庭服务机器人在家庭环境中执行任务的场景，对本申请实施例提供的自移动设备的作业方法进行详细说明。
应用场景实例1:
家庭服务机器人主要工作在家庭环境中。图6所示是实际生活中较为常见的户型图，家庭服务机器人的作业区域可能是主卧、客厅、次卧、厨房、卫生间、阳台等区域。家庭服务机器人在作业区域内行进过程中，利用结构光模组中的视觉传感器（如RGB摄像头）采集家庭环境中的RGB图像数据，并基于RGB图像数据识别家庭服务机器人前方作业区域中存在的目标物体类别。
若遇到诸如移门滑轨的易卡困且可通行的障碍物，家庭服务机器人可以采集移门滑轨的结构光数据，具体是控制线激光发射器向移门滑轨发射线激光，并利用激光摄像头采集包含线激光形成在移门滑轨上的激光条纹的激光图像。基于结构光数据可以更加准确地识别移门滑轨的位置、长度、高度和角度等信息，家庭服务机器人按照移门滑轨的相关信息调整机身姿态，使得家庭服务机器人与移门滑轨形成一个合适的夹角，并在与移门滑轨的距离达到避障距离时加速通过移门滑轨，合适的角度和速度有助于提高家庭服务机器人的越障性能。
若遇到诸如U型椅、吧台椅等易卡困的障碍物,基于采集到的U型椅的结构光数据,可以识别出U型椅的两条椅腿的准确位置,使得家庭服务机器人能在避开U型椅腿的同时,又不至于漏扫U型椅腿中间的区域。基于采集到的吧台椅的结构光数据,可以识别出吧台椅整个底座圆盘的准确位置,使得家庭服务机器人能准确的绕着底座圆盘进行清扫而不发生卡困。
若遇到诸如衣服、电线、地毯等易缠绕的障碍物，基于采集到的这类障碍物的结构光数据可以准确定位这类物体的轮廓边缘位置，使得家庭服务机器人能在不缠绕的前提下，尽可能近地靠近这类物体，避免发生漏扫的问题。同时，结构光数据也能给出地毯毛的长度，确认地毯是长毛地毯还是短毛地毯，有助于家庭服务机器人调整合适的风机吸力（长毛地毯需要增大风机吸力，短毛地毯吸力比长毛地毯小，但都比硬质地面吸力大），这样在保持清洁力度的前提下，又能保证家庭服务机器人的续航能力。
若遇到诸如垃圾桶、充电座、鞋子、碗盆等易脏污的障碍物，基于采集到的这类障碍物的结构光数据可以准确定位这类物体的轮廓边缘位置，使得家庭服务机器人能在不碰撞这类物体的前提下，尽可能近地靠近这类物体，对其周围区域进行加强清洁，避免发生漏扫的问题。
若遇到人,基于结构光数据可以通过比对提示语音前后该人的大概位置是否有障碍物的改变,来判断用户的双脚是否抬起。如已抬起,则家庭服务机器人穿过用户去清洁,否则,家庭服务机器人绕开用户清洁。
图7为本申请示例性实施例提供的一种自主移动设备的结构示意图。如图7所示,该自主移动设备包括:设备本体70,设备本体70上设置有一个或多个存储器71、一个或多个处理器72以及结构光模组73;结构光模组73包括:结构光组件731和视觉组件732。其中,结构光组件731至少包括激光摄像头7311和线激光发射器7312。视觉组件732至少包括视觉传感器7321。在图7中,以线激光发射器7312分布在激光摄像头7311两侧为例进行图示,但并不限于此。关于结构光模组73的其它实现结构可参见前述实施例中的描述,在此不再赘述。
其中,一个或多个存储器71用于存储计算机程序;一个或多个处理器72用于执行计算机程序,以用于:利用结构光模组中的结构光组件和视觉传感器分别采集前方作业区域中的结构光数据和图像数据;基于图像数据识别前方作业区域中存在的目标物体类别,选择与目标物体类别适配的目标机器行为模式;在结构光数据的辅助下,按照目标机器行为模式控制自移动设备针对前方作业区域中存在的目标物体执行作业任务。
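为便于理解上述处理器所执行的三个步骤之间的衔接关系，下面给出一个简化的Python控制流程草图。草图中的各函数（capture_structured_light、capture_image、classify、execute_task）均为假设的占位接口，并非本申请披露的具体实现：

```python
# 与目标物体类别适配的机器行为模式映射表（示例性对应关系，取自上文各情形）
MODE_TABLE = {
    "易卡困且不可通过": "避障行为模式",
    "易卡困且可通过": "加速通行行为模式",
    "易缠绕": "减速作业行为模式",
    "易脏污": "加强作业行为模式",
    "可移动": "语音提示行为模式",
}

def work_step(capture_structured_light, capture_image, classify, execute_task):
    """单次作业决策：采集数据 -> 识别类别 -> 选择行为模式 -> 在结构光数据辅助下执行作业。"""
    structured = capture_structured_light()            # 结构光组件采集前方作业区域的结构光数据
    image = capture_image()                            # 视觉传感器采集前方作业区域的图像数据
    target_cls = classify(image)                       # 基于图像数据识别目标物体类别
    mode = MODE_TABLE.get(target_cls, "正常作业模式")   # 选择与类别适配的目标机器行为模式
    execute_task(mode, structured)                     # 按该模式并结合结构光数据执行作业任务

# 用法示例：用占位函数演示一次决策，输出“执行: 减速作业行为模式”
work_step(lambda: {"points": []}, lambda: "rgb_frame",
          lambda img: "易缠绕", lambda mode, sl: print("执行:", mode))
```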
进一步,除了上述提到的各种组件,本实施例的自主移动设备还可以包括一些基本组件,例如通信组件74、电源组件75、驱动组件76等等。
其中，一个或多个存储器主要用于存储计算机程序，该计算机程序可被一个或多个处理器执行，致使处理器控制自主移动设备执行相应任务。除了存储计算机程序之外，一个或多个存储器还可被配置为存储其它各种数据以支持在自主移动设备上的操作。这些数据的示例包括用于在自主移动设备上操作的任何应用程序或方法的指令，自主移动设备所在环境/场景的地图数据，工作模式，工作参数等等。
通信组件被配置为便于通信组件所在设备和其他设备之间有线或无线方式的通信。通信组件所在设备可以接入基于通信标准的无线网络，如WiFi、2G、3G、4G、5G或它们的组合。在一个示例性实施例中，通信组件经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中，通信组件还可以包括近场通信（NFC）模块、射频识别（RFID）技术、红外数据协会（IrDA）技术、超宽带（UWB）技术、蓝牙（BT）技术等。
可选地,驱动组件可以包括驱动轮、驱动电机、万向轮等。可选地,本实施例的自主移动设备可实现为扫地机器人,则在实现为扫地机器人的情况下,自主移动设备还可以包括清扫组件,清扫组件可以包括清扫电机、清扫刷、起尘刷、吸尘风机等。不同自主移动设备所包含的这些基本组件以及基本组件的构成均会有所不同,本申请实施例仅是部分示例。

Claims (31)

  1. 一种结构光模组,其特征在于,包括:第一摄像头和分布于所述第一摄像头两侧的线激光发射器;所述结构光模组还包括:第二摄像头;
    其中,所述线激光发射器负责对外发射线激光,所述第一摄像头用于在所述线激光发射器发射线激光期间采集由所述线激光探测到的第一环境图像,所述第二摄像头用于采集其视场范围内的第二环境图像;所述第一环境图像是包含所述线激光在遇到物体后产生的激光条纹的激光图像,所述第二环境图像是不包含所述激光条纹的可见光图像。
  2. 根据权利要求1所述的模组,其特征在于,还包括:用于指示所述第二摄像头的工作状态的指示灯,且所述指示灯亮,表示所述第二摄像头处于工作状态。
  3. 根据权利要求2所述的模组,其特征在于,所述指示灯与所述第二摄像头对称设置在所述第一摄像头两侧。
  4. 根据权利要求1所述的结构光模组,其特征在于,所述第一摄像头的光轴相对于与地面平行的水平面向下倾斜第一角度,所述线激光发射器的光轴相对于所述水平面向下倾斜第二角度,所述第二角度小于第一角度。
  5. 根据权利要求4所述的结构光模组,其特征在于,所述第二摄像头的光轴与所述水平面平行。
  6. 根据权利要求3所述的模组，其特征在于，在安装位置上，所述线激光发射器、所述指示灯、所述第一摄像头和所述第二摄像头位于同一高度。
  7. 根据权利要求3所述的模组，其特征在于，还包括：固定座；所述线激光发射器、所述指示灯、所述第一摄像头和所述第二摄像头装配在所述固定座上。
  8. 根据权利要求7所述的模组,其特征在于,所述固定座包括:主体部和位于所述主体部两侧的端部;其中,所述指示灯、所述第一摄像头和所述第二摄像头装配在所述主体部上,所述线激光发射器装配在所述端部上;
    其中,所述端部的端面朝向参考面,以使所述线激光发射器的中心线与所述第一摄像头的中心线相交于一点;所述参考面是与所述主体部的端面或端面切线垂直的平面。
  9. 根据权利要求8所述的模组,其特征在于,所述主体部的中间位置开设有三个凹槽,所述指示灯、所述第一摄像头和所述第二摄像头安装于相应的凹槽内;所述端部上设有安装孔,所述线激光发射器安装于所述安装孔内。
  10. 根据权利要求7所述的模组,其特征在于,还包括:固设在所述固定座后方的模组控制器;
    所述线激光发射器、所述第一摄像头以及所述指示灯分别与所述模组控制器电连接;以及在所述结构光模组应用到自移动设备上时,所述模组控制器和所述第二摄像头分别与所述自移动设备的主控制器电连接;
    所述主控制器,用于向所述模组控制器发送所述第二摄像头的工作状态信息;所述模组控制器,用于根据所述第二摄像头的工作状态信息,控制所述指示灯的亮灭状态。
  11. 根据权利要求10所述的模组,其特征在于,还包括:装配于所述固定座上方的固定盖;所述固定盖与所述固定座之间形成腔体,以容纳所述线激光发射器、所述第一摄像头与所述模组控制器之间的连接线,以及容纳所述模组控制器和所述第二摄像头与所述主控制器之间的连接线。
  12. 根据权利要求4所述的模组,其特征在于,所述第一角度的角度范围为[11,12]度,所述第二角度的角度范围为[7.4,8.4]度。
  13. 根据权利要求1至12任一项所述的模组,其特征在于,所述线激光发射器的光学整形透镜为柱状镜或波浪镜。
  14. 根据权利要求13所述的模组,其特征在于,在所述线激光发射器的光学整形透镜为波浪镜时,在相对所述线激光发射器的光轴的角度范围为[-30,30]度内,所述线激光的光源强度最强。
  15. 根据权利要求13所述的模组,其特征在于,在所述线激光发射器的光学整形透镜为柱状镜时,在相对所述线激光发射器的光轴的角度范围为[-10,10]度内,所述线激光的光源强度最强。
  16. 根据权利要求1至12任一项所述的模组,其特征在于,所述线激光发射器的光轴与结构光模组基线的夹角范围为[50,60]度。
  17. 根据权利要求16所述的模组,其特征在于,所述线激光发射器的光轴与结构光模组基线的夹角为55.26度。
  18. 一种自移动设备,其特征在于,包括:设备本体,所述设备本体上设置有主控制器和结构光模组,所述主控制器与所述结构光模组电连接;
    其中,所述结构光模组包括:第一摄像头、分布于所述第一摄像头两侧的线激光发射器、第二摄像头和模组控制器;
    其中，所述模组控制器控制所述线激光发射器对外发射线激光，并控制所述第一摄像头在所述线激光发射器发射线激光期间采集由所述线激光探测到的第一环境图像，以及将第一环境图像发送给所述主控制器；所述主控制器控制所述第二摄像头采集其视场范围内的第二环境图像，并根据所述第一环境图像和所述第二环境图像对所述自移动设备进行功能控制；其中，所述第一环境图像是包含所述线激光在遇到物体后产生的激光条纹的激光图像，所述第二环境图像是不包含所述激光条纹的可见光图像。
  19. 一种作业方法,其特征在于,适用于带有结构光模组的自移动设备,所述方法包括:
    利用所述结构光模组中的结构光组件和视觉传感器分别采集前方作业区域中的结构光数据和图像数据;
    基于所述图像数据识别前方作业区域中存在的目标物体类别,选择与所述目标物体类别适配的目标机器行为模式;
    在所述结构光数据的辅助下,按照所述目标机器行为模式控制所述自移动设备针对前方作业区域中存在的目标物体执行作业任务。
  20. 根据权利要求19所述的方法,其特征在于,在选择与所述目标物体类别适配的目标机器行为模式之前,还包括:
    基于所述结构光数据识别前方作业区域中存在的目标物体轮廓;
    根据所述目标物体轮廓对所述目标物体类别进行修正。
  21. 根据权利要求19-20任一项所述的方法,其特征在于,在所述结构光数据的辅助下,按照所述目标机器行为模式控制所述自移动设备针对前方作业区域中存在的目标物体执行作业任务,包括:
    基于所述结构光数据,识别前方作业区域中存在的目标物体的位置信息和/或外形参数;
    根据所述目标物体的位置信息和/或外形参数，按照所述目标机器行为模式控制所述自移动设备针对所述目标物体执行作业任务。
  22. 根据权利要求21所述的方法,其特征在于,在所述目标物体类别为易卡困且不可通过的情况下,选择与所述目标物体类别适配的目标机器行为模式,包括:选择避障行为模式作为所述目标机器行为模式;
    相应地,根据所述目标物体的位置信息和/或外形参数,按照所述目标机器行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的位置信息和外形参数中的轮廓参数,按照所述避障行为模式控制所述自移动设备针对所述目标物体进行避障。
  23. 根据权利要求19-20任一项所述的方法,其特征在于,在所述目标物体类别为易卡困且可通过的情况下,选择与所述目标物体类别适配的目标机器行为模式,包括:选择加速通行行为模式作为所述目标机器行为模式;
    相应地,根据所述目标物体的位置信息和/或外形参数,按照所述目标机器行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的位置信息和外形参数中的镂空宽度和高度,按照所述加速通行行为模式控制所述自移动设备穿过所述目标物体的镂空区域以继续执行作业任务。
  24. 根据权利要求23所述的方法,其特征在于,所述加速通行行为模式包括:指示加速动作的第一指示参数和加速动作所需的第一执行参数,所述第一执行参数包括方向参数、距离参数和速度参数;
    相应地,基于所述目标物体的位置信息和外形参数中的镂空宽度和高度,按照所述加速通行模式控制所述自移动设备穿过所述目标物体的镂空区域以继续执行作业任务,包括:
    基于所述目标物体的位置信息和外形参数中的镂空宽度和高度,结合所述方向参数,调整所述自移动设备的朝向,以使所述自移动设备朝向所述目标物体的镂空区域;
    按照所述距离参数和速度参数,控制所述自移动设备沿着当前朝向加速直至穿过所述目标物体的镂空区域为止。
  25. 根据权利要求19-20任一项所述的方法,其特征在于,在所述目标物体类别为易缠绕的情况下,选择与所述目标物体类别适配的目标机器行为模式,包括:选择减速作业行为模式作为所述目标机器行为模式;
    相应地,根据所述目标物体的位置信息和/或外形参数,按照所述目标机器行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的外形参数中的轮廓边缘位置,按照所述减速作业行为模式控制所述自移动设备针对所述目标物体执行作业任务。
  26. 根据权利要求25所述的方法,其特征在于,所述减速作业行为模式包括:指示减速作业的第二指示参数和减速作业所需的第二执行参数,所述第二执行参数至少包括避障距离以及小于转速阈值的第一边刷转速;
    相应地,基于所述目标物体的外形参数中的轮廓边缘位置,按照所述减速作业行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的轮廓边缘位置，结合所述避障距离，控制所述自移动设备在距离所述目标物体大于所述避障距离的周围区域内执行作业任务；以及
    在所述自移动设备爬升至所述目标物体上方执行作业任务时，基于所述外形参数中的轮廓上边缘的高度信息，根据所述第一边刷转速控制所述自移动设备驱动其边刷在所述目标物体上方执行清洁任务。
  27. 根据权利要求19-20任一项所述的方法,其特征在于,在所述目标物体类别为易脏污的情况下,选择与所述目标物体类别适配的目标机器行为模式,包括:选择加强作业行为模式作为所述目标机器行为模式;
    相应地,根据所述目标物体的位置信息和/或外形参数,按照所述目标机器行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的外形参数中的轮廓边缘位置,按照所述加强作业行为模式控制所述自移动设备针对所述目标物体执行作业任务。
  28. 根据权利要求27所述的方法,其特征在于,所述加强作业行为模式包括:指示加强作业的第三指示参数和加强作业所需的第三执行参数,所述第三执行参数至少包括作业次数以及大于转速阈值的第二边刷转速;
    相应地,基于所述目标物体的外形参数中的轮廓边缘位置,按照所述加强作业行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的外形参数中的轮廓边缘位置,根据所述作业次数控制所述自移动设备在所述目标物体周围区域多次执行作业任务;以及
    在每次执行作业任务过程中,根据所述第二边刷转速,控制所述自移动设备驱动其边刷在所述目标物体周围区域执行清洁任务。
  29. 根据权利要求19-20任一项所述的方法,其特征在于,在所述目标物体类别为可移动的情况下,选择与所述目标物体类别适配的目标机器行为模式,包括:选择语音提示行为模式作为所述目标机器行为模式;
    相应地,根据所述目标物体的位置信息和/或外形参数,按照所述目标机器行为模式控制所述自移动设备针对所述目标物体执行作业任务,包括:
    基于所述目标物体的位置信息,按照所述语音提示行为模式控制所述自移动设备针对所述目标物体发出语音提示信息,以提示所述目标物体改变其状态;以及
    结合针对所述目标物体采集到的结构光数据,识别所述目标物体的最新状态,并在所述最新状态符合语音提示要求的情况下,继续控制所述自移动设备针对所述目标物体执行作业任务。
  30. 一种自移动设备,其特征在于,包括:设备本体,所述设备本体上设置有一个或多个存储器、一个或多个处理器以及结构光模组;所述结构光模组包括:结构光组件和视觉传感器;
    所述一个或多个存储器,用于存储计算机程序;所述一个或多个处理器,用于执行所述计算机程序,以用于:
    利用所述结构光模组中的结构光组件和视觉传感器分别采集前方作业区域中的结构光数据和图像数据;
    基于所述图像数据识别前方作业区域中存在的目标物体类别,选择与所述目标物体类别适配的目标机器行为模式;
    在所述结构光数据的辅助下,按照所述目标机器行为模式控制所述自移动设备针对前方作业区域中存在的目标物体执行作业任务。
  31. 一种存储有计算机程序的计算机可读存储介质,其特征在于,当所述计算机程序被处理器执行时,致使所述处理器实现权利要求19-29任一项所述方法中的步骤。
PCT/CN2022/105817 2021-08-17 2022-07-14 一种结构光模组及自移动设备 WO2023020174A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22857487.7A EP4385384A1 (en) 2021-08-17 2022-07-14 Structured light module and self-moving device
US18/442,785 US20240197130A1 (en) 2021-08-17 2024-02-15 Structured light module and self-moving device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110944997.6 2021-08-17
CN202110944998.0 2021-08-17
CN202110944998.0A CN113960562A (zh) 2021-08-17 2021-08-17 结构光模组及自移动设备
CN202110944997.6A CN113786125B (zh) 2021-08-17 2021-08-17 作业方法、自移动设备及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/442,785 Continuation US20240197130A1 (en) 2021-08-17 2024-02-15 Structured light module and self-moving device

Publications (1)

Publication Number Publication Date
WO2023020174A1 true WO2023020174A1 (zh) 2023-02-23

Family

ID=85239457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105817 WO2023020174A1 (zh) 2021-08-17 2022-07-14 一种结构光模组及自移动设备

Country Status (3)

Country Link
US (1) US20240197130A1 (zh)
EP (1) EP4385384A1 (zh)
WO (1) WO2023020174A1 (zh)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102551591A (zh) * 2010-11-24 2012-07-11 三星电子株式会社 保洁机器人及其控制方法
CN108553027A (zh) * 2018-01-04 2018-09-21 深圳悉罗机器人有限公司 移动机器人
CN110974088A (zh) * 2019-11-29 2020-04-10 深圳市杉川机器人有限公司 扫地机器人控制方法、扫地机器人及存储介质
CN111083332A (zh) * 2019-12-30 2020-04-28 科沃斯机器人股份有限公司 结构光模组、自主移动设备以及光源区分方法
CN111609852A (zh) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 语义地图构建方法、扫地机器人及电子设备
CN112066994A (zh) * 2020-09-28 2020-12-11 河海大学常州校区 一种消防机器人局部自主导航方法及系统
CN212932957U (zh) * 2020-06-09 2021-04-09 深圳市视晶无线技术有限公司 激光探测装置
CN112902958A (zh) * 2019-11-19 2021-06-04 珠海市一微半导体有限公司 一种基于激光视觉信息避障导航的移动机器人
CN113055621A (zh) * 2021-03-11 2021-06-29 维沃移动通信有限公司 摄像头模组和电子设备
CN113786125A (zh) * 2021-08-17 2021-12-14 科沃斯机器人股份有限公司 作业方法、自移动设备及存储介质
CN113960562A (zh) * 2021-08-17 2022-01-21 科沃斯机器人股份有限公司 结构光模组及自移动设备

Also Published As

Publication number Publication date
EP4385384A1 (en) 2024-06-19
US20240197130A1 (en) 2024-06-20

Similar Documents

Publication Publication Date Title
CN108247647B (zh) 一种清洁机器人
CN112415998B (zh) 一种基于tof摄像头的障碍物分类避障控制系统
CN109947109B (zh) 机器人工作区域地图构建方法、装置、机器人和介质
WO2021026831A1 (zh) 移动机器人及其控制方法和控制系统
CN111142526B (zh) 越障与作业方法、设备及存储介质
JP5803043B2 (ja) 移動式ロボットシステム及び移動式ロボットを作動させる方法
JP5946147B2 (ja) 可動式ヒューマンインターフェースロボット
WO2020140271A1 (zh) 移动机器人的控制方法、装置、移动机器人及存储介质
US20190351558A1 (en) Airport robot and operation method therefor
US20150073646A1 (en) Mobile Human Interface Robot
US20230409040A1 (en) Method for Controlling of Obstacle Avoidance according to Classification of Obstacle based on TOF camera and Cleaning Robot
JP5318623B2 (ja) 遠隔操作装置および遠隔操作プログラム
CN111123278B (zh) 分区方法、设备及存储介质
WO2011146259A2 (en) Mobile human interface robot
US10638899B2 (en) Cleaner
WO2020038155A1 (zh) 自主移动设备、控制方法及存储介质
CN113786125B (zh) 作业方法、自移动设备及存储介质
WO2023020174A1 (zh) 一种结构光模组及自移动设备
JP5552710B2 (ja) ロボットの移動制御システム、ロボットの移動制御プログラムおよびロボットの移動制御方法
US20210030234A1 (en) Mobile robot
CN215897823U (zh) 结构光模组及自移动设备
CN113960562A (zh) 结构光模组及自移动设备
CN113741441A (zh) 作业方法及自移动设备
TWI824503B (zh) 自移動設備及其控制方法
US20240197135A1 (en) Detection method for autonomous mobile device, autonomous mobile device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22857487

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022857487

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022857487

Country of ref document: EP

Effective date: 20240315