WO2022120713A1 - Intelligent lawn mower and intelligent lawn mowing system - Google Patents

Intelligent lawn mower and intelligent lawn mowing system

Info

Publication number
WO2022120713A1
WO2022120713A1 (PCT/CN2020/135252; CN2020135252W)
Authority
WO
WIPO (PCT)
Prior art keywords
lawn mower
intelligent
camera
smart
main body
Prior art date
Application number
PCT/CN2020/135252
Other languages
English (en)
French (fr)
Inventor
陈伟鹏
杨德中
Original Assignee
南京泉峰科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京泉峰科技有限公司 filed Critical 南京泉峰科技有限公司
Priority to CA3200096A priority Critical patent/CA3200096A1/en
Priority to EP20964651.2A priority patent/EP4224268A4/en
Priority to CN202080054020.0A priority patent/CN114945882A/zh
Priority to PCT/CN2020/135252 priority patent/WO2022120713A1/zh
Publication of WO2022120713A1 publication Critical patent/WO2022120713A1/zh
Priority to US18/301,774 priority patent/US20230259138A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • AHUMAN NECESSITIES
    • A01AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01DHARVESTING; MOWING
    • A01D34/00Mowers; Mowing apparatus of harvesters
    • A01D34/006Control or measuring arrangements
    • A01D34/008Control or measuring arrangements for automated or remotely controlled operation
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/027Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising intertial navigation means, e.g. azimuth detector
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Definitions

  • the present application relates to a lawn mower and a lawn mowing system, in particular, an intelligent lawn mower and an intelligent lawn mowing system.
  • The navigation and positioning of existing intelligent lawn mowers generally use GPS of ordinary positioning accuracy for sub-region identification, together with a boundary-line signal and an inertial measurement unit (IMU) for position estimation. This solution usually has low positioning accuracy and cannot achieve real-time positioning and navigation, so efficient path planning and complete area coverage are difficult to obtain.
  • The main purpose of the present application is to provide an intelligent lawn mower with higher positioning accuracy and a deeper understanding of the surrounding environment.
  • An intelligent lawn mower comprising: a camera for collecting image data of the surrounding environment of the intelligent lawn mower; an inertial measurement unit for detecting pose data of the intelligent lawn mower; a memory at least for storing an application program that controls the work or walking of the intelligent lawn mower; and a processor for calling the application program, fusing the image data collected by the camera with the pose data obtained by the inertial measurement unit, performing real-time positioning and map construction for the intelligent lawn mower, and generating navigation and mowing action commands.
  • the smart lawn mower further includes a main body, and the camera is mounted on the main body.
  • the camera is mounted on the front side of the main body.
  • The application program can distinguish grass from non-grass according to the feature points of the two-dimensional plane in the image data and the texture features of grass, take the boundary between grass and non-grass as discrete anchor points, and automatically generate the mowing area boundary through visual-inertial fusion real-time positioning and map construction.
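As a purely illustrative sketch (not part of the claimed subject matter) of how such a texture-based grass/non-grass decision and boundary anchor extraction could be implemented — the green-ratio and variance thresholds below are assumptions, not values taken from this application:

```python
def classify_cell(pixels, green_ratio_min=0.4, variance_min=120.0):
    """Classify an image cell as grass (True) / non-grass (False).

    Heuristic sketch: grass tends to be predominantly green and highly
    textured (high intensity variance). Both thresholds are illustrative
    assumptions, not values from the application.
    """
    n = len(pixels)
    green = sum(1 for r, g, b in pixels if g > r and g > b)
    intensities = [(r + g + b) / 3 for r, g, b in pixels]
    mean = sum(intensities) / n
    variance = sum((i - mean) ** 2 for i in intensities) / n
    return green / n >= green_ratio_min and variance >= variance_min


def boundary_anchor_points(row_labels):
    """Return the indices where grass/non-grass transitions occur along a
    row of cell labels, usable as discrete anchor points for the mowing
    area boundary."""
    return [i for i in range(1, len(row_labels))
            if row_labels[i] != row_labels[i - 1]]
```

The anchor points produced this way would then be registered in the SLAM map by the visual-inertial pipeline.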
  • The intelligent lawn mower also includes a cutting blade, and the application program can distinguish grass from non-grass according to the feature points of the two-dimensional plane in the image data and the texture characteristics of grass; when the current working plane is not grass, the rotation of the cutting blade is stopped.
  • The application program can judge the type of the current working plane according to the feature points of the two-dimensional plane in the image data and the texture features of common ground types preset in the application program; when the current working plane contains multiple ground types, the intelligent lawn mower is controlled to drive onto the ground with the greatest hardness among them.
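A minimal sketch of the "drive toward the harder ground" selection described above; the hardness ranking is an illustrative assumption, since the application does not enumerate concrete values:

```python
# Illustrative hardness ranking (assumed values, not from the application).
GROUND_HARDNESS = {"mud": 1, "sand": 2, "grass": 3, "gravel": 4, "pavement": 5}


def choose_drive_surface(detected_types):
    """When several ground types appear in the current working plane,
    steer toward the hardest one (e.g. to avoid getting stuck)."""
    return max(detected_types, key=lambda t: GROUND_HARDNESS[t])
```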
  • the application program further includes an object recognition program, and the application program can select a corresponding obstacle avoidance strategy according to the obstacle category recognized by the object recognition program.
  • the smart lawn mower further includes a global positioning system sensor, and the application uses the positioning result of the global positioning system sensor to filter and correct the result of the visual-inertial fusion real-time positioning and map construction.
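The GPS-based filtering and correction of the visual-inertial result could, for illustration, take the form of a scalar Kalman-style update; this is a sketch under simplifying assumptions (one dimension, known variances), not the application's actual filter:

```python
def fuse(vio_pos, vio_var, gps_pos, gps_var):
    """One scalar Kalman-style update: correct the visual-inertial (VIO)
    position estimate with a GPS fix, weighting by variances.

    A minimal sketch; a real SLAM back-end corrects full poses and
    accumulated drift, not a single coordinate.
    """
    k = vio_var / (vio_var + gps_var)        # Kalman gain
    fused_pos = vio_pos + k * (gps_pos - vio_pos)
    fused_var = (1 - k) * vio_var
    return fused_pos, fused_var
```

With equal variances the fused position is simply the midpoint, and the variance is halved, reflecting the added information.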
  • The smart lawn mower further includes a light; the application program can obtain the light intensity of the current environment from the image data and turns on the light when the light intensity is lower than a first light intensity threshold.
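A hypothetical sketch of that light-intensity check: mean image brightness is compared against a first light-intensity threshold (the 0-255 grayscale and the threshold value are assumptions):

```python
FIRST_LIGHT_THRESHOLD = 60  # assumed value on an assumed 0-255 scale


def mean_brightness(gray_pixels):
    """Mean grayscale intensity of the current frame."""
    return sum(gray_pixels) / len(gray_pixels)


def headlight_should_be_on(gray_pixels, threshold=FIRST_LIGHT_THRESHOLD):
    """Turn the lamp on when scene brightness drops below the first
    light-intensity threshold."""
    return mean_brightness(gray_pixels) < threshold
```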
  • An intelligent lawnmower comprising: a main body; a camera for collecting image data of the surrounding environment of the intelligent lawnmower; a support rod for supporting the camera; an inertial measurement unit for detecting pose data of the intelligent lawnmower;
  • the memory is used to store at least the application program that controls the work or walking of the smart lawn mower;
  • The processor is used to call the application program, fuse the image data collected by the camera with the pose data obtained by the inertial measurement unit, perform real-time positioning and map construction for the intelligent lawn mower, and generate navigation and mowing action instructions.
  • the support rod is arranged on the upper surface of the main body.
  • the support rod is retractable and includes a first state with a length of a first length and a second state with a length of a second length, and the second length is greater than the first length.
  • The smart lawn mower further includes an accommodating cavity disposed in the middle of the main body for accommodating the support rod and the camera; when the support rod is in the first state, the camera and the entire support rod are located inside the accommodating cavity, and when the support rod is in the second state, the camera and part of the support rod are located outside the accommodating cavity.
  • The top of the accommodating cavity is provided with a waterproof and dustproof cover plate, which has a closed state and an open state; when the support rod is at the first length, the cover plate is in the closed state, and when the support rod is at the second length, the cover plate is in the open state.
  • the cover plate is hingedly connected to the edge of the top of the receiving cavity.
  • the cover plate slides relative to the accommodating cavity.
  • A groove for accommodating the support rod is formed on the upper surface of the main body; the support rod is fixed to the upper surface of the main body through a damping shaft device and has a first state, in which it lies in the groove on the upper surface of the main body, and a second state, in which it is substantially perpendicular to the upper surface of the main body.
  • An intelligent lawn mowing system comprising: an intelligent lawn mower, at least including a camera for collecting image data of the surrounding environment of the intelligent lawn mower and an inertial measurement unit for detecting pose data of the intelligent lawn mower; an interactive display interface; a memory at least used to store an application program for controlling the work or walking of the smart lawn mower; and a processor configured to call the application program, fuse the image data collected by the camera with the pose data obtained by the inertial measurement unit, perform real-time positioning and map construction of the smart lawn mower, and generate navigation and mowing action instructions.
  • the interactive display interface is located on the smart lawn mower.
  • the intelligent lawn mowing system further includes: a mobile terminal, and the interactive display interface is located on the mobile terminal.
  • the memory and processor are located on the smart lawn mower.
  • the smart lawn mowing system further includes: a mobile terminal, where the memory and the processor are located.
  • the user can view the real-time image collected by the camera through the interactive display interface, and superimpose a virtual fence on the real-time image, and the application program adds the anchor point of the virtual fence to the anchor point set of the boundary of the mowing area.
  • The user can view the real-time image captured by the camera through the interactive display interface and superimpose a virtual obstacle on the real-time image; the application program records the anchor points of the virtual obstacle and plans a path that avoids it.
  • An intelligent lawn mowing system includes an intelligent lawn mower and a camera set in a work scene.
  • the camera includes a wireless communication device for wirelessly connecting with the smart lawn mower.
  • The intelligent lawn mower includes: a cutting blade for cutting grass; a main body for supporting the cutting blade; at least one wheel, which is rotatable and supports the main body; a wireless communication device for wirelessly connecting with a camera; a memory at least for storing an application program that controls the work or walking of the smart lawn mower; and a processor arranged to invoke said application program for navigation and mowing control.
  • the camera is arranged on the roof.
  • the intelligent lawn mowing system further includes a charging pile, and the camera is arranged on the top of the charging pile.
  • The camera obtains image data of the working scene and sends the image data to the intelligent lawn mower through a wireless communication device.
  • The application program performs target tracking calculation using the image data obtained by the camera to obtain a current position estimate of the intelligent lawn mower, and then generates navigation and mowing action instructions according to the current position estimate.
  • A plurality of the cameras obtain image data of the working scene from different perspectives, first calculate the current position estimate of the smart lawn mower through distributed target tracking, and then send the position estimate to the smart lawn mower.
  • The intelligent lawn mowing system further includes a cloud server; each of the plurality of cameras uploads its image data of the working scene to the cloud server through a wireless communication device, the cloud server calculates the current position estimate of the smart lawn mower through a multi-view target tracking algorithm, and the smart lawn mower obtains the current position estimate from the cloud server through the wireless communication device.
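For illustration only, one simple way to combine per-camera position estimates of the mower — whether produced by distributed tracking or by a cloud server — is a confidence-weighted average; the (x, y, confidence) representation is an assumption, not the application's specified algorithm:

```python
def fuse_camera_estimates(estimates):
    """Fuse per-camera (x, y, confidence) position estimates of the
    mower into one confidence-weighted position.

    A sketch of the multi-view fusion step; the actual multi-view
    target tracking algorithm is not specified in the application.
    """
    total_w = sum(w for _, _, w in estimates)
    x = sum(x * w for x, _, w in estimates) / total_w
    y = sum(y * w for _, y, w in estimates) / total_w
    return x, y
```

A camera with a poor viewing angle would simply report a low confidence and contribute little to the fused position.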
  • An intelligent walking tool system comprising: an intelligent walking device; a camera for acquiring image data of the surrounding environment of the intelligent walking device; an inertial measurement unit for detecting pose data of the intelligent walking device; a memory at least used to store an application program that controls the work or walking of the intelligent walking device; and a processor used to fuse the image data collected by the camera with the pose data obtained by the inertial measurement unit, perform real-time positioning and map construction for the intelligent walking device, and generate navigation and work commands.
  • the intelligent walking tool system further includes a mobile terminal, and the memory is located in the mobile terminal.
  • the intelligent walking tool system further includes a mobile terminal, and the processor is located in the mobile terminal.
  • the intelligent walking tool system further includes a mobile terminal, and the camera is located on the mobile terminal.
  • the intelligent walking tool system further includes a mobile terminal, and the inertial measurement unit is located in the mobile terminal.
  • the intelligent walking device further includes a main body, and the camera is arranged on the main body of the intelligent walking device.
  • the intelligent walking device further includes a main body, and the inertial measurement unit is arranged in the main body of the intelligent walking device.
  • the intelligent walking device further includes a main body, and the processor is arranged in the main body of the intelligent walking device.
  • the intelligent walking device further includes a main body, and the controller is arranged in the main body of the intelligent walking device.
  • the intelligent walking device further includes a main body, and the camera can move up and down relative to the main body.
  • the intelligent walking device further includes a support rod for supporting the camera.
  • the support rod is retractable and has a first state with a length of a first length and a second state with a length of a second length, and the second length is greater than the first length.
  • the intelligent walking device further includes an accommodating cavity, which is arranged on the main body and is used for accommodating the support rod and the camera.
  • The intelligent walking device further includes an interactive display interface configured for the user to view the real-time image obtained by the camera and superimpose a virtual fence on the real-time image; the application program adds the anchor points of the virtual fence to the anchor point set of the working area boundary.
  • The intelligent walking device further includes an interactive display interface configured for the user to view the real-time image obtained by the camera and superimpose a virtual obstacle on the real-time image; the application program records the anchor points of the virtual obstacle and plans a path that avoids it.
  • The application program can judge the type of the current working plane according to the feature points of the two-dimensional plane in the image data and the texture features of common ground types preset in the application program; when the current working plane contains multiple ground types, the intelligent lawn mower is controlled to drive onto the ground with the greatest hardness among them.
  • the application program further includes an object recognition program, and the application program can select a corresponding obstacle avoidance strategy according to the obstacle category recognized by the object recognition program.
  • the intelligent walking device further includes a global positioning system sensor, and the application uses the positioning result of the global positioning system sensor to filter and correct the result of the visual-inertial fusion real-time positioning and map construction.
  • The benefit of the present application is that, by fusing visual and inertial sensors, higher-precision positioning is obtained on the one hand, and a deep understanding of the environment is obtained on the other, giving the intelligent lawn mower capability in navigation and obstacle avoidance.
  • FIG. 1 is a side view of an intelligent lawn mower according to an embodiment of the present application.
  • FIG. 2 is a side view of an intelligent lawn mower according to an embodiment of the present application.
  • FIG. 3A is a perspective view of the telescopic bracket of the camera of the smart lawn mower shown in FIG. 2;
  • FIG. 3B is a cross-sectional view of the telescopic bracket of the camera of the smart lawn mower shown in FIG. 3A;
  • FIG. 3C is a cross-sectional view of the telescopic bracket of the camera of the smart lawn mower shown in FIG. 3A during telescopic transformation;
  • FIG. 4A is a side view of the smart lawn mower in a non-working state according to an embodiment of the present application;
  • FIG. 4B is a side view of the smart lawn mower shown in FIG. 4A in a working state;
  • FIG. 5A is a side view of the smart lawn mower in a non-working state according to an embodiment of the present application;
  • FIG. 5B is a side view of the smart lawn mower shown in FIG. 5A in a working state;
  • FIG. 6 is a schematic diagram of the inertial measurement unit of the intelligent lawn mower shown in FIG. 1;
  • FIG. 7 is a schematic diagram of a dual inertial measurement unit of an intelligent lawn mower according to an embodiment of the present application.
  • FIG. 8 is a system schematic diagram of an intelligent lawn mower according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of SLAM (real-time positioning and map building) according to an embodiment of the present application.
  • FIG. 10 is a flowchart of a sensor fusion algorithm according to an embodiment of the present application.
  • FIG. 11A is a display interface in a boundary recognition mode according to an embodiment of the present application.
  • FIG. 11B is a display interface in another boundary recognition mode according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a road surface identification and selection function according to an embodiment of the present application.
  • FIG. 13A is a schematic diagram of an obstacle identification function according to an embodiment of the present application.
  • FIG. 13B is another schematic diagram of an obstacle identification function according to an embodiment of the present application.
  • FIG. 15 is a display interface when setting a virtual obstacle according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a smart lawn mower and a camera set in a scene according to another embodiment of the present application;
  • FIG. 17A is a data transmission architecture diagram of the smart lawn mower shown in FIG. 16 and a camera set in the scene;
  • FIG. 17B is another data transmission architecture diagram of the smart lawn mower shown in FIG. 16 and the camera set in the scene;
  • FIG. 17C is a data transmission architecture diagram of the smart lawn mower shown in FIG. 16, a camera set in the scene, and a cloud server;
  • FIG. 18 is a side view of an intelligent lawn mowing system according to another embodiment of the present application.
  • FIG. 19A is a side view of the fixture of the smart lawn mower shown in FIG. 18;
  • FIG. 19B is a side view of the clamp of the fixture of the smart lawn mower shown in FIG. 19A when retracted;
  • FIG. 19C is a side view of the clamp of the fixture of the smart lawn mower shown in FIG. 19A when extended;
  • FIG. 20 is a side view of a smart lawn mower in the smart lawn mowing system according to another embodiment of the present application.
  • FIG. 21A is a schematic diagram of an inertial measurement unit of a mobile terminal in a smart lawn mowing system according to another embodiment of the present application.
  • FIG. 21B is a schematic diagram of a camera of a mobile terminal in a smart lawn mowing system according to another embodiment of the present application.
  • FIG. 21C is a schematic diagram of an interface of a mobile terminal in a smart lawn mowing system according to another embodiment of the present application.
  • FIG. 22A is a first data transmission architecture diagram of the smart lawn mowing system according to another embodiment of the present application.
  • FIG. 22B is a second data transmission architecture diagram of the smart lawn mowing system according to another embodiment of the present application.
  • FIG. 22C is a third data transmission architecture diagram of the smart lawn mowing system according to another embodiment of the present application.
  • FIG. 22D is a fourth data transmission architecture diagram of the smart lawn mowing system according to another embodiment of the present application.
  • FIG. 22E is a fifth data transmission architecture diagram of the smart lawn mowing system according to another embodiment of the present application.
  • An intelligent lawn mower 110 comprising: a cutting blade 112 for cutting grass; a main body 113 on which the cutting blade 112 is installed; wheels 114, which rotate and support the main body 113; an illumination lamp 119 for lighting; a camera assembly 120 for collecting image information of the surrounding environment of the lawn mower; an inertial measurement unit (IMU) 122 for collecting pose information of the lawn mower; a processor (not shown in FIG. 1), electrically connected with the camera assembly 120 and the inertial measurement unit 122, for calculating and processing the information collected by the camera assembly 120 and the inertial measurement unit 122; and a memory (not shown in FIG. 1).
  • The processor can call the control program 145 to fuse the image information of the lawn mower's surroundings collected by the camera assembly 120 with the pose data collected by the inertial measurement unit 122, realizing simultaneous localization and mapping (SLAM) for the lawn mower, and generate corresponding navigation and mowing instructions according to preset logic and real-time data to control the behavior of the smart lawn mower 110.
  • the camera assembly 120 may be installed on the front of the smart lawn mower 110 , see FIG. 1 .
  • The camera assembly 120 installed at the front of the lawn mower 110 can better capture image information of the environment ahead of the smart lawn mower 110; compared with images of the sides and rear of the lawn mower, images of the area in front have more reference value for navigation and obstacle avoidance.
  • the camera assembly 120 can also be installed on the front and upper part of the lawn mower through the bracket 123, as shown in FIG. 2 . By lifting the bracket 123, the vertical distance between the camera assembly 120 and the ground increases, so that the field of view of the camera assembly 120 is increased and the line of sight is less likely to be blocked by near-ground obstacles such as weeds.
  • The bracket 123 is a retractable device.
  • The bracket 123 shown in FIGS. 3A-3C is a pin-type telescopic sleeve that uses pins 392.
  • The tubular body of the telescopic sleeve includes two hollow tubes, one inside the other, and the wires of the camera assembly 120 pass through the cavity between the two tubes.
  • the outer tube 394 has a plurality of holes 395 arranged in sequence along the longitudinal direction of the outer tube 394 .
  • One end of the elastic piece 393 is connected to the bottom of the pin 392 and constantly exerts an outward force on it, so that the head of the pin 392 protrudes through the hole of the inner tube 391 whenever no external force pushes it in.
  • The outer tube 394 is sleeved on the inner tube 391; when one of the sequentially arranged holes 395 in the outer tube 394 is aligned with the hole in the inner tube 391 and no external force pushes the pin in, the head of the pin 392 passes through both holes in turn.
  • the outer tube 394 is fixed relative to the inner tube 391 by means of a latch.
  • The length of the bracket 123 is adjusted by changing the position of the outer tube 394 of the telescopic sleeve relative to the inner tube 391: first, overcoming the force of the elastic piece 393, the head of the pin 392 is pressed into the inner tube 391, and the outer tube 394 is then slid along the inner tube 391 to the desired position.
  • the pin 392 fixes the outer tube 394 in its new position relative to the inner tube 391 .
  • the retractable bracket 123 makes the position adjustment of the camera assembly 120 more convenient, and at the same time enhances the protection of the camera assembly 120 and prolongs its working life.
  • The bracket 123 can also be extended and retracted through other structures, or the retractable structure may be not purely mechanical but electromechanical and electrically connected to the processor of the smart lawn mower 110; the processor can then autonomously adjust the length of the bracket 123, and thus the height of the camera assembly 120, based on the image information collected by the camera assembly 120.
  • The present application does not limit the specific implementation; any design in which the bracket 123 of the camera assembly 120 can be extended and retracted falls within the protection scope of the present application.
  • the main body 113 of the smart lawn mower 110 may be provided with an inwardly recessed accommodating cavity 115, see FIGS. 4A-4B.
  • the top opening of the accommodating cavity 115 is located on the upper surface of the main body 113 of the lawn mower, the bracket 123 is fixed in the accommodating cavity 115 by fastening mechanisms such as screws and nuts, and there is a cover plate 118 on the top of the accommodating cavity 115, and the cover plate 118 can be opened and closed.
  • the cover plate 118 is hinged on one side of the top opening of the receiving cavity 115, including a first position when opened (FIG. 4B) and a second position when closed (FIG. 4A).
  • the cover plate 118 is composed of a sliding cover and sliding cover guide rails that can slide back and forth, including a first position covering the top opening of the accommodating cavity 115 and a second position exposing the opening of the accommodating cavity 115 .
  • The advantage of matching the accommodating cavity 115 and the cover 118 with the retractable bracket 123 is that, when the smart lawn mower 110 is not in use, the bracket 123 is shortened and the cover 118 is closed, so that the camera assembly 120 is hidden and stored in the main body 113 of the lawn mower; on the one hand this is neat and attractive, and on the other hand it is waterproof, dustproof, and lightproof, reducing the cleaning frequency of the camera and delaying its aging.
  • the cover plate 118 is opened and the bracket 123 is elongated, so that the camera assembly 120 extends out of the accommodating cavity 115 of the smart lawn mower 110 to capture images around the smart lawn mower 110 .
  • The specific forms of the accommodating cavity 115 and the cover plate 118 are not limited in this application. In addition, the specific position of the accommodating cavity 115 can be chosen according to the positions of the motors, PCB boards, and other devices of the intelligent lawn mower 110, so as to facilitate collecting image information around the smart lawn mower 110 while minimizing the influence on the arrangement of the components inside the main body 113; this is not limited in this application, and FIGS. 4A-4B are only exemplary illustrations.
  • The bracket 123 can also be given a foldable configuration; referring to FIGS. 5A-5B, a groove 117 that can accommodate the bracket 123 and the camera assembly 120 is provided on the upper surface of the main body 113 of the smart lawn mower 110.
  • The bracket 123 is hinged to a point on the top surface of the main body 113 of the smart lawn mower 110, so that the bracket 123 can rotate about the hinge point when moved by hand against a certain friction force.
  • When not in use, the bracket 123 is rotated about the hinge point until it lies flat and is stored in the groove 117 on the top surface of the main body 113 of the intelligent lawn mower 110, as shown in FIG. 5A; this reduces the space occupied by the lawn mower 110, enhances the protection of the camera, and prolongs its working life.
  • When in use, the bracket 123 is stood up as shown in FIG. 5B, and its standing angle is adjusted as required.
  • A rotatable connection mechanism such as a damping shaft structure or a ball structure can be used, so that the user can freely adjust the camera assembly 120 as required before turning on the smart lawn mower 110.
  • Alternatively, the rotatable connection mechanism is not purely mechanical but electromechanical and is electrically connected to the processor of the smart lawn mower 110; the processor can then autonomously adjust the angle of the camera assembly 120 according to the image information it collects. It should be noted that the telescopic, folding, and rotating designs of the bracket 123 of the camera assembly 120 above are all examples and are not limited to the specific implementations given; the protection scope of the present application should not be limited on the basis of these examples.
  • Camera assembly 120 may include a single camera or dual (multiple) cameras.
  • in terms of ranging principle, a monocular camera works quite differently from a binocular (multi-camera) assembly.
  • a binocular (multi-camera) assembly works like human eyes: distance is determined mainly by disparity calculation on the two (or more) images respectively collected by the cameras at the same moment. Therefore, a binocular (multi-)camera can perform depth estimation without relying on other sensing devices even when stationary, but its depth range and accuracy are limited by the binocular baseline (the distance between the optical centers of the two cameras) and the resolution.
  • however, disparity calculation is quite resource-intensive, and binocular systems have the disadvantages of complex configuration, large computation, and high energy consumption.
  • the image frame collected by a monocular camera is a two-dimensional projection of three-dimensional space, which loses the depth information of the environment; only when the camera moves can distance be calculated from the parallax formed by the motion of objects across the images.
  • This shortcoming can be alleviated to a certain extent by fusing the pose data collected by the inertial measurement unit.
  • a monocular visual-inertial system algorithm such as VINS-Mono, thanks to its low cost, small size, and low power consumption, is widely used in positioning-dependent devices such as robots and drones.
  • VINS-Mono can calculate the movement and rotation of the camera itself from the offset of the feature points between consecutive frames captured by the camera, fused with the IMU data, and it is not subject to signal interference the way a GPS sensor is. Therefore, the specific number of cameras included in the camera assembly 120 is not strictly limited in this application.
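The binocular range limitation mentioned above (depth limited by baseline and resolution) follows directly from the standard pinhole relation Z = f * B / d. The sketch below is purely illustrative: the focal length, baseline, and disparity values are invented, not parameters of the smart lawn mower 110.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from binocular disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Invented numbers: 700 px focal length, 0.12 m baseline.  A 21 px
# disparity maps to about 4 m of depth; halving the disparity doubles
# the estimate, so distant objects (small disparity) are resolved
# coarsely, which is the range/accuracy limitation noted above.
print(round(stereo_depth(700.0, 0.12, 21.0), 6))  # 4.0
print(round(stereo_depth(700.0, 0.12, 10.5), 6))  # 8.0
```

One pixel of disparity error matters more the smaller the disparity is, which is why a longer baseline or higher resolution extends the usable depth range.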
  • the camera assembly 120 may also include a depth camera, also known as an RGB-D camera.
  • like a laser sensor, an RGB-D camera can use infrared structured light or the time-of-flight (ToF) principle to actively emit light toward objects and receive the returned light, measuring the distance between objects and the camera.
  • because RGB-D cameras obtain depth through physical measurement, they save a large amount of computation.
  • RGB-D cameras include Microsoft's Kinect, Intel's RealSense and so on.
  • however, depth cameras still have many problems, such as narrow measurement range, high noise, small field of view, susceptibility to sunlight interference, and inability to measure transmissive materials, so they are used more in indoor scenes than outdoor ones. Applying an RGB-D camera on the smart lawn mower 110 therefore requires fusion with other sensors, and it is best suited to conditions where sunlight is not strong.
  • the inertial measurement unit 122 includes at least an accelerometer and a gyroscope.
  • An accelerometer is a sensor used to measure linear acceleration. When a rigid body is at rest relative to the earth, its linear acceleration is 0, but because of gravity, an accelerometer on the rigid body will read about 9.81 m/s² on the axis pointing vertically down toward the center of the earth; similarly, when the accelerometer reading on a rigid body is 0 under gravity, the rigid body is in free fall and actually has an acceleration of 9.81 m/s² vertically downward.
  • the interior of a MEMS accelerometer is a spring-mass microstructure: a mass sits on the deformation axis of a micro-spring, and when acceleration occurs the micro-spring deforms. Measuring this deformation with microelectronics yields the acceleration along that axis. Because of this structure, a MEMS accelerometer cannot measure the actual acceleration of a rigid body directly; it can only give an acceleration measurement along its measurement axis. In actual use, three sets of MEMS measurement systems are usually combined into an orthogonal three-axis measurement system to solve for the actual acceleration.
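The orthogonal three-axis arrangement can be pictured with a few lines of code: each axis contributes one per-axis reading, and the actual acceleration is their vector combination. The sample readings are invented for illustration.

```python
import math

def actual_acceleration(ax: float, ay: float, az: float) -> float:
    """Magnitude of the acceleration solved from three orthogonal
    single-axis MEMS accelerometer readings."""
    return math.sqrt(ax * ax + ay * ay + az * az)

# At rest on level ground, only the vertical axis reads gravity:
print(actual_acceleration(0.0, 0.0, 9.81))  # 9.81
# In free fall all three axes read zero even though the rigid body
# accelerates at 9.81 m/s^2 downward, as described above:
print(actual_acceleration(0.0, 0.0, 0.0))   # 0.0
```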
  • a gyroscope is a sensor used to measure the rotational angular velocity of a rigid body.
  • similarly, a MEMS gyroscope can only measure the angular-velocity component about a single measurement axis, so it is likewise integrated and packaged as a three-axis gyroscope with three orthogonal measurement axes.
  • the rotation components measured on the three axes are then combined to synthesize the actual rotational angular velocity of the rigid body.
  • the rotation angle around the x-axis of the reference coordinate system is defined as the roll angle (roll), the rotation around the y-axis as the pitch angle (pitch), and the rotation around the z-axis as the yaw angle (yaw).
  • an inertial measurement unit 122 includes three single-axis accelerometers and three single-axis gyroscopes to measure the angular velocity and acceleration of an object in three-dimensional space, and thereby calculate the attitude of the object.
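As a minimal illustration of how three orthogonal angular-rate readings become attitude angles, the sketch below performs naive per-axis Euler integration. This is an assumption-laden toy (valid only for small angles; real attitude propagation uses rotation matrices or quaternions), and the sample rates are invented.

```python
def integrate_gyro(samples, dt):
    """Naive per-axis Euler integration of (wx, wy, wz) angular-rate
    samples [rad/s] into approximate (roll, pitch, yaw) angles.
    Assumes small angles so the axes can be treated independently."""
    roll = pitch = yaw = 0.0
    for wx, wy, wz in samples:
        roll += wx * dt
        pitch += wy * dt
        yaw += wz * dt
    return roll, pitch, yaw

# Ten invented samples of 0.1 rad/s about z, sampled at 100 Hz,
# accumulate about 0.01 rad of yaw:
roll, pitch, yaw = integrate_gyro([(0.0, 0.0, 0.1)] * 10, 0.01)
print(round(yaw, 6))  # 0.01
```

Note that any constant gyroscope bias would be integrated into a steadily growing angle error, one reason attitude from a gyroscope alone drifts over time.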
  • the inertial measurement unit 122 may also include a magnetometer.
  • a magnetometer, also called a geomagnetic or magnetic sensor, can be used to measure the strength and direction of the magnetic field and to determine the orientation of the device.
  • a six-axis or nine-axis sensor acts as an integrated sensor module, reducing circuit-board area and overall space.
  • the data accuracy of an integrated sensor also depends on calibration after soldering and assembly, as well as on matching algorithms for different applications.
  • the IMU sensor is preferably arranged at the center of gravity of the object; so optionally, the inertial measurement unit 122 may be arranged at the center of gravity G of the smart lawn mower 110, as shown in FIG. 6 . Due to the low cost of the inertial measurement unit 122, in an embodiment, dual inertial measurement units 122 may also be provided to improve the accuracy and stability of the IMU data, as shown in FIG. 7 .
  • the relative angular velocity and relative acceleration between the target and the motion reference frame can be obtained according to the difference between the outputs of the two inertial measurement units 122;
  • on the other hand, the system can monitor the state of each inertial measurement unit 122: when one inertial measurement unit 122 is abnormal, the system immediately switches to the other inertial measurement unit 122 to ensure the stability of positioning.
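A minimal sketch of the dual-IMU arrangement described above: average the two units while both are healthy, and fail over to the surviving unit when one is abnormal. The health flags and sample readings are assumptions for illustration, not part of the application.

```python
def fuse_dual_imu(reading_a, reading_b, healthy_a=True, healthy_b=True):
    """Average the two IMU readings while both units are healthy;
    switch to the surviving unit when one is abnormal."""
    if healthy_a and healthy_b:
        return [(a + b) / 2.0 for a, b in zip(reading_a, reading_b)]
    if healthy_a:
        return list(reading_a)
    if healthy_b:
        return list(reading_b)
    raise RuntimeError("both inertial measurement units are abnormal")

# Invented accelerometer readings [m/s^2]:
print(fuse_dual_imu([1.0, 0.0, 9.8], [3.0, 0.0, 9.8]))                  # [2.0, 0.0, 9.8]
print(fuse_dual_imu([1.0, 0.0, 9.8], [99.0, 99.0, 99.0], True, False))  # [1.0, 0.0, 9.8]
```

Differencing the two readings, rather than averaging them, gives the relative quantities between the target and the motion reference frame mentioned above.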
  • the system diagram of the smart lawn mower 110 is shown in FIG. 8 , including a power module 701 , a sensor module 702 , a control module 703 , a drive module 704 and an actuator 705 .
  • the power supply module 701 supplies power to the driving module 704 , the control module 703 and the sensor module 702 .
  • the power module 701 includes a battery pack to provide DC power.
  • the sensor module 702 includes at least the camera assembly 120 and the inertial measurement unit 122 .
  • the smart lawn mower 110 may also be equipped with other sensors, such as a GPS sensor, a collision sensor, a drop sensor, etc., and the information collected by the other sensors can also be comprehensively referenced during calculation processing.
  • the control module 703 includes: an input module 141 for receiving the various raw data collected or detected by the sensor module 702; a processor 142 for logic operations, which can be a CPU or a microcontroller with a relatively high data-processing speed; a memory 144 for storing various data and the control program 145; and an output module 143 for converting control instructions into motor drive commands and sending them to the drive controller 161 of the motor drive switch.
  • the drive module 704 includes a motor drive switch circuit 162 , a drive controller 161 and a motor 163 .
  • the motor drive switch circuit 162 shown in FIG. 8 uses the most common MOSFET switches, and the drive controller 161 controls the on-off of each MOSFET switch by applying a voltage to its gate.
  • the orderly on-off of the MOSFET switches results in the orderly conduction of the motor windings, thereby driving the motor 163 to rotate.
  • FIG. 8 only shows a common motor driving circuit, and the present disclosure does not limit the specific implementation of the motor driving circuit.
  • the rotation of the motor 163 in turn drives the actuator 705 directly or indirectly through a transmission mechanism.
  • the actuator 705 of the smart lawn mower 110 mainly includes a blade 112 and a wheel 114 , optionally, the blade 112 and the wheel 114 are driven by independent motors 163 respectively.
  • the left and right rear wheels 114 can also be driven by independent motors 163 respectively, so as to realize more flexible turning and attitude adjustment.
  • the control program 145 stored in the memory 144 is mainly composed of two modules, namely a positioning and mapping module 146 and a function application module 147 , wherein the positioning and mapping module 146 is the basis of the function application module 147 .
  • the positioning and mapping module 146 answers the basic questions of where the smart lawn mower 110 is, what the map is, and what the surrounding environment is, tracking the location of the smart lawn mower 110 as it moves and building an understanding of the real world, i.e., simultaneous localization and mapping (SLAM); on the basis of these solutions, the function application module 147 realizes specific functions such as boundary delineation of the mowing area, intelligent obstacle avoidance, road recognition and selection, combined navigation, and intelligent lighting. Of course, this division is mainly to facilitate understanding and elaboration; in a specific implementation, the positioning and mapping module 146 and the function application module 147 are not two completely separate parts. The operation of the function application module 147 itself also deepens the understanding of the real world, and its results are fed back to the positioning and mapping module 146 to continuously improve the map.
  • the camera assembly 120 and the inertial measurement unit 122 differ both in the type of data measured (vision measures the coordinates of objects projected onto the pixel plane, while the inertial measurement unit measures the three-dimensional acceleration and rotational angular velocity of the object) and in measurement rate (vision is limited by frame rate and image-processing speed, so the sampling rate of the camera reaches only dozens of frames per second, whereas the inertial measurement unit easily reaches sampling rates of hundreds or even thousands of frames per second). Converting the motion measured by the inertial measurement unit into object coordinates (deviation accumulates during integration) or converting the visual quantity into motion (the computed acceleration oscillates greatly because of positioning deviation during differentiation) introduces additional errors, so detection and optimization need to be introduced in the data-fusion process.
  • in practice, the motion quantity detected by the inertial measurement unit is usually integrated into object coordinates and then fused with the visual quantity.
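As a toy illustration of why pure integration accumulates deviation, consider dead-reckoning a single axis of IMU data. The sample values are invented; the point is that any constant sensor bias in the input would be integrated twice and grow quadratically in position, which is exactly what the fused visual measurements must correct.

```python
def imu_dead_reckoning(accels, dt):
    """Integrate one axis of (gravity-compensated) acceleration into
    velocity and position.  Any constant bias in `accels` is
    integrated twice, so position error grows quadratically over
    time -- the drift that visual measurements must correct."""
    v = x = 0.0
    for a in accels:
        v += a * dt
        x += v * dt
    return v, x

# Invented samples: a constant 1 m/s^2 for one second, sampled at 4 Hz.
v, x = imu_dead_reckoning([1.0] * 4, 0.25)
print(v, x)  # 1.0 0.625
```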
  • the key modules in the entire flowchart can be decomposed into the following parts: image and IMU data preprocessing, initialization, local optimization, mapping, key frame extraction, loop closure detection and global optimization.
  • the main functions of each module are:
  • Image and IMU data preprocessing: extract feature points from the image frames collected by the camera assembly 120 and track them with KLT pyramid optical flow, preparing for the subsequent visual initialization that solves the pose of the smart lawn mower 110.
  • Initialization: first perform vision-only initialization to solve the relative pose of the smart lawn mower 110; then align with the IMU pre-integration to solve the initialization parameters.
  • Local optimization: sliding-window visual-inertial local optimization, that is, placing the visual constraints and IMU constraints in one large objective function for nonlinear optimization; the local optimization here covers only the current frame and the previous n frames (for example, n = 4), and it outputs a relatively accurate pose of the smart lawn mower 110.
  • Mapping: using the obtained pose, the depth of the corresponding feature points is calculated by triangulation, and the map of the current environment is reconstructed synchronously.
  • the map refers to the set of all landmark points; once the locations of the landmark points are determined, the mapping can be said to be complete.
  • Key frame extraction: a key frame is a screened image frame that is worth recording while avoiding redundancy.
  • the selection criterion of a key frame is that the displacement between the current frame and the previous frame exceeds a certain threshold or the number of matching feature points is less than a certain threshold.
  • Loop closure detection: also known as closed-loop detection, this saves the key frames of previously detected images; when the smart lawn mower 110 returns to a place it has passed through, it can judge from the feature-point matching relationship whether it has been there before.
  • Global optimization: when a loop closure is detected, nonlinear optimization is performed using the visual constraints and IMU constraints plus the loop-closure constraints.
  • the global optimization is performed on the basis of the local optimization, outputs a more accurate pose of the smart lawn mower 110, and updates the map.
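The key-frame criterion stated above (displacement since the last key frame exceeds a threshold, or too few feature points still match) can be sketched as a simple predicate. The threshold numbers below are invented for illustration; the application does not specify them.

```python
def is_keyframe(displacement_m, matched_features,
                min_displacement=0.5, min_matches=50):
    """Record a key frame when the mower has moved far enough since the
    last key frame, or when feature tracking is degrading (too few
    matched points).  Threshold values are illustrative placeholders."""
    return displacement_m > min_displacement or matched_features < min_matches

print(is_keyframe(0.8, 120))  # True  (large displacement)
print(is_keyframe(0.1, 30))   # True  (feature tracking degrading)
print(is_keyframe(0.1, 120))  # False (redundant frame, skipped)
```

Keeping only such frames bounds the memory and computation of mapping and loop-closure detection while still covering the trajectory.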
  • the output pose is a 6-degrees-of-freedom (6DoF) pose, i.e., the three-dimensional translation of the smart lawn mower 110 along the x, y, and z directions plus its pitch/yaw/roll rotation.
  • in the fusion, the true scale of the trajectory of the smart lawn mower 110 can be estimated by aligning the pose sequence estimated by the IMU with the pose sequence estimated by vision; moreover, the IMU can well predict the pose of the next image frame and the positions in it of the feature points from the previous frame, which improves the matching speed of the feature-tracking algorithm and the algorithm's robustness to fast rotation.
  • the gravity vector provided by the accelerometer in the IMU can convert the estimated position into the world coordinate system required for actual navigation.
  • in this way, SLAM outputs a more accurate (centimeter-level) 6DoF pose, and it neither depends on the strength of satellite signals nor suffers interference from other electromagnetic signals.
  • compared with low-computation, low-power GPS positioning, however, SLAM has high energy consumption; moreover, because the smart lawn mower 110 works outdoors, the camera sensor needs frequent cleaning, otherwise the image frames become blurry and fail to provide valid visual data.
  • in addition, SLAM requires the intelligent lawn mower 110 to repeatedly observe the same area in order to realize closed-loop motion; system uncertainty therefore accumulates continuously until a loop closure occurs.
  • when the intelligent lawn mower 110 performs a large closed loop, the accumulated system uncertainty may cause loop-closure detection to fail, making the SLAM global optimization fail and producing a large positioning deviation.
  • in open outdoor environments, satellite signal interference is low and GPS positioning results are usually relatively stable and accurate; GPS is also widely used and inexpensive, so the smart lawn mower 110 can additionally be equipped with a GPS sensor and use GPS+SLAM combined navigation.
  • the combined positioning method composed of the three sensors, namely the camera assembly 120, the inertial measurement unit 122, and GPS, can be seen in Figure 10.
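One simple way to picture GPS+SLAM combined positioning is an inverse-variance weighted average of the two position estimates; a real combined-navigation system would run a Kalman-style filter, and all numbers below are invented for illustration.

```python
def fuse_position(slam_xy, slam_var, gps_xy, gps_var):
    """Inverse-variance weighted average of a SLAM position estimate
    and a GPS fix -- a one-step stand-in for the recursive filter a
    real GPS+SLAM combined-navigation system would use."""
    w_slam, w_gps = 1.0 / slam_var, 1.0 / gps_var
    total = w_slam + w_gps
    return tuple((w_slam * s + w_gps * g) / total
                 for s, g in zip(slam_xy, gps_xy))

# SLAM is locally accurate (0.01 m^2 variance), GPS coarse (1.0 m^2),
# so the fused estimate stays close to the SLAM one while the GPS
# term keeps long-term drift bounded:
x, y = fuse_position((2.0, 3.0), 0.01, (2.5, 3.5), 1.0)
print(round(x, 3), round(y, 3))  # 2.005 3.005
```

When SLAM degrades (blurred frames, failed loop closure), its variance grows and the same formula automatically shifts the weight toward GPS.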
  • ARCore is an augmented reality software platform launched by Google. Based on the fusion of image data and IMU data for simultaneous localization and mapping (SLAM), it provides three major functions to integrate virtual content with the real world seen through the camera: 1. Motion tracking: understand and track the device's position and attitude relative to the real world; 2. Environmental understanding: detect various surfaces (horizontal or vertical surfaces such as the ground, desktops, and walls) through feature-point clustering, and determine their boundaries, sizes, and positions; 3. Light estimation: estimate the current lighting conditions of the environment.
  • Besides Google's ARCore, Apple's ARKit and Huawei's AR Engine are also software packages that can provide similar functions.
  • the function application module 147 of the control program 145 of the intelligent lawn mower 110 can distinguish grass from non-grass according to the feature points of the two-dimensional planes in the image frames and the texture features of grass; if the current working surface of the lawn mower is not grass, the rotation of the blade 112 is stopped. Along the boundary between grass and non-grass, the mowing-area boundary is automatically generated in combination with the motion-tracking function of software packages such as ARCore. Further, the intelligent lawn mower 110 can also cooperate with an interactive display interface to display the constructed map and the mowing-area boundary, and allow the user to confirm and modify them. For the confirmation process, to help the user identify the boundary line more intuitively and carefully, two identification modes can be set.
  • One identification mode is to display the boundary line of the mowing area on a two-dimensional map on the interactive display interface; see FIG. 11A, where the lawn 222 lies between the house 223 and the road 224, and the boundary line 221 of the mowing area is indicated by a thick dashed line.
  • the user can manually adjust the boundary line 221 in the two-dimensional map on the interactive display interface, for example dragging a boundary line 221 up, down, left, or right, or deleting or adding a boundary line 221 (by drawing with a finger). If the user wishes, the user can also choose to enter this identification mode directly and draw all boundary lines 221 with a finger on the two-dimensional map on the interactive display interface.
  • The other identification mode is to superimpose a virtual fence 211 icon on the real-time image captured by the camera assembly 120 and displayed on the interactive display interface; see FIG. 11B. In this identification mode, the boundary line automatically generated by the smart lawn mower 110 is displayed in the form of the virtual fence 211 icon, and the user can manually adjust the position of the virtual fence 211 icon superimposed on the real image on the interactive display interface, for example pulling the virtual fence 211 closer or farther; the user can also delete it or add a new segment of virtual fence 211.
  • thanks to the motion-tracking function of software packages such as ARCore, the user can check the appropriateness of the virtual fence 211 from various angles while moving the camera assembly 120 and changing its viewing angle.
  • the virtual fence 211 icon superimposed on the real image is more intuitive and accurate, making it convenient for the user to determine the exact position of the virtual fence 211 (i.e., the boundary line) according to the specific ground conditions (e.g., terrain, vegetation type).
  • the user can also combine the two modes: first check on the 2D map whether the boundary lines as a whole are in line with expectations and adjust those that are not, and then, for boundaries that need special attention, check the virtual fence 211 icon superimposed on the real image and refine it if necessary.
  • after the boundary of the mowing area is confirmed by the user, the intelligent lawn mower 110 stores the confirmed boundary line (including the virtual fence 211) in the form of discrete anchor-point coordinates; the position of the boundary line (the discrete anchor points) does not change with the movement of the smart lawn mower 110. When the smart lawn mower 110 performs path planning, it is restricted to work within the boundary of the mowing area. It is worth noting that the interactive display interface may be a component on the smart lawn mower 110, an independent display device, or the interactive display interface of a mobile terminal, such as a mobile phone or tablet, that can interact with the smart lawn mower 110.
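The discrete anchor-point representation lends itself to a simple containment check during path planning. The sketch below is illustrative only: the ray-casting routine and the sample coordinates are assumptions, not part of the application.

```python
def inside_boundary(point, anchors):
    """Ray-casting test: is `point` inside the polygon formed by the
    confirmed boundary anchor points?  A path planner can reject any
    waypoint for which this returns False."""
    x, y = point
    inside = False
    n = len(anchors)
    for i in range(n):
        x1, y1 = anchors[i]
        x2, y2 = anchors[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Invented anchor points for a 10 m x 6 m rectangular lawn:
lawn = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
print(inside_boundary((5.0, 3.0), lawn))   # True
print(inside_boundary((12.0, 3.0), lawn))  # False
```

Because the anchors are fixed in the world frame, the same check keeps working as the mower moves, exactly as the text requires.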
  • the function application module of the control program 145 of the smart lawn mower 110 can identify the materials of different planes.
  • specifically, the intelligent lawn mower 110 can analyze the feature points of the two-dimensional planes in the image frames collected by the camera assembly 120 and, according to differences in plane texture (i.e., the distribution pattern of feature points) and the texture features of common plane types preset in the control program 145, identify different types of ground (including water).
  • when the smart lawn mower 110 travels across grounds of different materials, the differing hardness of the ground, and thus the differing supporting and friction forces on the wheels 114, can easily cause problems such as bumping, tilting, and deviation in direction. Therefore, when the smart lawn mower 110 travels on a non-lawn surface, for example on the way from one lawn to another, and recognizes that the front area 212 contains several grounds of different textures (i.e., different hardness), it chooses to travel on one of the harder surfaces. Referring to FIG.
  • the road-surface selection program of the control program 145 plans the path, controlling the smart lawn mower 110 to steer to the left until it detects that the area 128 ahead is entirely cement road, and then steering back to the original direction.
  • this road-surface selection is conducive to the travel control, machine maintenance, and safety of the smart lawn mower 110.
  • surfaces of different materials can be distinguished with the environment-understanding function of software packages such as ARCore, and texture features of common planes can also be introduced for comparison to help the intelligent lawn mower 110 judge the plane type.
  • after the plane type is determined, the ground with higher hardness is selected according to the ground type-hardness comparison table stored in the memory, and the traveling direction of the intelligent lawn mower 110 is controlled accordingly.
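The ground type-hardness lookup could look like the sketch below. The table entries and hardness scores are invented placeholders; the application states only that such a comparison table is stored in memory, not its contents.

```python
# Invented ground-type -> relative-hardness table (higher = firmer
# footing for the wheels 114); not values from the application.
HARDNESS = {"cement": 9, "gravel": 6, "soil": 4, "grass": 3, "sand": 2}

def choose_surface(detected_types):
    """Pick the hardest recognised surface in the area ahead, as the
    road-surface selection program is described as doing."""
    return max(detected_types, key=lambda t: HARDNESS.get(t, 0))

print(choose_surface(["soil", "cement", "sand"]))  # cement
print(choose_surface(["sand", "gravel"]))          # gravel
```

The steering logic would then bias the planned path toward the chosen surface until the area ahead consists entirely of it.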
  • with plane-type recognition, the smart lawn mower 110 can also identify water surfaces, steps, cliffs, and other terrain that may put the smart lawn mower 110 at risk of fall damage, which makes the automatic generation of the mowing-area boundary more complete.
  • the function application module of the control program 145 of the smart lawn mower 110 may further include an AI object-recognition program that computes the category information of obstacles from the image data obtained by the camera assembly 120, thereby realizing active intelligent obstacle avoidance by the smart lawn mower 110: different obstacle-avoidance strategies and appropriate avoidance distances are adopted for different types of obstacles, balancing mowing coverage against obstacle-avoidance success rate.
  • the object recognition program will output a category and its corresponding confidence probability (C:P), where the confidence probability P ranges from 0 to 1.
  • for such obstacles, the smart lawn mower 110 can ignore them and drive along the original path. Although animal excrement is likely to contaminate the blade 112 and the chassis of the intelligent lawn mower 110, as with soil, such contamination is cleaned up to some extent during frequent cutting, so there is no need to avoid it. If the detected obstacle is an animal, such as a person, bird, squirrel, or dog, a first threshold distance D1 and a second threshold distance D2 can be preset.
  • the smart lawn mower 110 can keep a certain distance when avoiding, in other words adopt a long-distance avoidance strategy, and issue a clean-up prompt to the user, prompting the user to remove small-volume items from the lawn.
  • optionally, while taking the avoidance action the smart lawn mower 110 can store the coordinates of the obstacle and of the avoided area; before mowing ends, if the image data collected by the camera assembly 120 show that the obstacle at those coordinates has been removed, it plans a return path and mows the previously avoided area.
  • for immovable bulky items, the smart lawn mower 110 can adopt a close-range avoidance strategy, that is, slow down and get as close as possible to the obstacle to maximize mowing coverage, for example driving around the obstacle at a distance of 0.1 m; or, when the smart lawn mower 110 is equipped with a collision sensor, since a slight collision at slow speed does not cause much damage to such bulky items, the closest possible avoidance can be achieved through the collision sensor.
  • optionally, the intelligent lawn mower 110 can store the actual avoidance path and optimize it when the processor 142 is idle, so that when avoiding the same obstacle next time, mowing coverage is maintained while the efficiency of the avoidance path is improved.
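The category-dependent strategies above can be pictured as a small dispatch function. The category groupings, the low-confidence fallback, and the distance values are assumptions for illustration; the text suggests only the D1/D2 thresholds, the 0.1 m close pass, and the confidence probability P in [0, 1].

```python
# Illustrative category groupings and distances (assumptions, not
# values fixed by the application).
IGNORABLE = {"animal_droppings"}
ANIMALS = {"person", "bird", "squirrel", "dog"}
D1 = 1.0     # placeholder long-avoidance distance [m]
CLOSE = 0.1  # close-range pass distance for bulky items [m]

def avoidance_action(category, confidence, min_confidence=0.6):
    """Map a recognised obstacle category (with confidence P in [0, 1])
    to an avoidance strategy.  The low-confidence fallback is an
    invented safety default, not from the application."""
    if confidence < min_confidence:
        return "slow_down_and_reobserve"
    if category in IGNORABLE:
        return "ignore"
    if category in ANIMALS:
        return f"keep_distance:{D1}"
    return f"close_pass:{CLOSE}"   # immovable bulky item

print(avoidance_action("dog", 0.9))              # keep_distance:1.0
print(avoidance_action("animal_droppings", 0.8)) # ignore
print(avoidance_action("dog_house", 0.95))       # close_pass:0.1
```

Coupling this with the stored avoidance coordinates lets the mower return later to mow any area it skirted around.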
  • the user can also manually superimpose virtual obstacles 215 on the real-time image captured by the camera assembly 120 and displayed on the interactive display interface, and adjust the position and size of each virtual obstacle 215, as shown in Figure 15.
  • the user can check the appropriateness of the virtual obstacle 215 from various angles while moving the camera assembly 120 and changing its viewing angle.
  • the position and size information of the virtual obstacle 215 will be recorded in the form of anchor points, and the virtual obstacle 215 will not change with the movement of the smart lawn mower 110 .
  • when the smart lawn mower 110 travels in the real work area, it can compare its current position with the position information of the virtual obstacle 215 in real time and take avoiding action so as not to "collide" with the virtual obstacle 215.
  • the virtual obstacle 215 function makes it convenient for the user to customize a special mowing range according to the specific situation. For example, suppose there is an unfenced flowerbed on the lawn that in some seasons looks like ordinary lawn; to prevent the lawn mower from accidentally entering the flowerbed while mowing, the user can add, on the image of the flowerbed captured in real time by the camera assembly 120 and displayed on the interactive display interface, a virtual obstacle 215 whose base area matches the actual flowerbed area.
  • the larger dog house will be automatically determined as an immovable bulky object by the control program 145 as described above, and a close-range obstacle avoidance strategy will be adopted to improve the mowing coverage.
  • if a larger clearance around the dog house is desired, the user can display on the interactive display interface the real-time image of the surroundings of the dog house captured by the camera assembly 120, and superimpose on it a virtual obstacle 215 or a virtual fence 211 to enclose a larger non-working area.
  • since ARCore tracks trackable objects such as planes and feature points over time, a virtual obstacle can also be anchored to a specific trackable object, ensuring that the relationship between them remains stable. For example, if the virtual obstacle 215 is anchored to the dog house, then when the dog house is later moved, the virtual obstacle 215 tracks its movement without requiring the user to reset the virtual obstacle.
  • the function application module of the control program 145 of the smart lawn mower 110 can detect the lighting state of the surrounding environment. With the light estimation function of software packages such as ARCore, the smart lawn mower 110 can know the light intensity L of the surrounding environment and adjust the lighting lamp 119 of the smart lawn mower 110 accordingly.
  • the control program 145 may preset a first light intensity threshold L1, and when the light intensity L of the surrounding environment is less than the first light intensity threshold L1, the intelligent lawn mower 110 turns on the lighting lamp 119 to fill in the light.
  • further, different working modes can be set, so that the mowing time is reasonably arranged and a suitable working mode is selected according to the light intensity and direction.
  • when the light intensity L of the surrounding environment is less than the second light intensity threshold L2 (L2 < L1): if the user does not command immediate mowing, the mower returns to the charging station and enters the charging mode or standby mode, because in the absence of light the lawn is most likely to be damaged by fungi and pests; if the user commands immediate mowing, the mower turns on the lighting lamp 119 and mows in silent mode to reduce the noise of the lawn mower disturbing quiet nights.
  • otherwise, the grass can be mowed in the normal mode.
  • the image data collected by the camera assembly 120 can also serve as a basis for judging the mowing time and selecting the mode. For example, when dew is detected on the vegetation, if the user does not command immediate mowing, the mower returns to the charging station and enters the charging mode or standby mode, because dew reduces cutting efficiency and can even cause the motor to stall; in addition, a wet lawn is prone to ruts, affecting its appearance.
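The light-driven mode selection described above can be sketched as follows. The threshold values, mode names, and the exact handling of dew are illustrative placeholders; the application fixes only the ordering L2 < L1 and the qualitative behaviour.

```python
def select_mode(light, user_wants_mowing, dew_detected,
                l1=500.0, l2=50.0):
    """Return (mode, lamp_on) from ambient light and dew detection.
    l1 > l2; both lux values are invented placeholders."""
    if dew_detected and not user_wants_mowing:
        return ("return_to_charge", False)    # dew: wait for dry grass
    if light < l2:                            # near darkness (L < L2)
        if user_wants_mowing:
            return ("mow_silent", True)       # lamp 119 on, silent mode
        return ("return_to_charge", False)    # protect lawn at night
    if light < l1:                            # dim (L2 <= L < L1)
        return ("mow_normal", True)           # lamp 119 on, normal mode
    return ("mow_normal", False)              # bright: no lamp needed

print(select_mode(800.0, True, False))  # ('mow_normal', False)
print(select_mode(20.0, True, False))   # ('mow_silent', True)
```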
  • AR software packages such as ARCore usually do not have good object recognition capabilities.
  • ARCore's environment-understanding function detects, distinguishes, and delineates 2D surfaces by clustering feature points on a plane; it does not determine what the surface of an object is through object recognition.
  • although the control program 145 of the smart lawn mower 110 introduces texture features of some common plane types to assist in determining the plane type, there is still a big gap between this and real object recognition.
  • the realization of functions such as obstacle recognition and environment recognition therefore also needs to rely on other AI software packages with object-recognition capability, such as Google's TensorFlow. Among these, TensorFlow Lite is a set of tools that helps developers run TensorFlow models on mobile, embedded, and IoT devices; it supports on-device machine learning inference (no need to send data back and forth between device and server), with low latency and a small binary size.
  • the smart lawn mower 110 may also include a wireless network connection device 150 and hand the task of object recognition over to the cloud server 200. Since the cloud server 200 has powerful cloud storage and cloud computing capabilities, the TensorFlow framework can be used there to continuously improve the training set and model, so as to give more accurate judgments.
  • The control program 145 can hand off the fusion of the visual data and the IMU data, and even the entire computing tasks of the positioning and mapping module 146 and the function application module 147, to the cloud server 200.
  • the cloud server 200 fuses, locates, maps, and judges the uploaded data according to a preset program, and generates navigation and mowing instructions.
  • The control program 145 of the smart lawn mower 110 is then only responsible for locally acquiring data from the camera 120 and the inertial measurement unit 122, preprocessing and uploading the acquired data, and downloading instructions and outputs from the cloud server 200; it does not need to perform computationally intensive AR and/or AI operations, which reduces the requirements on the processor 142 of the smart lawn mower 110 and saves chip costs.
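A minimal sketch of the local preprocess-and-upload role described above, assuming an illustrative JSON packet format; the field names and the simple downsampling step are assumptions, not a protocol defined by this application:

```python
import json

def preprocess_frame(frame, stride=2):
    """Downsample a grayscale frame (a list of pixel rows) before upload,
    reducing the bandwidth needed between the mower and the cloud."""
    return [row[::stride] for row in frame[::stride]]

def build_upload_packet(frame, imu_sample):
    """Pack the preprocessed camera data and a raw IMU sample into one
    upload message. The field names are illustrative, not a real protocol."""
    return json.dumps({
        "camera": preprocess_frame(frame),
        "imu": imu_sample,  # e.g. {"gyro": [...], "accel": [...]}
    })
```

In a real system the preprocessing would more likely be feature-point extraction or image compression, but the division of labor (reduce locally, compute remotely) is the same.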
  • the control program 145 can also perform the fusion operation of the visual data and the IMU data, and even send the operation tasks of the entire positioning and mapping module 146 and the function application module 147 to Other devices capable of wireless data transmission with the smart lawn mower 110, for example, an application program of a mobile terminal.
  • the control program 145 of the smart lawn mower 110 can be understood as providing an application programming interface (API) that realizes the communication function between the smart lawn mower 110 and the mobile terminal and defines the data communication protocols and formats between the smart lawn mower 110 and the application program of the mobile terminal.
  • Through this interface, the application program of the mobile terminal can obtain image and pose data from the smart lawn mower 110, generate navigation and mowing instruction data according to a preset program after a series of computationally intensive AR and/or AI operations, and then send the instruction data back to the intelligent lawn mower 110 through the application programming interface, thereby realizing control of the intelligent lawn mower 110 by the mobile terminal.
  • The application of the mobile terminal can also expose parameters for the user to select and modify, such as mowing time preference and mowing height preference, so that the user can obtain customized intelligent control of the smart lawn mower 110 according to his own needs. Therefore, reserving the application programming interface on the intelligent lawn mower 110 not only reduces the requirements on the processor 142 of the intelligent lawn mower 110 and saves chip costs, but also makes it easy for the user to control the intelligent lawn mower 110 through other devices.
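As one hedged illustration of what such an application programming interface might define, the sketch below encodes and validates instruction messages as JSON; the command vocabulary and field names are invented for illustration and are not defined by this application:

```python
import json

# Hypothetical command vocabulary; this application does not define one.
COMMANDS = {"forward", "turn_left", "turn_right", "stop", "start_blade", "stop_blade"}

def encode_instruction(command, speed=0.0):
    """Serialize one navigation/mowing instruction for the wire."""
    if command not in COMMANDS:
        raise ValueError("unknown command: " + command)
    return json.dumps({"command": command, "speed": speed})

def decode_instruction(payload):
    """Parse and validate an instruction received by the mower."""
    msg = json.loads(payload)
    if msg.get("command") not in COMMANDS:
        raise ValueError("malformed instruction")
    return msg
```

Validating on the mower side is the important design point: the mower should reject malformed or unknown commands rather than act on them.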
  • a camera for collecting image information may also be installed in the environmental scene.
  • the smart lawn mower 210 itself does not have a camera, and instead, one or more cameras 190 are mounted on the roof and/or on top of the charging pile 180 . Since there is no need to install a bracket or reserve a storage cavity, the structure of the casing of the intelligent lawn mower 210 is more flexible.
  • the intelligent lawn mower 210 shown in FIG. 16 uses the appearance design of the power head, which is modern and beautiful.
  • The one or more cameras 190 set in the scene are provided with a wireless connection device 191 for wirelessly connecting with the smart lawn mower 210, or for connecting to a wireless network, for example the user's home WiFi network, so as to upload the collected image data to the cloud server 200.
  • One or more cameras 190 can be rotatable cameras that are common in the market, so as to obtain a wider viewing angle and more precise positioning.
  • The main components of the smart lawn mower 210 are similar to those of the smart lawn mower 110, and the same components will not be repeated here. The main difference between the two is that the camera of the smart lawn mower 210 is not directly disposed on the main body, nor connected through a bracket or similar mechanism installed on the main body so as to move synchronously with the smart lawn mower 210; moreover, the smart lawn mower 210 is provided with a wireless connection device 250, which can receive image data sent by the one or more cameras 190, or can access the Internet to realize data interaction with the cloud server 200.
  • For the smart lawn mower 110, since the sensors (camera assembly 120, inertial measurement unit 122, etc.) are integrated into the lawn mower body 113, a wired connection is adopted between the sensors and the control module, so the wireless connection device 150 is not strictly necessary; however, in order to improve computing power, simplify upgrades, use big data, reduce chip costs, and so on, the smart lawn mower 110 may also include wireless connection devices 150 such as a wireless network card or a mobile network receiver.
  • For the smart lawn mower 210 in this embodiment, since the camera 190 is separated from the main body of the smart lawn mower 210, data transmission between them depends on a wireless connection. Therefore, both the one or more cameras 190 and the smart lawn mower 210 rely on wireless connection devices (the camera 190 includes the wireless connection device 191, and the smart lawn mower 210 includes the wireless connection device 250) to realize wireless transmission; for example, the one or more cameras 190 respectively send the collected image data to the smart lawn mower 210 for arithmetic processing.
  • the high-level architecture of the control module of the smart lawn mower 210 may refer to the smart lawn mower 110 in the previous embodiment.
  • Because the viewing angle of the collected image information differs from that of the camera assembly 120, the control program 245 of the intelligent lawn mower 210 also differs from the control program 145 of the intelligent lawn mower 110: the control program 245 mainly uses visual target tracking algorithms to estimate the location of the smart lawn mower 210 within the area visible to the cameras, and generates navigation and mowing instructions accordingly.
  • One or more cameras 190 may send raw image data or data after certain processing to the smart lawn mower 210 .
  • the control program 245 of the smart lawn mower 210 uses a single-view target tracking algorithm to estimate its position; when there are multiple cameras 190, the control program 245 of the smart lawn mower 210 uses multi-view target tracking The algorithm estimates its own position.
  • the multi-view target tracking algorithm includes a centralized multi-view target tracking algorithm and a distributed multi-view target tracking algorithm.
  • the data transmission mode between the multiple cameras 190 and the smart lawn mower is shown in FIG. 17B .
  • The intelligent lawn mower 210 in FIG. 17A actually plays the role of a fusion center in the centralized multi-view target tracking algorithm: each camera 190 sends its collected image data to the intelligent lawn mower 210 for arithmetic processing.
  • In the distributed case, each camera 190 completes the collection and processing of video data locally, and interacts and fuses information with cameras 190 from other perspectives through the network. For example, each camera 190 fuses a position estimate computed from its own captured images with position estimates obtained from adjacent cameras 190 to obtain a new position estimate, and sends the new position estimate to the next adjacent camera 190 until the desired accuracy is reached; the estimate is then sent to the smart lawn mower 210 by the camera 190 that achieves the desired accuracy.
  • the control program 245 of the smart lawn mower 210 generates navigation and mowing instructions based on the obtained position estimate, in combination with information from its other sensors (if any). Compared with centralized technology, distributed technology has the advantages of low bandwidth requirement, low system power consumption, high real-time performance and strong reliability.
  • The distributed multi-view target tracking algorithm reduces the requirements on the processor chip of the intelligent lawn mower 210 but raises the requirements on the data processing capability of the cameras 190; it is suitable for large lawns with complex scenes that use many cameras, whereas the centralized multi-view target tracking algorithm is suitable for small lawns with relatively simple scenes that use a small number of cameras.
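The camera-to-camera refinement described for the distributed algorithm can be illustrated with a standard variance-weighted fusion rule: each relay fuses its own estimate with the incoming one until the variance falls below a target. This is a simplified stand-in for illustration, not the algorithm of this application:

```python
def fuse_estimates(est_a, est_b):
    """Variance-weighted fusion of two 2-D position estimates.
    Each estimate is ((x, y), variance); lower variance gets more weight."""
    (xa, ya), va = est_a
    (xb, yb), vb = est_b
    w = vb / (va + vb)  # weight toward the lower-variance estimate
    fused = (w * xa + (1 - w) * xb, w * ya + (1 - w) * yb)
    fused_var = va * vb / (va + vb)  # fused variance is always smaller
    return fused, fused_var

def distributed_track(camera_estimates, target_var=0.05):
    """Pass the fused estimate from camera to camera until the variance
    reaches the desired accuracy, then hand the result to the mower."""
    fused = camera_estimates[0]
    for est in camera_estimates[1:]:
        fused = fuse_estimates(fused, est)
        if fused[1] <= target_var:
            break
    return fused
```

Note the bandwidth property the text mentions: only a small (position, variance) pair travels between cameras, never raw video.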
  • In other embodiments, the one or more cameras 190 and the smart lawn mower 210 are provided with Internet-capable wireless connection devices such as wireless network cards or mobile network receivers, and the cloud server 200 realizes integrated computing of data from multiple devices.
  • One or more cameras 190, the smart lawn mower 210 and the cloud server 200 may perform data interaction in the architecture of FIG. 17C.
  • Each of the one or more cameras 190 uploads the captured raw image data or the preprocessed data to the cloud server 200 .
  • the cloud server 200 selects a single-view or multi-view target tracking algorithm according to the data obtained from the one or more cameras 190, calculates the real-time position estimate of the intelligent lawn mower 210, and sends the position estimate and map information to the smart lawn mower 210; the control program 245 of the smart lawn mower 210 then integrates data from its other sensors (if any) to generate navigation and mowing instructions.
  • Alternatively, the smart lawn mower 210 also uploads the data collected by its own other sensors to the cloud server 200 through the wireless network. After the cloud server 200 calculates the real-time position estimate of the intelligent lawn mower 210, it combines, according to the preset program stored on the cloud server 200, the uploaded camera data with the mower's other sensor data, directly generates navigation and mowing instructions corresponding to the current situation, and sends them to the smart lawn mower 210.
  • the present application also proposes a lower-cost solution, namely, the intelligent lawn mowing system 100 , including the intelligent lawn mower 310 and the mobile terminal 130 .
  • The mobile terminal 130 may be a mobile phone, a tablet computer, a wristband, or another device equipped with a camera, an inertial measurement unit (IMU), and a computing unit. Since the mobile terminal 130 provides the camera and the inertial measurement unit, the smart lawn mower 310 itself does not need to include a camera or an inertial measurement unit, which reduces production cost.
  • Data transmission may be implemented between the smart lawn mower 310 and the mobile terminal 130 through wired communication or wireless communication. As shown in FIG.
  • the intelligent lawn mowing system 100 may adopt an intelligent lawn mower 310, including: a cutting blade 312 for cutting grass; a main body 313 for mounting the cutting blade 312; wheels 314, which rotate and support the main body 313; a fixing device 316, arranged on the main body 313, for fixing the mobile terminal 130 to the intelligent lawn mower 310; an interface 311, arranged on the main body 313, which cooperates with the interface 131 of the mobile terminal 130 to form a wired connection for data transmission; and a controller (not shown), electrically connected to the interface 311, which, when the interface 311 is connected to the mobile terminal 130, controls the behavior of the smart lawn mower 310 according to the instruction data received by the interface 311.
  • the structure of the fixing device 316 is shown in FIGS. 19A-19C .
  • the fixing device 316 includes a first baffle 381 , a second baffle 382 , a support plate 383 , a support rod 384 and a base 385 .
  • The first baffle 381 and the second baffle 382 are parallel, are respectively located at the two ends of the support plate 383, and protrude outward from the same side of the support plate 383 to form opposing barbs, so that a mobile terminal 130 such as a mobile phone or tablet can be fixed between the first baffle 381 and the second baffle 382.
  • The surfaces of the support plate 383, the first baffle 381, and the second baffle 382 that contact the mobile terminal 130 are covered with silicone linings. The friction between the two baffles and the mobile terminal 130 prevents the terminal from shaking loose due to bumps, such as uneven ground, while the smart lawn mower 310 travels. The silicone lining also has a certain elasticity, which buffers collisions between the mobile terminal 130 and the support plate 383, the first baffle 381, and the second baffle 382 during bumps, reducing damage to the mobile terminal 130.
  • This application does not limit the lining materials of the support plate 383, the first baffle 381, and the second baffle 382; as long as they provide anti-slip and buffering effects, various materials such as silicone and rubber can be used.
  • the distance between the first baffle 381 and the second baffle 382 is L1.
  • Since the sizes of most current mobile terminals such as mobile phones and tablets are between 4 inches and 12 inches, L1 can be, for example, 10 cm.
  • the distance between the first baffle 381 and the second baffle 382 can be changed; in other words, the second baffle 382 can translate relative to the first baffle 381, or the first baffle 381 can translate relative to the second baffle 382, so that the distance between the two baffles changes, allowing mobile terminals 130 of different sizes, such as mobile phones and tablets, to be firmly clamped.
  • the first baffle 381 can be translated in a direction away from or close to the second baffle 382 .
  • the translation of the first baffle 381 in the direction away from the second baffle 382 is referred to as outward stretching;
  • the translation of the first baffle 381 in the direction toward the second baffle 382 is referred to as inward contraction.
  • the second baffle 382 is fixedly connected to the support plate 383
  • the first baffle 381 is fixedly connected to the top end of the extension rod 387 on the back of the support plate 383 away from the second baffle 382 .
  • One end of the tension spring 386 is connected to the second baffle 382, and the other end is connected to the end of the extension rod 387 close to the second baffle 382, so the tension of the tension spring 386 always pulls the extension rod 387 toward the second baffle 382, that is, makes the extension rod 387 contract inward.
  • The whole composed of the support plate 383, the telescopic mechanism, the first baffle 381, and the second baffle 382 can also be called a chuck.
  • When no mobile terminal is loaded, the tension spring 386 pulls the extension rod 387 toward the second baffle 382 until the first baffle 381 abuts the end of the support plate 383; under the spring tension and the reaction force of the contact surface at the end of the support plate 383, the first baffle 381 is fixed at a first position, abutting the end of the support plate 383.
  • To mount a terminal, the user first grabs the first baffle 381 to stretch the extension rod 387 outward, places the mobile terminal 130, such as a mobile phone or tablet, on the support plate 383 between the first baffle 381 and the second baffle 382, and then releases the first baffle 381, so that the first baffle 381 and the extension rod 387, under the action of the tension spring 386, contract inward until the first baffle 381 abuts the edge of the mobile terminal 130. The first baffle 381 is then fixed at a second position, abutting the edge of the mobile terminal 130, under the tension of the tension spring 386 and the reaction force of the contact surface at the edge of the mobile terminal 130.
  • At the second position of the first baffle 381, the maximum distance between the first baffle 381 and the second baffle 382 is L2; the difference between L2 and L1 is ΔL, which represents the expansion range of the chuck of the fixing device 316.
  • For example, L2 can be 19 cm; then ΔL is 9 cm.
  • In this case, the fixing device 316 can fix a mobile terminal 130, such as a mobile phone or tablet, whose width or length is between 10 cm and 19 cm.
  • If the mobile terminal 130 is small, such as a mobile phone, it can be sandwiched vertically between the first baffle 381 and the second baffle 382, that is, the first baffle 381 and the second baffle 382 clamp across the longer side of the phone; if the mobile terminal 130 is large, such as a tablet computer, it can be sandwiched between the first baffle 381 and the second baffle 382 the other way, that is, the two baffles clamp across the shorter side of the tablet.
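Assuming the example dimensions above (L1 = 10 cm relaxed, L2 = 19 cm fully stretched), the choice between gripping a phone across its longer side and a tablet across its shorter side can be sketched as a range check; the function name and return labels are illustrative:

```python
# Illustrative check against the chuck's assumed span of 10-19 cm.
L1_CM, L2_CM = 10.0, 19.0

def clamp_orientation(width_cm, length_cm):
    """Return which dimension of the device the baffles should span,
    or None if neither dimension fits the chuck's range."""
    if L1_CM <= length_cm <= L2_CM:   # phones: span the longer dimension
        return "long_side"
    if L1_CM <= width_cm <= L2_CM:    # tablets: span the shorter dimension
        return "short_side"
    return None
```

A typical 7 cm x 15 cm phone is gripped across its length, while a 17 cm x 25 cm tablet is gripped across its width; a device too small in both dimensions cannot be held.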
  • Although chuck structures vary, many of them can firmly hold mobile terminals 130 of different sizes. Because chucks are widely used and inexpensive, this application places no limitation on the specific structure of the chuck, as long as it can firmly clamp mobile terminals 130 of different sizes.
  • The base 385 of the fixing device 316 can be directly fixed on the surface of the main body 313 of the intelligent lawn mower 310 through fastening mechanisms such as screws and nuts, as shown in FIG. 18. This design requires little structural change to existing intelligent lawn mowers and is low in cost, but it is lacking in aesthetics and cleanliness.
  • the main body 313 of the smart lawn mower 310 is provided with an inwardly recessed accommodating cavity 315 , the top opening of the accommodating cavity 315 is located on the upper surface of the main body 313 of the intelligent lawn mower 310 , and the base 385 of the fixing device 316 It is fixed in the accommodating cavity 315 by fastening mechanisms such as screws and nuts, and there is a cover plate 318 on the top of the accommodating cavity 315, and the cover plate 318 can be opened and closed.
  • the cover plate 318 is hinged on one side of the top opening of the accommodating cavity 315, including a first position when opened and a second position when closed.
  • the cover plate 318 is composed of a sliding cover and a sliding cover guide rail that can slide back and forth, including a first position covering the top opening of the accommodating cavity 315 and a second position exposing the opening of the accommodating cavity 315 .
  • The advantage of the accommodating cavity 315 and the cover plate 318 is that when the intelligent lawn mower 310 is not in use, the fixing device 316 is hidden and stored in the main body 313 of the intelligent lawn mower 310, which on the one hand is neat and beautiful, and on the other hand is waterproof, dustproof, and light-proof, reducing the need to clean the fixing device 316 and delaying its aging. As shown in the figures, the interface 311 can also be disposed on the inner wall of the accommodating cavity 315, so as to reduce the intrusion of substances such as dust and water.
  • The specific forms of the accommodating cavity 315 and the cover plate 318 are not limited in this application. In addition, the specific position of the accommodating cavity 315 can be determined according to the positions of the motors, PCB boards, and other devices of the intelligent lawn mower 310, so that image information around the intelligent lawn mower 310 can be conveniently collected while minimizing the influence on the arrangement of the internal components of the main body 313; this is not limited in this application, and FIG. 20 is only an exemplary illustration.
  • When not in use, the fixing device 316 of the mobile terminal 130 is hidden and stored in the main body 313 of the smart lawn mower 310; therefore, before the smart lawn mower 310 works with the mobile terminal 130 mounted, the chuck of the fixing device 316 needs to be extended out of the main body 313 of the smart lawn mower 310 so that the camera 132 of the mobile terminal 130 can conveniently collect image information around the smart lawn mower 310.
  • the support rod 384 of the fixing device 316 can be designed as a retractable structure, for example, refer to the inner and outer double tube structure of the bracket 123 in the first embodiment.
  • the inner tube of the support rod 384 is pulled out, so that the length of the entire support rod 384 is lengthened, so that the chuck extends out of the main body 313 of the smart lawn mower 310 .
  • When storing, the inner tube of the support rod 384 is pushed back inward, shortening the entire support rod 384 so that it can be completely accommodated in the accommodating cavity 315 of the smart lawn mower 310.
  • the present application does not limit the specific telescopic structure of the support rod 384 of the fixing device 316, as long as the effect of elongation and shortening can be achieved. Other structures that achieve a similar effect, such as flexible or collapsible support rods 384, also fall within the scope of this application.
  • a rotatable connection is formed between the support rod 384 and the collet through the damping shaft structure or the ball structure 388 .
  • The advantage of this is that, when the smart lawn mower 310 is loaded with the mobile terminal 130, the user can freely adjust the angle of the collet according to the actual working conditions and the specific position of the camera 132 of the mobile terminal 130; that is, the angle at which the mobile terminal 130 is locked is the angle at which the camera 132 of the mobile terminal 130 collects image information of the environment around the smart lawn mower 310.
  • the present application does not limit the specific structure of the rotary connection, as long as the effect of rotation can be achieved.
  • the support rod 384 is composed of multiple short rods connected in sequence, which can be folded to save space, and the angle of the collet can be adjusted by using the hinge point between the short rods.
  • the mobile terminal 130 includes: a camera 132 for collecting image data of the surrounding environment of the smart lawn mower 310; an inertial measurement unit 133 for detecting the position and attitude data of the smart lawn mower 310; an interface 131, used at least for data transmission and also for charging; a memory (not shown) for storing the application program 135 that controls the operation of the smart lawn mower 310; and a processor (not shown), electrically connected to the camera 132 and the inertial measurement unit 133, which calls the application program 135 to calculate and process the information collected by the camera 132 and the inertial measurement unit 133.
  • the processor can call the application program 135 to fuse the data obtained by the camera 132 and the inertial measurement unit 133 to realize real-time positioning and map construction (SLAM) for the intelligent lawn mower 310, and to generate corresponding navigation and mowing instructions according to preset logic and real-time data to control the behavior of the smart mower 310.
  • Some common mobile terminals 130 on the market, such as mobile phones and tablets, include a monocular camera 132, while others include a dual (multi-)camera 132. In terms of ranging principle, the monocular camera 132 is completely different from the dual (multi-)camera 132.
  • The dual (multi-)eye camera 132 is similar to human eyes: distance is mainly determined by disparity calculation between two images. Depth estimation can be performed even while stationary, so the data accuracy is better; the disadvantages are that disparity calculation consumes resources, with a large amount of computation and high energy consumption.
  • Although the image frames collected by the monocular camera 132 lose the depth information of the environment, this shortcoming can be alleviated to a certain extent by fusing the pose data collected by the inertial measurement unit 133: the movement and rotation of the camera itself are estimated by calculating the offset of feature points between image frames and combining it with the pose data collected by the inertial measurement unit 133. Therefore, this application does not strictly limit the number of cameras 132 of the mobile terminal 130.
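For the dual-camera case, the disparity ranging mentioned above follows the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two lenses, and d the disparity in pixels. A minimal sketch with illustrative numbers:

```python
# Standard stereo triangulation: depth is inversely proportional to disparity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return the depth in meters of a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 10 cm baseline, a 35 px disparity corresponds to about 2 m of depth; note how small disparities (distant objects) make the estimate increasingly sensitive to pixel noise, which is why stereo depth degrades with range.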
  • the inertial measurement unit 133 at least includes an accelerometer and a gyroscope, and may further include a magnetometer.
  • In that case, its IMU data comprises 9 values: accelerometer (3 axes), gyroscope (3 axes), and magnetometer (3 axes).
  • a sensor position offset compensation parameter can be set when the application 135 performs IMU data processing, and the sensor position offset compensation parameter can include 3-axis data (X, Y, Z).
  • X represents the front and rear distance between the inertial measurement unit 133 of the mobile terminal 130 and the center of gravity G of the smart lawn mower 310
  • a positive value indicates that the center of gravity G of the smart lawn mower 310 is in front of the inertial measurement unit 133 of the mobile terminal 130, and a negative value indicates that it is behind the inertial measurement unit 133.
  • Y represents the left and right distance between the inertial measurement unit 133 of the mobile terminal 130 and the center of gravity G of the smart lawn mower 310
  • a positive value indicates that the center of gravity G of the smart lawn mower 310 is on the right side of the inertial measurement unit 133 of the mobile terminal 130, and a negative value indicates that it is on the left side.
  • Z represents the vertical distance between the inertial measurement unit 133 of the mobile terminal 130 and the center of gravity G of the smart lawn mower 310
  • a positive value indicates that the center of gravity G of the smart lawn mower 310 is below the inertial measurement unit 133 of the mobile terminal 130, and a negative value indicates that it is above.
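One plausible use of the (X, Y, Z) offset compensation parameter is the standard rigid-body lever-arm correction, which translates the IMU-measured acceleration to the mower's center of gravity G using a_G = a_imu + α × r + ω × (ω × r), where r is the offset vector, ω the angular velocity, and α the angular acceleration. This is an illustrative sketch of that standard correction, not the compensation procedure actually specified by this application:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def compensate_lever_arm(accel_imu, omega, alpha, offset_xyz):
    """Translate the IMU-measured acceleration to the center of gravity G.
    offset_xyz is the (X, Y, Z) lever arm from the IMU to G."""
    r = offset_xyz
    tangential = cross(alpha, r)                 # from angular acceleration
    centripetal = cross(omega, cross(omega, r))  # from rotation rate
    return tuple(a + t + c
                 for a, t, c in zip(accel_imu, tangential, centripetal))
```

For a mower spinning in place at 1 rad/s with G one meter ahead of the IMU, the correction is a pure centripetal term pointing back toward the rotation axis.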
  • the mobile terminal 130 may also include other sensors such as GPS sensors, and corresponding sensor fusion (Sensor Fusion) logic codes are preset in the application program 135 .
  • The process by which the application 135 performs visual-inertial fusion SLAM, as well as processes involving the fusion of more sensors, including the application of specific functions such as mowing area boundary generation, road surface selection, intelligent obstacle avoidance, virtual fence and virtual obstacle setting, intelligent lighting, and mowing timing selection, are similar to those of the control program 145 of the smart lawn mower 110 and will not be repeated here.
  • the specific communication method between the smart lawn mower 310 and the mobile terminal 130 is not limited.
  • a male type C interface may be provided on the second baffle 382 of the fixing device 316.
  • When mounting, the type-C interface of the mobile terminal mates with the male type-C interface of the fixing device 316, so as to realize data transmission between the mobile terminal 130 and the intelligent lawn mower 310.
  • this connection method limits the type of interface. If the interface type of the user's mobile terminal 130 is different from the preset interface type of the smart lawn mower 310, an adapter needs to be used.
  • In other embodiments, the smart lawn mower 310 has a USB data transmission interface 311. If the mobile terminal 130 has a type-C data transmission interface 131, a USB-to-type-C data cable can be used, with one end connected to the USB data transmission interface 311 of the smart lawn mower 310 and the other end connected to the type-C data transmission interface 131 of the mobile terminal 130, thereby realizing data transmission between the mobile terminal 130 and the intelligent lawn mower 310.
  • If the data transmission interface 131 of the user's mobile terminal 130 is an Android data interface, a USB-to-Android data cable is required, with one end connected to the USB data transmission interface 311 of the smart lawn mower 310 and the other end connected to the Android data transmission interface 131 of the mobile terminal 130, thereby realizing data transmission between the mobile terminal 130 and the smart lawn mower 310.
  • the advantage of using an independent data line for transmission is that it can adapt to the extension or rotation of the fixing device 316 .
  • The charging heads of mobile terminals 130 such as mobile phones and tablets generally use USB interfaces; that is, the end of the charging cable that connects to the charging head is basically a USB connector. This not only improves the universality of the USB data transmission interface 311 of the smart lawn mower 310, but also, because this data cable is the charging cable of the mobile terminal 130 itself, it can be supplied by the user, further reducing the cost of the smart lawn mower 310.
  • the application program 135 of the mobile terminal 130 calls the image data collected by the camera 132 and the pose data collected by the inertial measurement unit 133, and fuses the two types of data for real-time positioning and map construction (SLAM).
  • the application 135 developed for the Apple mobile terminal 130 can call the ARKit development tool set
  • the application 135 developed for the Android mobile terminal 130 can call the ARCore development tool set.
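One reason visual and inertial data are fused, as described above, is that a monocular camera recovers translation only up to an unknown scale, while double-integrating IMU acceleration yields a metric (though drifting) distance; comparing the two recovers the scale. The toy 1-D sketch below is purely illustrative; ARKit and ARCore perform far more sophisticated fusion:

```python
def metric_distance(accels, dt):
    """Double-integrate 1-D acceleration samples (m/s^2) to get the
    metric distance traveled; drifts over time, but carries true scale."""
    v = x = 0.0
    for a in accels:
        v += a * dt
        x += v * dt
    return x

def recover_scale(visual_translation_unitless, accels, dt):
    """Ratio of the IMU's metric distance to the camera's unitless
    translation gives the monocular scale factor."""
    return metric_distance(accels, dt) / visual_translation_unitless
```

Once the scale factor is known, every subsequent unitless visual translation can be converted to meters, which is what lets a single camera plus an IMU navigate a real lawn.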
  • The application program 135 of the mobile terminal 130 generates specific navigation and mowing instructions according to the preset program, based on the output of the real-time positioning and map construction (SLAM), and returns them to the smart lawn mower 310, as indicated by the solid arrows in FIG. 22A.
  • The preset program can include multiple application functions, such as automatic generation of mowing boundaries, virtual fence setting, road recognition, intelligent obstacle avoidance, and virtual obstacle setting; the preset program can also call resource packages with object recognition functions, such as TensorFlow Lite, to implement object recognition functionality.
  • the smart lawn mower 310 itself may also include other sensors such as collision sensors, drop sensors, etc.
  • the smart lawn mower 310 can send the data collected by these sensors to the mobile terminal 130, as shown by the dashed arrows in FIG. 22A .
  • The mobile terminal 130 then generates specific navigation and mowing instructions according to preset programs and transmits the instructions to the smart lawn mower 310 through wired transmission, as shown by the solid arrows in FIG. 22A.
  • In other embodiments, the mobile terminal 130 further includes a wireless network connection device 134, which can realize data transmission with the cloud server 200, so that the application 135 of the mobile terminal 130 does not need to complete all operations locally on the mobile terminal 130, but can complete them partially or completely on the cloud server 200.
  • all the image data and the angular velocity and acceleration data collected by the inertial measurement unit 133 can be uploaded to the cloud server 200 for fusion; alternatively, data preprocessing, such as feature point extraction from image frames, can be performed locally on the mobile terminal 130, and the preprocessed data is then sent to the cloud server 200 for fusion, so as to reduce the dependence on the wireless communication rate.
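The bandwidth-saving preprocessing mentioned above (extracting feature points locally and uploading only those) can be illustrated with a toy detector. This is an assumption-laden sketch: a real system would use a detector such as FAST or ORB; here a simple horizontal-gradient test stands in for it.

```python
def extract_feature_points(frame, threshold=50):
    """Reduce a grayscale frame (a list of pixel rows) to a sparse list of
    (row, col, value) points where the horizontal gradient is strong.

    Uploading only these points instead of the full frame is what lets
    the mobile terminal 130 reduce its dependence on the wireless link rate.
    """
    points = []
    for r, row in enumerate(frame):
        for c in range(1, len(row)):
            if abs(row[c] - row[c - 1]) >= threshold:
                points.append((r, c, row[c]))
    return points
```

For a 640x480 frame this turns roughly 300,000 pixel values into, typically, a few hundred tuples before upload.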
  • the cloud server 200 can also run other program logic; with the help of cloud computing and cloud storage capabilities, the cloud server 200 can perform functions such as obstacle recognition, boundary recognition, road recognition, and path planning.
  • the mobile terminal 130 can also upload the user's settings and preferences, such as mowing height preference and lawn pattern anchors, to the cloud server 200; the cloud server 200 can also autonomously obtain relevant information such as weather and season from the Internet, so as to generate navigation and mowing commands that control the behavior of the smart mower 310.
  • after the application 135 of the mobile terminal 130 obtains the instructions from the cloud server 200, it transmits them to the smart lawn mower 310 through wired transmission.
  • wireless data transmission, such as Bluetooth, ZigBee, or NFC, may also be used between the smart lawn mower 310 and the mobile terminal 130.
  • this solution requires both the smart lawn mower 310 and the mobile terminal 130 to have matching short-range wireless communication devices; for example, both the smart lawn mower 310 and the mobile terminal 130 have Bluetooth.
  • compared with the wired communication described above, the short-range wireless communication scheme essentially only changes the wired interface between the smart lawn mower 310 and the mobile terminal 130 to a wireless interface; other aspects (transmission content, system architecture, etc.) are unchanged.
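The point that only the interface changes while the transmission content and system architecture stay the same can be made concrete with a small transport abstraction. The class names and byte prefixes below are purely illustrative; real wired and Bluetooth stacks are far more involved.

```python
import json

class Link:
    """Transport-agnostic command channel: the serialized content is
    identical regardless of the physical interface underneath."""

    def send_command(self, command: dict) -> bytes:
        return self._write(json.dumps(command).encode("utf-8"))

    def _write(self, data: bytes) -> bytes:
        raise NotImplementedError  # supplied by the concrete transport

class WiredLink(Link):
    def _write(self, data: bytes) -> bytes:
        return b"USB:" + data  # stand-in for a serial/USB write

class BluetoothLink(Link):
    def _write(self, data: bytes) -> bytes:
        return b"BT:" + data  # stand-in for an RFCOMM write
```

Swapping `WiredLink` for `BluetoothLink` changes only the `_write` step, mirroring the observation that the scheme merely replaces the wired interface with a wireless one.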
  • the mobile terminal 130 is provided with a wireless network connection device 134, such as a wireless network card or WLAN module.
  • the smart lawn mower 310 is provided with a wireless network connection device 350, such as a wireless network card or WLAN module, as shown in FIG. 22D.
  • both the mobile terminal 130 and the smart lawn mower 310 can connect to the cloud server 200 through the wireless network.
  • the application 135 of the mobile terminal 130 can upload all the image data collected by the camera 132 and the angular velocity and acceleration data collected by the inertial measurement unit 133 to the cloud server 200 for AR fusion; alternatively, preprocessing such as feature point extraction can be performed locally on the mobile terminal 130, and the preprocessed data is then sent to the cloud server 200 for AR fusion, so as to reduce the dependence on the communication rate.
  • the smart lawn mower 310 can also upload information collected by other sensors, such as collision sensors and drop sensors (if any; indicated by dashed arrows in FIG. 22D), to the cloud server 200, and this information can also serve as parameters in the cloud server 200's decision-making.
  • when the cloud server 200 generates navigation and mowing instructions according to the various uploaded data and its built-in programs, the result is returned directly to the smart lawn mower 310; this differs from FIG. 22B, where the cloud server 200 returns the calculation result to the mobile terminal 130 and the mobile terminal 130 then relays it to the smart lawn mower 310.
  • having the cloud server 200 return the result directly to the smart lawn mower 310 has the advantage of reduced latency.
  • the above solution also has a complementary implementation, as shown in FIG. 22E.
  • mobile terminals 130 such as mobile phones generally have mobile network reception 137 and wifi hotspot 138 functions, so the mobile network signals received by the mobile terminal 130 can be converted into wifi signals and rebroadcast; the smart lawn mower 310, in turn, has a wireless network card or WLAN module.
  • the network connection device 350 can thus communicate wirelessly with the cloud server 200 through the wifi network provided by the wifi hotspot 138 of the mobile terminal 130.
  • the cloud server 200 may not be able to automatically identify which mower the uploaded data belongs to,
  • so the ID of the smart lawn mower 310 can be added as an identification code when the application 135 and the smart lawn mower 310 upload data;
  • the ID of the smart lawn mower 310 serves as the credential.
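The ID-tagging idea above can be sketched as a tiny helper that stamps every uploaded packet with the mower's ID so the cloud server can pair the two data streams. The field name `device_id` is an assumption for illustration, not something specified in this application.

```python
def tag_upload(mower_id: str, payload: dict) -> dict:
    """Return a copy of the payload stamped with the mower's ID, the
    credential the cloud server 200 uses to match uploads from the
    application 135 and the smart lawn mower 310."""
    tagged = dict(payload)  # leave the caller's payload untouched
    tagged["device_id"] = mower_id
    return tagged
```

On the server side, instructions computed from a tagged stream would be routed back to whichever mower presented the same ID.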
  • the above-described intelligent lawn mowing system 100, which integrates the intelligent lawn mower 310 and the mobile terminal 130, reduces the hardware requirements for the intelligent lawn mower 310: it not only saves the camera 132 and the inertial measurement unit 133 but also reduces the requirements for the processing chip of the smart lawn mower 310, thereby saving chip costs.
  • the application 135 on the mobile terminal 130 is easier to upgrade, maintain, and expand with the help of various application market platforms; for example, the application 135 can be upgraded from version 1.0.0 to version 1.2.0, which can rely mainly on local computing while uploading the images that require object recognition to the cloud server 200, using big data to determine the type of obstacles more accurately.
  • fixing the mobile terminal 130 to the smart lawn mower 310 while it is working also brings a certain degree of inconvenience to the user, because many people nowadays are accustomed to keeping their mobile phones at hand and only set them aside briefly while charging.
  • the smart lawn mower 310 can therefore be configured so that, when the mobile terminal 130 is connected, the battery pack of the smart lawn mower 310 charges the battery of the mobile terminal 130.
  • a charging threshold can be set, such as 70%: if the remaining power of the battery pack of the smart lawn mower 310 is at least 70%, the connected mobile terminal 130 is charged; if the remaining power of the battery pack of the smart lawn mower 310 is less than 70%, the connected mobile terminal 130 is not charged.
  • 70% is only an example and does not limit the scope of protection of this application; any solution that sets a threshold on the remaining power of the smart lawn mower 310 to determine whether the smart lawn mower 310 charges the connected mobile terminal 130 falls within the protection scope of this application.
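The threshold rule described above reduces to a one-line predicate. A minimal sketch under the stated example values (the 70% figure is this application's own example; the function name is invented):

```python
def should_charge_phone(mower_battery_pct: float, threshold: float = 70.0) -> bool:
    """Charge the docked mobile terminal 130 only while the mower's own
    battery pack retains at least the configured threshold of charge."""
    return mower_battery_pct >= threshold
```

Because the threshold is a parameter rather than a constant, any remaining-power cutoff the claims contemplate fits the same predicate.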


Abstract

The present application discloses a smart lawn mower, comprising: a camera for collecting image data of the environment around the smart lawn mower; an inertial measurement unit for detecting pose data of the smart lawn mower; a memory for storing at least an application program that controls the operation or travel of the smart lawn mower; and a processor for invoking the application program, fusing the image data collected by the camera and the pose data acquired by the inertial measurement unit, performing simultaneous localization and mapping (SLAM) for the smart lawn mower, and generating navigation and mowing action instructions.

Description

Smart Lawn Mower and Smart Lawn Mowing System
Technical Field
The present application relates to a lawn mower and a lawn mowing system, and in particular to a smart lawn mower and a smart lawn mowing system.
Background
With the rise and popularization of smart homes, smart lawn mower technology has steadily advanced and household acceptance has gradually increased. Because no human pushing or following is required, smart lawn mowers greatly reduce the user's labor and save the user's time. Existing smart lawn mowers generally use ordinary-precision GPS for sub-area identification and use boundary-wire signals together with an inertial measurement unit (IMU) to dead-reckon a precise position, but such schemes usually have low positioning accuracy, cannot achieve real-time positioning and navigation, and make efficient path planning and complete area coverage difficult to attain. As for high-precision positioning schemes, such as satellite-based RTK or radio-based UWB, their hardware cost and system reliability have always been bottlenecks limiting their application. Moreover, for an autonomously working smart lawn mower, even high-precision positioning obtained regardless of cost is far from sufficient: lacking a deep understanding of the surrounding environment, the mower cannot handle complex situations involving road surfaces, obstacles, lighting, and the like.
Summary
To address the deficiencies of the related art, the main purpose of the present application is to provide a smart lawn mower with higher positioning accuracy and a deeper understanding of the surrounding environment.
To achieve the above goal, the present application adopts the following technical solutions:
A smart lawn mower, comprising: a camera for collecting image data of the environment around the smart lawn mower; an inertial measurement unit for detecting pose data of the smart lawn mower; a memory for storing at least an application program that controls the operation or travel of the smart lawn mower; and a processor for invoking the application program, fusing the image data collected by the camera and the pose data acquired by the inertial measurement unit, performing simultaneous localization and mapping for the smart lawn mower, and generating navigation and mowing action instructions.
Optionally, the smart lawn mower further includes a main body, and the camera is mounted on the main body.
Optionally, the camera is mounted on the front side of the main body.
Optionally, the application program can distinguish grass from non-grass according to feature points of two-dimensional planes in the image data, compared against the texture features of grass, and, taking the boundary between grass and non-grass as discrete anchor points, automatically generate the mowing area boundary through visual-inertial simultaneous localization and mapping.
Optionally, the smart lawn mower further includes a cutting blade, and the application program can distinguish grass from non-grass according to feature points of two-dimensional planes in the image data, compared against the texture features of grass, and stop rotating the cutting blade when the current working surface is not grass.
Optionally, the application program can determine the type of the current working surface according to feature points of two-dimensional planes in the image data, compared against the texture features of common ground types preset in the application program, and, when the current working surface contains multiple ground types, control the smart lawn mower to drive toward the harder of the multiple ground types.
Optionally, the application program further includes an object recognition program, and the application program can select a corresponding obstacle avoidance strategy according to the obstacle category recognized by the object recognition program.
Optionally, the smart lawn mower further includes a global positioning system sensor, and the application program uses the positioning result of the global positioning system sensor to filter and correct the result of visual-inertial simultaneous localization and mapping.
Optionally, the smart lawn mower further includes an illumination lamp, and the application program can obtain the illumination intensity of the current environment from the image data and turn on the lamp when the illumination intensity is below a first illumination intensity threshold.
A smart lawn mower, comprising: a main body; a camera for collecting image data of the environment around the smart lawn mower; a support rod for supporting the camera; an inertial measurement unit for detecting pose data of the smart lawn mower; a memory for storing at least an application program that controls the operation or travel of the smart lawn mower; and a processor for invoking the application program, fusing the image data collected by the camera and the pose data acquired by the inertial measurement unit, performing simultaneous localization and mapping for the smart lawn mower, and generating navigation and mowing action instructions.
Optionally, the support rod is arranged on the upper surface of the main body.
Optionally, the support rod is telescopic, having a first state with a first length and a second state with a second length, the second length being greater than the first length.
Optionally, the smart lawn mower further includes an accommodating cavity arranged in the middle of the main body for accommodating the support rod and the camera; when the support rod is in the first state, the camera and the entire support rod are located inside the accommodating cavity, and when the support rod is in the second state, the camera and part of the support rod are located outside the accommodating cavity.
Optionally, the top of the accommodating cavity has a waterproof and dustproof cover plate, which has a closed state and an open state; when the support rod is at the first length, the cover plate is in the closed state, and when the support rod is at the second length, the cover plate is in the open state.
Optionally, the cover plate is hinged to the edge of the top of the accommodating cavity.
Optionally, the cover plate slides relative to the accommodating cavity.
Optionally, a groove for accommodating the support rod is formed in the upper surface of the main body; the support rod is fixed to the upper surface of the main body via a damped hinge device, and has a first state in which it lies in the groove of the upper surface of the main body and a second state in which it is substantially perpendicular to the groove of the upper surface of the main body.
A smart lawn mowing system, comprising: a smart lawn mower including at least a camera for collecting image data of the environment around the smart lawn mower and an inertial measurement unit for detecting pose data of the smart lawn mower; an interactive display interface; a memory for storing at least an application program that controls the operation or travel of the smart lawn mower; and a processor configured to invoke the application program, fuse the image data collected by the camera and the pose data acquired by the inertial measurement unit, perform simultaneous localization and mapping for the smart lawn mower, and generate navigation and mowing action instructions.
Optionally, the interactive display interface is located on the smart lawn mower.
Optionally, the smart lawn mowing system further includes a mobile terminal, and the interactive display interface is located on the mobile terminal.
Optionally, the memory and the processor are located on the smart lawn mower.
Optionally, the smart lawn mowing system further includes a mobile terminal, and the memory and the processor are located in the mobile terminal.
Optionally, the user can view, through the interactive display interface, the real-time image collected by the camera and superimpose a virtual fence on the real-time image, and the application program adds the anchor points of the virtual fence to the anchor point set of the mowing area boundary.
Optionally, the user can view, through the interactive display interface, the real-time image collected by the camera and superimpose a virtual obstacle on the real-time image, and the application program records the anchor points of the virtual obstacle and plans a path around the virtual obstacle.
A smart lawn mowing system, comprising: a smart lawn mower and a camera arranged in the working scene. The camera includes a wireless communication device for wirelessly connecting with the smart lawn mower. The smart lawn mower includes: a cutting blade for cutting grass; a main body for supporting the cutting blade; at least one wheel that can rotate and supports the main body; a wireless communication device for wirelessly connecting with the camera; a memory for storing at least an application program that controls the operation or travel of the smart lawn mower; and a processor configured to invoke the application program to perform navigation and mowing control.
Optionally, the camera is arranged on a roof.
Optionally, the smart lawn mowing system further includes a charging station, and the camera is arranged on top of the charging station.
Optionally, the camera acquires image data of the working scene and sends the image data to the smart lawn mower through the wireless communication device, and the application program performs target tracking computation on the image data acquired by the camera to obtain a current position estimate of the smart lawn mower, and then generates navigation and mowing action instructions according to the current position estimate.
Optionally, there are multiple cameras arranged in the working scene.
Optionally, the multiple cameras acquire image data of the working scene from different viewing angles, first obtain a current position estimate of the smart lawn mower through distributed target tracking computation, and then send the position estimate to the smart lawn mower.
Optionally, the smart lawn mowing system further includes a cloud server; each of the multiple cameras uploads the acquired image data of the working scene to the cloud server through its wireless communication device, the cloud server performs target tracking computation through a multi-view target tracking algorithm to obtain a current position estimate of the smart lawn mower, and the smart lawn mower obtains the current position estimate from the cloud server through its wireless communication device.
A smart walking tool system, comprising: a smart walking device; a camera for acquiring image data of the environment around the smart walking device; an inertial measurement unit for detecting pose data of the smart walking device; a memory for storing at least an application program that controls the operation or travel of the smart walking device; and a processor for fusing the image data collected by the camera and the pose data acquired by the inertial measurement unit to perform simultaneous localization and mapping for the smart walking device, and generating navigation and work instructions.
Optionally, the smart walking tool system further includes a mobile terminal, and the memory is located in the mobile terminal.
Optionally, the smart walking tool system further includes a mobile terminal, and the processor is located in the mobile terminal.
Optionally, the smart walking tool system further includes a mobile terminal, and the camera is located on the mobile terminal.
Optionally, the smart walking tool system further includes a mobile terminal, and the inertial measurement unit is located in the mobile terminal.
Optionally, the smart walking device further includes a main body, and the camera is arranged on the main body of the smart walking device.
Optionally, the smart walking device further includes a main body, and the inertial measurement unit is arranged in the main body of the smart walking device.
Optionally, the smart walking device further includes a main body, and the processor is arranged in the main body of the smart walking device.
Optionally, the smart walking device further includes a main body, and the controller is arranged in the main body of the smart walking device.
Optionally, the smart walking device further includes a main body, and the camera can move up and down relative to the main body.
Optionally, the smart walking device further includes a support rod for supporting the camera.
Optionally, the support rod is telescopic, having a first state with a first length and a second state with a second length, the second length being greater than the first length.
Optionally, the smart walking device further includes an accommodating cavity arranged in the main body for accommodating the support rod and the camera.
Optionally, the smart walking device further includes an interactive display interface configured to allow the user to view the real-time image acquired by the camera and superimpose a virtual fence on the real-time image, and the application program adds the anchor points of the virtual fence to the anchor point set of the working area boundary.
Optionally, the smart walking device further includes an interactive display interface configured to allow the user to view the real-time image acquired by the camera and superimpose a virtual obstacle on the real-time image, and the application program records the anchor points of the virtual obstacle and plans a path around the virtual obstacle.
Optionally, the application program can determine the type of the current working surface according to feature points of two-dimensional planes in the image data, compared against the texture features of common ground types preset in the application program, and, when the current working surface contains multiple ground types, control the smart lawn mower to drive toward the harder of the multiple ground types.
Optionally, the application program further includes an object recognition program, and the application program can select a corresponding obstacle avoidance strategy according to the obstacle category recognized by the object recognition program.
Optionally, the smart walking device further includes a global positioning system sensor, and the application program uses the positioning result of the global positioning system sensor to filter and correct the result of visual-inertial simultaneous localization and mapping.
The benefit of the present application lies in that, by fusing visual and inertial sensors, higher positioning accuracy is obtained on the one hand, and a deeper understanding of the environment is obtained on the other, giving the smart lawn mower advantages in operations such as navigation and obstacle avoidance.
Brief Description of the Drawings
FIG. 1 is a side view of a smart lawn mower according to an embodiment of the present application;
FIG. 2 is a side view of a smart lawn mower according to an embodiment of the present application;
FIG. 3A is a perspective view of the telescopic bracket of the camera of the smart lawn mower shown in FIG. 2;
FIG. 3B is a sectional view of the telescopic bracket of the camera of the smart lawn mower shown in FIG. 3A;
FIG. 3C is a sectional view of the telescopic bracket of the camera of the smart lawn mower shown in FIG. 3A during telescopic adjustment;
FIG. 4A is a side view of a smart lawn mower in a non-working state according to an embodiment of the present application;
FIG. 4B is a side view of the smart lawn mower shown in FIG. 4A in a working state;
FIG. 5A is a side view of a smart lawn mower in a non-working state according to an embodiment of the present application;
FIG. 5B is a side view of the smart lawn mower shown in FIG. 5A in a working state;
FIG. 6 is a schematic diagram of the inertial measurement unit of the smart lawn mower shown in FIG. 1;
FIG. 7 is a schematic diagram of the dual inertial measurement units of a smart lawn mower according to an embodiment of the present application;
FIG. 8 is a system diagram of a smart lawn mower according to an embodiment of the present application;
FIG. 9 is a flowchart of a simultaneous localization and mapping (SLAM) algorithm according to an embodiment of the present application;
FIG. 10 is a flowchart of a sensor fusion algorithm according to an embodiment of the present application;
FIG. 11A is a display interface in one boundary identification mode according to an embodiment of the present application;
FIG. 11B is a display interface in another boundary identification mode according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the road surface recognition and selection function according to an embodiment of the present application;
FIG. 13A is a schematic diagram of the obstacle recognition function according to an embodiment of the present application;
FIG. 13B is another schematic diagram of the obstacle recognition function according to an embodiment of the present application;
FIG. 14 is a flowchart of an obstacle avoidance algorithm according to an embodiment of the present application;
FIG. 15 is a display interface when setting a virtual obstacle according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a smart lawn mower and a camera arranged in the scene according to another embodiment of the present application;
FIG. 17A is a data transmission architecture diagram of the smart lawn mower shown in FIG. 16 and the camera arranged in the scene;
FIG. 17B is another data transmission architecture diagram of the smart lawn mower shown in FIG. 16 and the camera arranged in the scene;
FIG. 17C is a data transmission architecture diagram of the smart lawn mower shown in FIG. 16, the camera arranged in the scene, and the cloud server;
FIG. 18 is a side view of a smart lawn mowing system according to another embodiment of the present application;
FIG. 19A is a side view of the fixing device of the smart lawn mower shown in FIG. 18;
FIG. 19B is a side view of the clamping head of the fixing device of the smart lawn mower shown in FIG. 19A when retracted;
FIG. 19C is a side view of the clamping head of the fixing device of the smart lawn mower shown in FIG. 19A when extended;
FIG. 20 is a side view of the smart lawn mower in a smart lawn mowing system according to another embodiment of the present application;
FIG. 21A is a schematic diagram of the inertial measurement unit of the mobile terminal in a smart lawn mowing system according to another embodiment of the present application;
FIG. 21B is a schematic diagram of the camera of the mobile terminal in a smart lawn mowing system according to another embodiment of the present application;
FIG. 21C is a schematic diagram of the interface of the mobile terminal in a smart lawn mowing system according to another embodiment of the present application;
FIG. 22A is a first data transmission architecture diagram of a smart lawn mowing system according to another embodiment of the present application;
FIG. 22B is a second data transmission architecture diagram of a smart lawn mowing system according to another embodiment of the present application;
FIG. 22C is a third data transmission architecture diagram of a smart lawn mowing system according to another embodiment of the present application;
FIG. 22D is a fourth data transmission architecture diagram of a smart lawn mowing system according to another embodiment of the present application;
FIG. 22E is a fifth data transmission architecture diagram of a smart lawn mowing system according to another embodiment of the present application.
具体实施方式
以下结合附图和具体实施例对本申请作具体的介绍。
如图1所示,本申请提出了一种智能割草机110,包括:切割刀片112,用于切割草;主体113,用于安装所述切割刀片112;车轮114,可以转动并且支撑主体113;照明灯119,用于照明;位于摄像头组件120,用于采集割草机的周围环境的图像信息;惯性测量单元(Inertial Measurement Unit,IMU)122,用于采集割草机的位姿信息;处理器(图1中未示出),与摄像头组件120和惯性测量单元122电连接,用于计算处理通过摄像头组件120和惯性测量单元122采集到的信息;存储器(图1中未示出),用于存储控制智能割草机110工作的控制程序145。所述处理器可以调用控制程序145融合摄像头组件120采集的割草机的周围环境的图像信息和惯性测量单元122采集的割草机的位姿信息数据实现割草机的即时定位与地图构建(Simultaneous Localization And Mapping,SLAM),并根据预设的逻辑和实时的数据生成相应的导航和割草指令以控制智能割草机110的行为。
可选地,摄像头组件120可以安装在智能割草机110的前部,参见图1。安装在割草机110前部的摄像头组件120可以较好地采集智能割草机110前方的环境的图像信息,相较于割草机的侧、后方的图像信息,割草机的前方的图像信息在导航、避障等方面更具备参考价值。可选地,摄像头组件120还可以通过支架123被安装在割草机的前上方,如图2所示。通过支架123的提升,摄像头组件120与地面的垂直距离增大,使得摄像头组件120的视野范围增大且视线更不容易受到杂草等近地面障碍物的遮蔽。
可选地,支架123是可伸缩装置。如图3A-3C所示的支架123由销钉392伸缩套管组成。销钉392伸缩套管的管体部分包括内外两根中空的管子,摄像头组件120的电线从两根管子中间的空腔穿过。外管394上多个有沿外管394 的长度方向依次排开的孔395。内管391上有一个孔,内管391腔内垂直于该孔的方向上有一个头部圆滑的销钉392,销钉392连接至一个弹片393,弹片393的一端固定在内管391内壁上,另一端连接销钉392的底部,并始终给销钉392一个向外的力,使得销钉392的头部在未受其他外力推挤时,穿过内管391的孔向外伸出。当外管394套在内管391上,将外管394的多个依次排开的孔395中的一个与内管391的孔对齐,无外力推挤时,销钉392的头部会依次穿过内管391的孔和外管394上的与内管391的孔对齐的孔395并向外伸出,以闩的方式将外管394相对于内管391固定。对支架123的长度调节通过改变销钉392伸缩套管的外管394相对于内管391的位置实现:首先克服弹片393自身的作用力将销钉392的头部往内管391里按,在销钉392的头部大致与外管394上的孔395处于同一平面时,迅速滑动外管394至理想位置,重新将外管394的另一个孔395与内管391的孔对齐,再使得销钉392自然释放至头部伸出内管391的孔和外管394上的与内管391的孔对齐的另一个孔395,此时销钉392将外管394相对于内管391固定在新的位置。可伸缩的支架123使得摄像头组件120的位置调整更加便利,同时增强了对摄像头组件120的防护,延长了其工作寿命。支架123也可以通过其他结构实现伸缩,或者,可伸缩的结构并非纯机械结构,而是机电结合,与智能割草机110的处理器电连接,处理器可以根据摄像头组件120采集到的图像信息,自主调节支架123的长度以调节摄像头组件120的高度。本申请不对具体的实施方式加以限制,只要摄像头组件120的支架123可以伸缩,即落入本申请的保护范围。
进一步地,与可伸缩的支架123相配合,智能割草机110的主体113可以设有一个向内凹陷的容纳腔115,参见图4A-4B。容纳腔115的顶部开口位于割草机主体113的上表面,支架123通过螺丝螺母等紧固机构固定在容纳腔115内,容纳腔115顶部有一个盖板118,盖板118可以打开和关闭。例如,盖板118铰接在容纳腔115顶部开口的一侧,包括打开时的第一位置(图4B)和关闭时的第二位置(图4A)。或者,盖板118由可以来回滑动的滑盖和滑盖导轨组成,包括覆盖容纳腔115顶部开口的第一位置和暴露容纳腔115开口的第二位置。容纳腔115和盖板118配合可伸缩支架123的好处是,在不使用智能割草机110时,缩短支架123,关闭盖板118,使得摄像头组件120隐藏收纳于割草机主体113内,一方面比较整洁美观,另一方面可以防水防尘防光照,减少对摄像头的清洁频次并延缓老化。在智能割草机110工作前,打开盖板118,拉长支架123,使得摄像头组件120伸出智能割草机110的容纳腔115以便采集智能割草机110周围的图像。容纳腔115和盖板118的具体形态在本申请中不设限制;此外,容纳腔115的具体位置,可以根据智能割草机110的电机、PCB板等装置的位置来决定,以方便采集智能割草机110周围的图像信息,并对智 能割草机110的主体113内部各元件的排布影响最小化为宜,在本申请中不设限制,图4只是示例性展示。
此外,支架123也可以被配置为可折叠的构造,参见图5a-5b,在智能割草机110的主体113的上表面,设置了可以容纳支架123和摄像头组件120的凹槽117。支架123与智能割草机110的主体113的顶面的一点铰接,使得支架123可以以铰接点为旋转点,在人手的搬动下,克服一定的摩擦力,绕旋转点旋转。非工作时间,将支架123绕旋转点旋转至放平,收纳于智能割草机110的主体113的顶面的凹槽117内,如图5A,提高了美观度与整洁度,减少了收纳智能割草机110时需要占用的空间,同时增强了对摄像头的防护,延长了其工作寿命。工作时间,将支架123立起,如图5B,并可根据需要调节支架的站立角度。更进一步地,支架123与摄像头组件120之间,可以采用阻尼转轴结构、滚珠结构等可旋转的连接机构,这样,使用者在开启智能割草机110之前,可以自由地根据需要调整摄像头组件120的角度;或者,可旋转的连接机构并非纯机械结构,而是机电结合,与智能割草机110的处理器电连接,处理器可以根据摄像头组件120采集到的图像信息,自主调节摄像头组件120的角度。需要注意的是,以上摄像头组件120的支架123的伸缩、折叠、转动设计皆为示例,并不限定于示例中的具体实施方式,不应当根据示例限缩本申请的保护范围。
摄像头组件120可以包括单个或双(多)个摄像头。在测距原理上,单目摄像头与双(多)目摄像头截然不同。双(多)目摄像头类似人类的双眼,主要通过同一时间两(多)个摄像头分别采集的两(多)幅图像的视差计算来确定距离。因而双(多)目摄像头可以在静止的时候不依赖其他传感设备进行深度估计,但其深度量程和精度受双目的基线(两个摄像头光心之间的距离)与分辨率所限,且视差的运算相当消耗资源,存在配置复杂,运算量大,耗能高的缺点。单目摄像头采集的图像帧是三维空间的二维投影,丢失了环境的深度信息,只有移动摄像头时,才能通过物体在图像上的运动形成的视差来计算距离的远近。这个缺点可以通过融合惯性测量单元采集的位姿数据得到一定程度的缓解。例如,单目视觉融合惯性测量系统(VINS-Mono)的算法,由于其成本低,体积小,功耗低,广泛应用例如在机器人,无人机等依赖定位的设备上。VINS-Mono可以根据摄像头拍摄到的前后帧之间特征点的偏移,再融合IMU数据计算出摄像头本身的移动和旋转,且不像GPS传感器受信号干扰等限制。因而,本申请中也不对摄像头组件120所包含的摄像头的具体数目做出严格限制。
除了普通的单、双(多)目摄像头以外,摄像头组件120也可以包括深度相机,又称RGB-D相机。RGB-D相机的最大特点是可以通过红外结构光或飞行时间(Time-of-Flight,ToF)原理,像激光传感器一样通过主动向物体发射光 并接收返回的光,测出物体与RGB-D相机之间的距离。相比于双(多)目摄像头通过软件计算,RGB-D相机通过物理的测量手段获得深度,节省了大量的计算。目前常用的RGB-D相机有微软公司的Kinect、英特尔公司的RealSense等。但受限于传感器的精度以及测量范围,深度相机还存在测量范围窄、噪声大、视野小、易受日光干扰、无法测量透射材质等诸多问题,因此通常应用于室内场景多于室外场景。若想在智能割草机110上应用RGB-D相机,则离不开与其他传感器的融合,并且适宜在日光照射并不强烈时使用。
惯性测量单元122至少包括加速度计和陀螺仪。加速度计是用来测量线性加速度的传感器。刚体相对于地球的静止状态时,线性加速度为0,但由于受到重力影响,使用加速度计测量刚体的线性加速度时,在竖直向下指向地心的轴线上会有读数约为9.81m/s 2;同理,在重力的作用下,当刚体上的加速度计的读数为0时,刚体处于自由落体状态,实际上有竖直向下9.81m/s 2的实际加速度。微电子机械系统(Micro-Electro-Mechanical System,MEMS)传感器被广泛应用在智能家电中,MEMS的加速度计的内部为弹簧-质量块的微结构,即在微弹簧-质量块的形变轴线上有加速度时,会使得微弹簧产生形变。使用微电子的方式测量微弹簧的形变,即可取得测量出轴线上的加速度。由于这样的结构,MEMS加速度计不能测量刚体的实际加速度,只能给出沿其测量轴的加速度测量值。在实际使用中,通常使用三套MEMS测量系统,共同组成正交的三轴测量系统,分别测量实际加速度在三个正交测量轴上的加速度分量,通过三个正交测量轴上的加速度分量解算出实际的加速度。陀螺仪是用来测量刚体旋转角速度的传感器。与MEMS加速度计类似的,MEMS陀螺仪也只能测量绕单个测量轴旋转的角速度分量,故使用时也是集成封装为具有三个正交测量轴的三轴陀螺仪,分别测量刚体旋转角速度在三个测量轴上的旋转分量,最终合成刚体的实际旋转角速度。通常的x-y-z坐标系中,规定绕参考坐标系x轴旋转的角为滚转角(roll),绕y轴旋转为俯仰角(pitch),绕z轴旋转为偏航角(yaw)。
一般情况下,一个惯性测量单元122包含三个单轴的加速度计和三个单轴的陀螺仪,测量物体在三维空间中的角速度和加速度,并以此解算出物体的姿态。进一步地,惯性测量单元122还可以包括磁力计。磁力计也叫地磁、磁感器,可用于测试磁场强度和方向,定位设备的方位,磁力计的原理跟指南针原理类似,可以测量出当前设备与东南西北四个方向上的夹角。六轴或九轴传感器作为集成化传感器模块,减少了电路板和整体空间。集成化传感器的数据准确度除了器件本身的精度外,还涉及到焊接装配后的矫正,以及针对不同应用的配套算法。合适的算法可以将来自多种传感器的数据融合,弥补了单个传感器在计算准确的位置和方向时的不足。一般而言,IMU传感器最好设置在物体重心;所以可选地,惯性测量单元122可以设置在智能割草机110的重心G上, 如图6所示。由于惯性测量单元122成本低廉,在一实施例中,也可以设置双惯性测量单元122来提高IMU数据的精度和稳定性,如图7所示。一方面,可以根据两个惯性测量单元122输出的差异得到目标物与运动参考系之间的相对角速度和相对加速度;另一方面,双双惯性测量单元122的冗余设计,通过实时监测两个惯性测量单元122的状态,在一个惯性测量单元122出现异常时,系统立即切换到另一个惯性测量单元122,保证定位的稳定性。
智能割草机110的系统图如图8所示,包括电源模块701,传感器模块702,控制模块703,以及驱动模块704和执行机构705。其中,电源模块701向驱动模块704、控制模块703和传感器模块702供电。为了适应智能割草机110的自主移动的工作需求,可选地,电源模块701包括电池包,提供直流电。传感器模块702至少包括:摄像头组件120和惯性测量单元122。智能割草机110还可能配备了其他传感器诸如GPS传感器、碰撞传感器、跌落传感器等,其他传感器采集的信息,亦可在运算处理时综合参考。控制模块703包括:输入模块141,用于接受传感器模块702采集或检测到的各种原始数据;处理器142,用于逻辑运算,可以为CPU或较高数据处理速度的微控制器;存储器144,用于存储各种数据和控制程序145;输出模块143,用于将控制指令转化为成电机驱动命令,并发送给电机驱动开关的驱动控制器161。驱动模块704包括电机驱动开关电路162,驱动控制器161和电机163。图8中所示的电机驱动开关电路162中使用的是最为常见的MOSFET开关,驱动控制器161通过向MOSFET开关的栅极施加电压以控制MOSFET开关的通断。MOSFET开关的有序通断致使电机绕组的有序导通,从而驱动电机163转动。图8仅展示了一种常见的电机驱动电路,本揭示并不限制电机驱动电路的具体实施方式。电机163的转动继而直接或者通过传动机构间接驱动执行机构705。智能割草机110的执行机构705主要包括刀片112和轮子114,可选地,刀片112和轮子114分别由独立的电机163驱动。可选地,左右两个后轮114也可以分别由独立的电机163驱动,从而实现更灵活的转弯和姿态调整。存储于存储器144上的控制程序145主要由两大模块组成,分别是定位建图模块146和功能应用模块147,其中定位建图模块146的是功能应用模块147的基础。定位建图模块146解决了智能割草机110在哪里,地图是什么,周边环境如何的基础问题,在智能割草机110移动时跟踪它的位置和构建对现实世界的理解,也就是即时定位与地图构建(SLAM);基于基础问题的解决,功能应用模块147才可能实现诸如割草区域边界划定、智能避障、路面识别与选择、导航组合、智能照明等具体的功能。当然,这个分类主要是便于理解和阐述,具体实现中,定位建图模块146和功能应用模块147并不是完全割裂的两个部分,实现功能应用模块147的过程本身也加深了对于现实世界的理解,其结果也会反馈给定位建图模块146,从而不断完善地图。
对于智能割草机110来说,即时定位与地图构建(SLAM)的实现,需要融合来自摄像头组件120的图像数据以及来自惯性测量单元122的位姿数据(又称,传感器融合)。其原因在于,以摄像头为例的视觉传感器在大多数纹理丰富的场景中效果很好,但是如果遇到玻璃,白墙等特征较少的场景,基本上无法工作。惯性测量单元虽然可以测得角速度和加速度,但为了得到物体位置或姿态必须对其进行时间积分,再者,基于微电子机械系统(MEMS)的惯性部件都不可避免的存在系统偏差,两者叠加,长时间下来有非常大的累积误差/漂移,但是对于短时间内的快速运动,其相对位移数据有很高的精度。快速运动中,摄像头会出现运动模糊,或者两帧之间重叠区域太少以至于无法进行特征匹配,有了惯性测量单元,即使在摄像头数据无效的那段时间内,也能得到一个较好的位姿估计。如果摄像头放在原地固定不动,那么根据视觉信息得到的位姿估计也是固定不动的。所以,慢速运动中,视觉数据可以有效地估计并修正惯性测量单元读数中的漂移,使得在慢速运动后的位姿估计依然有效。由此可见,视觉数据和IMU数据的互补性很强,融合摄像头组件120和惯性测量单元122两者的数据,能够提高定位和建图的精度和稳定性。
因为摄像头组件120和惯性测量单元122测量的数据类型(视觉测量物体在像素平面上投影的坐标,而惯性测量单元测量的是物体的三维加速度和转动角速度)和测量速率(视觉受制于帧率和图像处理速度,摄像头采样率只能达到几十帧每秒,惯性测量单元则可以轻松达到数百甚至上千帧每秒的采样率)都存在较大差异,对两者数据进行融合时无论是将惯性测量单元测量到的运动量转换为物体坐标(积分时偏差累积)或是把视觉量转变为运动量(微分时由于定位偏差导致计算出的加速度大幅度震荡)都会引入额外的误差,因此数据融合过程中需要引入检测与优化。一般来说,比起对视觉量做微分,融合中通常选择将惯性测量单元检测到的运动量积分为物体坐标后与视觉量进行融合。例如图9所示,整个流程图中的重点模块可以分解为以下部分:图像和IMU数据预处理、初始化、局部优化、建图、关键帧提取,回环检测和全局优化。各个模块的主要作用是:
图像和IMU数据预处理:对摄像头组件120采集的图像帧,提取特征点,利用KLT金字塔进行光流跟踪,为后面仅视觉初始化求解智能割草机110的位姿做准备。对惯性测量单元122采集的IMU数据,进行预积分,得到当前时刻的位姿、速度、旋转角,同时计算在后端优化中将要用到的相邻帧间的预积分增量,及预积分的协方差矩阵和雅可比矩阵。
初始化:初始化中,首先进行仅视觉的初始化,解算出智能割草机110的相对位姿;然后再与IMU预积分进行对齐求解初始化参数。
局部优化:对于滑动窗口进行视觉惯导局部优化,即将视觉约束和IMU约束放在一个大目标函数中进行非线性优化;这里的局部优化只优化当前帧及之前的n帧(例如,n为4)的窗口中的变量,局部优化输出较为精确的智能割草机110的位姿。
建图:通过得到的位姿,采用三角法计算相应特征点的深度,同步进行当前环境地图的重建。在SLAM模型中,地图是指所有路标点的集合。一旦确定了路标点的位置,就可以说完成了建图。
关键帧提取:关键帧就是筛选出来的能够记录下来但又避免冗余的图像帧,关键帧的选择标准是当前帧和上一帧之间的位移超过一定阈值或匹配的特征点数小于一定阈值。
回环检测:回环检测,又称闭环检测,是将前面检测的图像关键帧保存起来,当智能割草机110再回到原来经过的同一个地方,通过特征点的匹配关系,判断是否已经来过这里。
全局优化:全局优化是在发生回环检测时,利用视觉约束和IMU约束,再加上回环检测的约束,进行非线性优化。全局优化在局部优化的基础上进行,输出更为精确的智能割草机110的位姿,并对地图进行更新。
以上算法中,局部优化是滑动窗口内图像帧的优化,全局优化是所有关键帧的优化。仅用局部优化精度低,全局一致性差,但是速度快,IMU利用率高;仅用全局优化精度高,全局一致性好,但是速度慢,IMU利用率低;两者结合,可以优势互补,使得定位结果更加精准。输出的位姿是6个自由度(6DoF)的位姿,指的是智能割草机110在x-y-z方向上的三维运动(移动)加上俯仰/偏转/滚动(旋转)。在融合过程中,通过对齐IMU估计的位姿序列和视觉估计的位姿序列可以估计出智能割草机110的轨迹的真实尺度,而且IMU可以很好地预测出图像帧的位姿以及上一时刻特征点在下帧图像的位置,提高特征跟踪算法匹配速度和应对快速旋转的算法鲁棒性,最后IMU中加速度计提供的重力向量可以将估计的位置转为实际导航需要的世界坐标系中。
相较于全球定位系统(GPS)输出的精确度较差(以米为单位)的2D/3D位置,SLAM输出的精确度较高(以厘米为单位)的6个自由度的位姿,且不依赖于卫星信号的强弱、不受其他电磁信号的干扰。但是SLAM的过程,比起低运算低功耗的GPS定位,存在耗能大的问题,且由于智能割草机110在户外工作,摄像头传感器需要经常清理,如果清理不及时,可能会导致采集的图像帧模糊,不能提供有效的视觉数据。而且,为精确求解SLAM问题,智能割草机110需对相同区域进行重复观测,既实现闭环运动,因此系统不确定性不断累加直到闭环运动的发生。尤其当草坪广阔,周边空旷,缺乏特征参照物时, 智能割草机110进行大闭环运动,系统不确定性将可能导致闭环检查的失败,使得SLAM全局优化失败,定位偏差大。而在草坪广阔、周边空旷的环境里,卫星信号干扰少,通常GPS定位结果较为稳定与准确,且GPS目前已被普遍使用,价格低廉,因此智能割草机110也可以配备GPS传感器,采用GPS+SLAM组合导航。
由摄像头组件120、惯性测量单元122、GPS这三种传感器共同构成的组合定位方式可以参见图10,首先判断各传感器数据的可靠性,当所有传感器都失效时,停止行进并发出维修提醒;当有两种传感器失效时,使用剩下的一种传感器定位导航一个较短的时间段,例如3s,并在此期间持续检测失效传感器的数据有效性是否恢复,并将恢复的传感器数据加入后续定位导航计算中,如果在这个较短的时间段内,没有其他传感器恢复,则就地停止,并发出维修提醒;当仅有一种传感器失效,使用剩下的两种传感器进行定位导航,如果是GPS传感器失效,则使用AR融合视觉惯性SLAM进行定位导航,如果是摄像头失效,则用IMU数据验证GPS结果的自洽性,并对无法自洽的绝对定位数据进行滤波和修正,如果是IMU失效,则进行仅视觉的即时定位与地图构建(VSLAM)并在每处理完一帧图像后将VSLAM的结果与此时的GPS定位结果同时送入卡尔曼滤波器,并且持续检测失效传感器的数据有效性是否恢复,并将恢复的传感器数据加入后续定位导航计算中,如果割草工作完成,回到充电站后,仍有传感器未恢复,则发出异常提醒;当三个传感器都正常工作,利用GPS定位结果对AR融合视觉惯性SLAM生成的位姿和环境地图进行滤波校正。
在实际应用中,可以通过开源的AR软件包实现即时定位与地图构建(SLAM)的过程并调用不同的应用程序接口(API)实现丰富的功能,例如,ARCore是谷歌公司推出的搭建增强现实应用程序的软件平台,以融合图像数据和IMU数据实现即时定位与地图构建(SLAM)为基础,其三大功能将虚拟内容与通过摄像头看到的现实世界实现整合:1.运动跟踪:让机器可以理解和跟踪它相对于现实世界的位置和姿态;2.环境理解:让机器可以通过特征点聚类来检测各类表面(例如地面、桌面、墙壁等水平或垂直的表面),知晓其边界、大小和位置;3.光照估计:让机器可以估测环境当前的光照条件。除了谷歌公司的ARCore以外,苹果公司的ARKit,华为公司的AR Engine也是可以提供相似功能的软件包。
在一实施例中,智能割草机110的控制程序145的功能应用模块147可以根据图像帧中的二维平面的特征点,对照草地的纹理特征,区分草地与非草地,如果割草机当前所在的工作表面不是草地,停止刀片112转动;并且沿着草地和非草地的界线,结合ARCore等软件包的运动跟踪功能自动生成割草区域边界。进一步地,智能割草机110还可以配合互动显示界面,将构建的地图和割 草区域边界通过互动显示界面展示,并让用户确认修改。在确认过程中,为了便于用户更加直观与仔细地辨认边界线,可以设定两种辨认模式。一种辨认模式是在互动显示界面上展示二维地图上的割草区域的边界线,参见图11A,二维地图中,草坪222位于房子223和道路224之间,割草区域的边界线221以粗虚线表示。用户可以手动调整互动显示界面上的二维地图中的边界线221,例如,上下左右拖动某一段边界线221,或者删除,新增(用手指画)一段边界线221。如果用户愿意,用户也可以选择直接进入这种辨认模式,用手指在互动显示界面上的二维地图上画出所有的边界线221。另一种辨认模式是在互动显示界面上展示的摄像头组件120实时采集的现实图像上叠加虚拟围栏211图标,参见图11B,这种辨认模式下,智能割草机110自动生成的边界线会以虚拟围栏211图标的形式展现出来,用户可以手动调整互动显示界面上的现实图像上叠加的虚拟围栏211图标的位置,例如,将虚拟围栏211拉近或者推远,用户也可以删除,新增一段虚拟围栏211。并且,借助ARCore等软件包的运动跟踪功能,用户可以在摄像头组件120移动、切换角度的过程中,从各个角度检查虚拟围栏211的恰当性。相较于二维地图上的边界线221,叠加在现实图像上的虚拟围栏211图标更加直观与精确,便于用户依据具体的地面情况(例如,地形,植被类型)决定虚拟围栏211(即边界线)的精确位置。在确认过程中,用户可以将两种模式结合,先整体查看二维地图上的边界线是不是符合预期,对于不符合的进行调整,再对于有特别需要注意的边界处,查看现实图像上叠加的虚拟围栏211图标,对于有需要的进行精修。当割草区域边界由用户确认后,智能割草机110会将确认的边界线(包括虚拟围栏211)以的离散的锚点坐标的形式存储下来,该边界线(离散的锚点)的位置不会随智能割草机110的移动而变化,智能割草机110进行路径规划时,被限制在割草区域边界范围内工作。值得注意的是,所述的互动显示界面,既可以是智能割草机110上的部件,也可以是独立的显示设备,也可以是能与智能割草机110进行数据交互的手机、平板等移动终端的互动显示界面。
在一实施例中,智能割草机110的控制程序145的功能应用模块可以识别不同平面的材质。除了识别草坪与非草坪,智能割草机110还可以分析摄像头组件120采集的图像帧里的二维平面的特征点,根据平面纹理(即特征点分布规律)的不同,对照控制程序145预设的常见类型平面的纹理特征,识别出不同类型的地面(包括水面)。如果智能割草机110同时跨不同材质的地面行走,由于不同硬度、不同材质的地面对智能割草机110的轮子114的支撑力和摩擦力等不同,易导致智能割草机110颠簸、倾斜、方向打歪等问题。所以,当智能割草机110行走在非草坪上,例如,从一块草坪行走至另一块草坪的途中,且识别到正前方区域212内有特征点纹理不同(即硬度不同)的多种地面,则 选择在其中一个硬度较大的地面行走。参见图12,当智能割草机110检测到正前方区域212内有多种路面:水泥路面和泥土路面,其中,水泥路面位于左侧,泥土路面位于右侧,控制程序145的路面选择程序将规划路径,控制智能割草机110调整方向向左前方行驶直至检测到正前方区域128内都是水泥路面,再调整方向至原定方向行驶,这种路面选择有利于智能割草机110的行进控制、机器维护和安全保障。在路面选择程序中,可以借助ARCore等软件包的环境理解功能将不同材质的表面加以划分,也可以引入常见平面的纹理特征进行比对,从而辅助智能割草机110做出平面类型的判断。得到平面类型的判断后,再根据存储器中存储的地面类型-硬度对照表,选择硬度较大的地面并控制据此智能割草机110的行进方向。此外,通过与常见平面的纹理特征的比对,和平面与平面之间位置关系的判断,智能割草机110可以识别出水面、台阶、悬崖等可能使智能割草机110有跌落损坏风险的地形,使自动生成割草区域边界的功能更加完善。
在一实施例中,智能割草机110的控制程序145的功能应用模块还可以包括AI物体识别程序,从摄像头组件120获取的图像数据中,计算出障碍物的类别信息,从而实现智能割草机110的主动智能避障,对不同类别的障碍物采用不同的避障策略和恰当的避让距离,以兼顾割草覆盖度和避障成功率。如图13A-13B,对于一个框选物体,物体识别程序会输出一个类别及其对应的置信概率(C:P),其中,置信概率P的取值范围在0到1之间。控制程序145还可以包括一个置信阈值P1,例如,P1=0.7,采纳大于置信阈值的判断并进入避障策略的选择,如图13A中的(bird:0.99);而对于小于等于置信阈值的判断不予采纳,如图13B中的(bird:0.55)和(bird:0.45),此时如果障碍物与智能割草机110之间的距离D大于识别阈值距离D3,则继续正常行驶,并使用下一帧或者下n帧图片进行物体识别,等待控制程序145在智能割草机110靠近障碍物的过程中作出置信概率更高的物体识别判断,如果障碍物与智能割草机110之间的距离D小于等于识别阈值距离D3,则采取远距离避让策略,例如,以0.5m为距绕开障碍物行驶。
如图14所示,根据障碍物的类别采取不同避障策略,如果被检测到的障碍物是落叶、树枝、松果、甚至动物排泄物这些可以被刀片112切割的并且可以自然腐化的物质,则智能割草机110可以忽略这些障碍物,按照原定路径行驶。其中,虽然动物排泄物很可能弄脏智能割草机110的刀片112和底盘,但是和泥土类似,这些脏污在频繁的切割中多多少少会被清理掉,所以不需要躲避。如果被检测到的障碍物是动物,例如人、鸟、松鼠、狗等,那么可以预设第一阈值距离D1和第二阈值距离D2,当智能割草机110与检测到的动物障碍物之间的距离D大于第一阈值距离D1时,按照原定路径正常行驶;当智能割草机 110与检测到的动物障碍物之间的距离D小于等于第一阈值D1距离且大于第二阈值距离D2时,放慢速度行驶并发出警示音,提示人、鸟、松鼠、狗之类的动物发现智能割草机110并主动避让;当智能割草机110与检测到的动物障碍物之间的距离D小于等于第二阈值距离D2时,为了避免不慎对人和动物造成伤害,采取远距离避让策略。如果被检测到的障碍物是塑料玩具、铲子、绳索等可移动的(临时性的)小体积物品,为了避免不慎对这些小体积物品造成损害,智能割草机110可以保持一定距离避让,或者说,采取远距离避让策略,并向用户发出清理提示,提示用户清理草坪上的小体积物品。此外,对于动物障碍物和可移动的(临时性的)障碍物,智能割草机110可以在采取避让行动的同时储存障碍物的坐标和避让区域的坐标,在割草结束之前,如果摄像头组件120采集的图像数据显示该障碍物坐标处的障碍物已移除,则规划返回路径、补割之前避让的区域。如果被检测到的障碍物是树木、园林家具(例如,长椅、秋千)等不可移动的(永久性的)大体积物品,智能割草机110可以采取近距离避让策略,即放慢速度并尽量靠近障碍物,以尽量提高割草覆盖度,例如,以0.1m为距绕开障碍物行驶,或者,当智能割草机110配备了碰撞传感器时,慢速时轻微的碰撞对这些大体积物品不会造成什么损害,因此可以通过碰撞传感器实现最近距离的避让。同时,智能割草机110可以将实际避让路径存储起来并在处理器142空闲时进行优化,使得下一次避让同一障碍物时,保持割草覆盖度的同时提高规避路径的效率。
除了从摄像头组件120获取的图像中识别出真实的障碍物,用户也可以手动在互动显示界面上展示的摄像头组件120实时采集的现实图像上叠加虚拟障碍物215,并调整虚拟障碍物215的朝向、尺寸、大小,如图15所示。借助ARCore等软件包的运动跟踪功能,用户可以在摄像头组件120移动、角度变换的过程中,从各个角度检查虚拟障碍物215的恰当性。虚拟障碍物215的位置、大小信息将被以锚点的方式记录,该虚拟障碍物215不会随着智能割草机110的移动而改变。这样当智能割草机110在真实的工作区域行走时,可以根据自己当前位置,实时与虚拟障碍物215的位置信息进行比对,并进行避障,避免“碰撞”到虚拟障碍物215。虚拟障碍物215的功能方便了用户根据具体情况定制特殊的割草范围,例如,草坪上有一片没有围挡的花圃,这片花圃在有的季节看起来就像一块普通的草坪,为了避免智能割草机在割草时误踏入这片花圃,用户可以在互动显示界面上展示的摄像头组件120实时采集的花圃图像上添加一个底面积与实际花圃面积相同的虚拟障碍物215。再比如,草坪上有一个狗屋,体积较大的狗屋会被如上所述的控制程序145自动判定为不可移动的大体积物品,并采取近距离避障策略以提高割草覆盖度。但考虑到狗有可能会在狗屋里,为了避免智能割草机110的运行对狗造成打扰与惊吓,用户可以在互动显示界 面上展示的摄像头组件120实时采集的狗屋图像的周边,叠加虚拟障碍物215或者虚拟围栏211以围出一圈面积较大的非工作区域。进一步地,由于ARCore会随着时间的推移,跟踪例如平面和特征点的可跟踪对象,因此也可以将虚拟障碍物锚定到特定的可跟踪对象,确保虚拟障碍物与可跟踪对象之间的关系保持稳定。例如,将虚拟障碍物215锚定到狗屋上,那么后期挪动狗屋时,虚拟障碍物215会跟踪狗屋的移动,而不需要用户重新设定虚拟障碍物。
在一实施例中,智能割草机110的控制程序145的功能应用模块可以检测周围环境的光照状态。借助ARCore等软件包的光照估计功能,智能割草机110可以得知周围环境的光照强度L并据此调节智能割草机110的照明灯119。控制程序145可以预设第一光照强度阈值L1,当周围环境的光照强度L小于第一光照强度阈值L1时,智能割草机110开启照明灯119补光。除此之外,还可以设定不同的工作模式,根据光照强度及方向,合理安排割草的时间并选择不同的工作模式。例如,当检测到周围环境的光照非常弱,例如,当周围环境的光照强度L小于第二光照强度阈值L2时(L2<L1),如果用户没有命令立即割草,则返回充电站,进入充电模式或者待机模式,因为没有光照时,草坪最容易被真菌和病虫害损害;如果用户命令立即割草,则开启照明灯119并以静音模式割草,以减少割草机噪声对安静夜晚的打扰。当检测到周围环境的光照十分强烈,例如,当周围环境的光照强度L大于第三光照强度阈值L3时(L3>L1),如果用户没有命令在此时割草,则返回充电站,进入充电模式或者待机模式,因为强烈的阳光容易将断草晒死;如果用户命令立即割草,则以快速模式割草,减少割草机暴露在烈日下的时间以减少UV照射造成的老化。当检测到周围环境的光照适宜,例如,当周围环境的光照强度L大于等于第一光照强度阈值L1且小于等于第三光照强度阈值L3时,可以以常规模式割草。
除了环境的光照状态,摄像头组件120所采集的图像数据,结合AI物体识别运算也可以作为割草时间和模式选择的判断依据。例如,当检测到植被上有露水时,如果用户没有命令立即割草,则返回充电站,进入充电模式或者待机模式,因为露水会降低切割效率,甚至引起堵转,此外,潮湿的草坪等容易留下车辙,影响美观。当检测到植被上有霜冻或冰雪,如果用户没有命令立即割草,则返回充电站,进入充电模式或者待机模式,因为寒冷的天气也不利于断草切口的恢复。
值得一提的是,诸如ARCore的AR软件包本身通常并没有良好的物体识别能力,比如,ARCore的环境理解功能本身是通过平面上的特征点聚类来检测、区分、划定2D表面,而不是通过物体识别去判断物体的表面是什么,即使智能割草机110的控制程序145引入了一些常见类型的平面的纹理特征以辅助平面类型判断,但是这与真正的物体识别还有很大差距。所以在实际运用中,障碍 物识别、环境识别等功能的实现还需要依赖其他具备物体识别功能的AI软件包,例如,谷歌公司的TensorFlow,其中,TensorFlow Lite是一组工具,可帮助开发者在移动设备、嵌入式设备和IoT设备上运行TensorFlow模型。它支持设备端机器学习推断(无需在设备与服务器之间来回发送数据),延迟较低,并且二进制文件很小。当然,智能割草机110也可以包括无线网络连接设备150,将物体识别的工作交给云端服务器200,由于云端服务器200拥有强大的云存储和云计算功能,可以使用TensorFlow框架不断完善训练集和模型,从而给出更加精准的判断。
事实上,当智能割草机110包括无线网络连接设备150时,控制程序145可以将视觉数据与IMU数据的融合运算,乃至整个定位建图模块146和功能应用模块147的运算任务都发送给云端服务器200进行。云端服务器200根据预设程序对上传的数据进行融合、定位、建图、判断并生成导航和割草指令。这时,智能割草机110的控制程序145,在本地只用负责从摄像头120和惯性测量单元122获取数据、对获取的数据进行预处理和上传,以及从云端服务器200下载指令和输出,而不用进行运算复杂度高的AR和/或AI运算,降低了对于智能割草机110的处理器142的要求,节约了芯片成本。类似地,当智能割草机110包括无线网络连接设备150时,控制程序145也可以将视觉数据与IMU数据的融合运算,乃至整个定位建图模块146和功能应用模块147的运算任务都发送给能与智能割草机110进行无线数据传输的其他设备,例如,移动终端的应用程序进行。这时,智能割草机110的控制程序145可以理解为提供了一个应用程序接口(API),实现智能割草机110与移动终端的通信功能并且定义智能割草机110与移动终端的应用程序之间的数据通信协议与格式等。通过这个应用程序接口,移动终端的应用程序能够获取来自智能割草机110的图像和位姿数据,并根据预设程序,在经过一系列运算复杂度较高的AR和/或AI运算后,生成导航和割草指令数据,再通过这个应用程序接口将指令数据回传给智能割草机110,由此实现了移动终端对智能割草机110的控制。移动终端的应用程序还可以提供可供用户选择、修改的参数,例如,割草时间偏好,割草高度偏好等,方便用户根据自己的需求获取对智能割草机110的定制化的智能控制。因此,在智能割草机110上预留应用程序接口,不仅降低了对于智能割草机110的处理器142的要求,节约了芯片成本,而且方便了用户通过其他设备实现对智能割草机110的控制。
在另一实施例中,用于采集图像信息的摄像头还可以被安装于环境场景。例如,参见图16,智能割草机210本身不具备摄像头,作为替代地,一个或多个摄像头190被安装在屋顶和/或充电桩180的顶部。由于不需要安装支架或者预留收纳腔,智能割草机210的机壳构造更加灵活,例如,图16所示的智能割 草机210使用了动力头的外观设计,现代靓丽。设置于场景中的一个或多个摄像头190具备无线连接设备191,用于与智能割草机210无线连接,或者是连接到无线网络,例如,用户的家庭wifi网络,以将采集到的图像数据上传到云端服务器200。一个或多个摄像头190可以采用市面常见上的可旋转摄像头,从而获得更宽广的视角和更精确的定位。智能割草机210的主要部件与智能割草机110相似,这里对于两者相同的组件不再重复,两者的不同主要在于:智能割草机210没有直接设置于主体上或者通过支架等连接机构安装到主体上从而跟随智能割草机210同步移动的摄像头;而且,智能割草机210具备无线连接设备250,可以接收一个或多个摄像头190发送的图像数据,或者可以接入互联网与云端服务器200实现数据交互。值得注意的是,对于上一实施例中的智能割草机110而言,由于传感器(摄像头组件120、惯性测量单元122等)集成至割草机主体113,传感器与控制模块之间采用有线连接,所以无线连接设备150不是必须,但是出于提升运算能力、升级便利、运用大数据、降低芯片成本等考虑,智能割草机110也可以具备例如无线网卡、移动网络接收器等无线连接设备150。而对于本实施例中的智能割草机210而言,由于摄像头190与智能割草机210主体分离,彼此之间的数据传输依赖于无线连接,因此,一个或多个摄像头190与智能割草机210都依赖无线连接设备(摄像头190包括无线连接设备191,智能割草机210包括无线连接设备250)来实现无线传输,例如,一个或多个摄像头190分别将采集到的图像数据发送给智能割草机210进行运算处理。
智能割草机210的控制模块的高阶架构可以参照上一实施例的智能割草机110,但是,由于设置在场景中的一个或多个摄像头190采集的图像信息与位于智能割草机110上的摄像头组件120采集的图像信息的视角不同,智能割草机210的控制程序245也与智能割草机110的控制程序145不同:智能割草机210的控制程序245主要利用视觉目标跟踪算法估计智能割草机210在摄像头可见区域中的位置,并据此生成导航和割草指令。一个或多个摄像头190可以将原始的图像数据或者经过一定处理的数据发送给智能割草机210。当仅有一个摄像头190时,智能割草机210的控制程序245采用单视角目标跟踪算法估计自身的位置;当有多个摄像头190时,智能割草机210的控制程序245采用多视角目标跟踪算法估计自身的位置。多视角目标跟踪算法包括集中式的多视角目标跟踪算法和分布式的多视角目标跟踪算法:集中式技术下,多个摄像头190和智能割草机之间的数据传递方式如图17A;分布式技术下,多个摄像头190和智能割草机之间的数据传递方式如图17B。图17A中的智能割草机210,实际上担任了集中式的多视角目标跟踪算法中的融合中心(Fusion Center)的角色,各摄像头190分别将采集到的图像数据发送给智能割草机210进行运算处理。图17B中,各摄像头190在本地完成视频数据的采集和处理,并通过网络与其它视 角的摄像头190进行信息的交互和融合。例如,每个摄像头190融合其自身采集的图像计算得到的位置估计和从相邻摄像头190获得的位置估计得到新的位置估计,并将新的位置估计发送给下一个相邻摄像头190,直到达到期望的精度,再由达到期望的精度的摄像头190将位置估计发送给智能割草机210。智能割草机210的控制程序245根据得到的位置估计,结合自身其他传感器的信息(如果有的话),生成导航和割草指令。与集中式技术相比,分布式技术具有带宽需求低、系统功耗小、实时性高、可靠性强等优势。分布式的多视角目标跟踪算法,降低了对于智能割草机210的处理器芯片的要求,但是对于摄像头190的数据处理能力的要求有所提升,适用于草坪较大,场景较为复杂时使用摄像头190数目较多的情况;而集中式的多视角目标跟踪算法,在草坪较小,场景较为简单时使用摄像头190数目较少的情况。
或者,一个或多个摄像头190与智能割草机210都具备例如无线网卡、移动网络接收器等可以接入互联网的无线连接设备191,并通过云端服务器200实现来自多台设备的数据的整合计算。一个或多个摄像头190、智能割草机210与云端服务器200可以以图17C的架构进行数据交互。一个或多个摄像头190各自将采集到的原始的图像数据或者经过预处理的数据上传到云端服务器200。云端服务器200根据得到的一个或多个摄像头190的数据,选择单视角目标跟踪算法或者多视角目标跟踪算法,并在计算得到智能割草机210的实时位置估计后,将对应的定位估计和地图信息发送给智能割草机210,再由智能割草机210的控制程序245综合其他传感器的数据(如果有的话),生成导航和割草指令;或者,智能割草机210也将自身的其他传感器采集到的数据通过无线网络上传至云端服务器200,云端服务器200在计算得到智能割草机210的实时位置估计后,再根据储存在云端服务器200的预设程序和智能割草机210上传的其他传感器数据,直接做出对应当前情形的导航和割草行为指令并发送给智能割草机210。
本申请还提出一种成本较低的解决方案,即智能割草系统100,包括智能割草机310和移动终端130。移动终端130可以是手机,平板电脑,或者手环等具备摄像头,惯性测量单元(IMU),和计算单元的设备。由于移动终端130提供了摄像头和惯性测量单元,智能割草机310本身则不需要包括摄像头或者惯性测量单元,降低了生产成本。智能割草机310和移动终端130之间可以通过有线通信或者无线通信,实现数据传输。如图18所示,智能割草系统100可以采用智能割草机310,包括:切割刀片312,用于切割草;主体313,用于安装所述切割刀片312;车轮314,可以转动并且支撑主体313;固定装置316,设置于主体313,用于将移动终端130固定安装至智能割草机310;接口311,设置于主体313,用于与移动终端130的接口131配合形成有线连接,实现数据传 输;控制器(未示出),与接口311电连接,当接口311与移动终端130连接时,根据接口311接收的指令数据控制智能割草机310的行为。
在一实施例中,固定装置316的结构参见图19A-19C,图19A中,该固定装置316包括第一挡板381,第二挡板382,支撑板383,支撑杆384和底座385。其中,第一挡板381和第二挡板382平行,分别位于支撑板383的两端且从支撑板383的同侧向外突出并形成相对的倒钩,从而便于将手机、平板等移动终端130固定在第一挡板381和第二挡板382之间。具体地,支撑板383、第一挡板381、第二挡板382的与手机、平板等移动终端130接触的表面还布有硅胶内衬,增大支撑板383、第一挡板381、第二挡板382与手机、平板等移动终端130之间的摩擦力,防止手机、平板等移动终端130在智能割草机310行进过程中因诸如地面不平的原因造成的颠簸而抖落。同时,硅胶内衬也具有一定的弹性,可以在颠簸过程中缓冲手机、平板等移动终端130和支撑板383、第一挡板381、第二挡板382之间的碰撞,减少手机、平板等移动终端130和支撑板383、第一挡板381、第二挡板382的磨损,提升使用寿命。此文不对支撑板383和第一挡板381、第二挡板382的内衬材料加以限制,只要起到防滑和缓冲作用,各种硅胶、橡胶等材料均可。
如图19B-19C所示,未安装移动终端130时,第一挡板381和第二挡板382之间的距离为L1,例如,为适配市面上常见的手机、平板等移动终端130的尺寸(目前绝大部分手机、平板等移动终端在4英寸至12英寸之间),L1可以为10cm,并且,第一挡板381和第二挡板382之间的距离可以改变,换句话说,第二挡板382可以相对于第一挡板381平移,或者第一挡板381可以相对于第二挡板382平移,使得两挡板之间的距离发生变化,从而牢固地夹持不同尺寸的手机、平板等移动终端130。例如,通过在支撑板383背面设置拉簧386和延长杆387,可以使得第一挡板381向远离或靠近第二挡板382的方向平移。为便于描述,将第一挡板381向远离第二挡板382的方向平移的运动称为向外拉伸,将第一挡板381向靠近第二挡板382的方向平移的运动称为向内收缩。具体地,第二挡板382固定连接至支撑板383,而第一挡板381固定连接至支撑板383背面的延长杆387的远离第二挡板382的顶端。拉簧386的一端连接第二挡板382,另一端连接延长杆387的靠近第二挡板382的一端,因此拉簧386的拉力始终将延长杆387向第二挡板382拉动,即使延长杆387向内收缩。由支撑板383、伸缩机构和第一、第二挡板382构成的整体也可以称为夹头。
未安装移动终端130时,拉簧386将延长杆387拉向第二挡板382,直至第一挡板381抵触支撑板383的端部,此时,第一挡板381在拉簧386的拉力和支撑板383的端部的接触面的反作用力下,固定在与支撑板383的端部相抵触的第一位置。需要安装手机、平板等移动终端130时,需要用户首先抓住第一 挡板381将延长杆387向外拉伸,然后将手机、平板等移动终端130平放在支撑板383上、第一挡板381和第二挡板382之间,再松开第一挡板381,使第一挡板381和延长杆387,在拉簧386拉力的作用下,向内收缩,直至第一挡板381抵触移动终端130的边缘,此时第一挡板381在拉簧386的拉力和移动终端130的边缘的接触面的反作用力下,固定在与移动终端130的边缘相抵触的第二位置。可以理解的是,夹持不同尺寸的移动终端130时,会有多个具体位置不完全相同的第二位置,在此我们统称这些固定在与移动终端130的边缘相抵触的位置为第一挡板381的第二位置。第一挡板381和第二挡板382之间的最大距离是L2,L2和L1的差距是ΔL,ΔL表示了该固定装置316的夹头的伸缩量。举例来说,L2可以是19cm,那么ΔL就是9cm,这个移动终端130的固定装置316可以固定住宽度或长度为10cm到19cm之间的手机、平板等移动终端130。实际上,在实际使用中,如果移动终端130尺寸较小,例如手机,则可以将手机竖向夹在第一挡板381和第二挡板382之间,即第一挡板381和第二挡板382夹住手机较长的边;如果移动终端130尺寸较大,例如平板电脑,则可以将平板电脑横向夹在第一挡板381和第二挡板382之间,即第一挡板381和第二挡板382夹住平板电脑较短的边。目前市面上有诸多在售的夹头,结构虽有所差异,但其中不少都能牢固夹持尺寸不同的移动终端130,由于使用广泛,价格低廉,本申请不对夹头的具体结构做出限制,只要是能固定夹持尺寸不同的移动终端130即可。
The base 385 of the fixing device 316 may be fastened directly to the surface of the main body 313 of the smart mower 310 with screws, nuts, or similar fasteners, as shown in Fig. 18. This design requires little modification of an existing smart mower and costs little, but falls short in appearance and tidiness. Alternatively, as shown in Fig. 20, the main body 313 of the smart mower 310 has an inwardly recessed receiving cavity 315 whose top opening is in the upper surface of the main body 313; the base 385 of the fixing device 316 is fastened inside the receiving cavity 315 with screws and nuts, and the top of the cavity carries a cover plate 318 that can be opened and closed. For example, the cover plate 318 may be hinged at one side of the cavity's top opening, with a first position when open and a second position when closed; or it may consist of a sliding cover and rails, with a first position covering the top opening of the cavity 315 and a second position exposing it. The benefit of the receiving cavity 315 and cover plate 318 is that when the smart mower 310 is not in use, the fixing device 316 is stowed inside the main body 313: this is tidier and more attractive, and it also protects against water, dust, and sunlight, reducing how often the fixing device 316 must be cleaned and slowing its aging. As shown in Fig. 20, the interface 311 may also be placed on the inner wall of the receiving cavity 315, keeping out dust, water, and other intrusions. The specific form of the receiving cavity 315 and cover plate 318 is not limited in this application. Likewise, the location of the receiving cavity 315 may be chosen according to the positions of the motor, PCB, and other components of the smart mower 310, so that image information around the mower can be captured conveniently while the impact on the layout of the components inside the main body 313 is minimized; this is not limited in this application, and Fig. 20 is only an example.
Outside working hours, the fixing device 316 for the mobile terminal 130 is stowed inside the main body 313 of the smart mower 310; therefore, before the smart mower 310 works with the mobile terminal 130 on board, the clamp head of the fixing device 316 must be extended out of the main body 313 so that the camera 132 of the mobile terminal 130 can capture image information around the smart mower 310. To this end, the support rod 384 of the fixing device 316 can be made telescopic, for example using the inner/outer double-tube structure of the bracket 123 in the first embodiment. Before the smart mower 310 operates with the mobile terminal 130 on board, the inner tube of the support rod 384 is pulled outward, lengthening the whole rod and raising the clamp head above the main body 313. When the smart mower 310 is idle, or operates without the mobile terminal 130, the inner tube is pushed back in, shortening the whole rod so that it stows entirely inside the receiving cavity 315 of the smart mower 310. This application does not limit the specific telescoping structure of the support rod 384; any structure that can extend and retract will do, and other structures achieving a similar effect, such as a flexible or foldable support rod 384, also fall within the scope of this application.
As can also be seen from Fig. 19A, the support rod 384 and the clamp head are rotatably connected through a damped-hinge or ball-joint structure 388. The benefit is that when the mobile terminal 130 is mounted on the smart mower 310, the user can freely adjust the angle of the clamp head according to the actual working conditions and the specific location of the camera 132 of the mobile terminal 130; that is, the angle at which the mobile terminal 130 is fixed, and hence the angle at which its camera 132 captures image information of the surroundings of the smart mower 310. This application does not limit the specific structure of the rotary connection, as long as rotation is achieved. In some examples, the support rod 384 consists of several short rods connected in series, which can fold to save space while the hinge points between the short rods can be used to adjust the angle of the clamp head. With the fixing device 316, once the mobile terminal 130 is fixed to the main body 313 of the smart mower 310, its position is stationary relative to the smart mower 310; the image information of the surroundings captured by the camera 132 of the mobile terminal 130 can therefore be regarded as the image information of the surroundings of the smart mower 310, and the pose information captured by the inertial measurement unit 133 of the mobile terminal 130 as the pose information of the smart mower 310.
Referring to Figs. 21A-21C, the mobile terminal 130 includes: a camera 132 for capturing image data of the environment surrounding the smart mower 310; an inertial measurement unit 133 for detecting position and attitude data of the smart mower 310; an interface 131, used at least for data transfer and optionally for charging; a memory (not shown) storing an application 135 that controls the operation of the smart mower 310; and a processor (not shown), electrically connected to the camera 132 and the inertial measurement unit 133, which invokes the application 135 to compute on the information collected by the camera 132 and the inertial measurement unit 133. The processor can invoke the application 135 to fuse the data obtained by the camera 132 and the inertial measurement unit 133 to perform simultaneous localization and mapping (SLAM) for the smart mower 310, and to generate navigation and mowing instructions from preset logic and real-time data to control the behavior of the smart mower 310. Among common phones and tablets on the market, some mobile terminals 130 have a monocular camera 132 and others a binocular (or multi-lens) camera 132. Their ranging principles differ fundamentally. A binocular (multi-lens) camera 132, like a pair of human eyes, determines distance mainly by computing the disparity between two images; it can estimate depth while stationary, which makes the data more accurate, but disparity computation is resource-intensive, with a heavy computational load and high energy consumption. The image frames captured by a monocular camera 132 lose the depth information of the environment, but this drawback can be mitigated to some degree by fusing the pose data collected by the inertial measurement unit 133: for example, the camera's own translation and rotation can be computed from the displacement of feature points between consecutive frames captured by the monocular camera 132, fused with the pose data from the inertial measurement unit 133. This application therefore places no strict limit on the number of cameras 132 of the mobile terminal 130.
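The monocular-plus-IMU mitigation described above can be illustrated with a toy scale-recovery step: monocular vision yields camera translation only up to an unknown scale, while double-integrating IMU acceleration over the same interval gives a metric displacement, so the ratio of the two norms recovers the metric scale. This is a simplified sketch under idealized assumptions (no noise, no sensor bias); it is not the actual fusion pipeline used by ARKit or ARCore.

```python
import math

def metric_scale(t_vision, t_imu_metric):
    """Recover the metric scale factor for a monocular translation.

    t_vision: camera translation from feature-point tracking between two
              frames, known only up to scale (arbitrary-norm vector).
    t_imu_metric: displacement over the same interval obtained by
              double-integrating IMU acceleration (metres).
    """
    norm_v = math.sqrt(sum(c * c for c in t_vision))
    norm_i = math.sqrt(sum(c * c for c in t_imu_metric))
    # Multiply the up-to-scale vision translation by this factor to
    # express it in metres.
    return norm_i / norm_v
```

If vision reports a unit-norm translation while the IMU says the camera moved 0.5 m, the scale factor is 0.5.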
The inertial measurement unit 133 includes at least an accelerometer and a gyroscope, and may further include a magnetometer. Taking an Android phone as an example, its IMU data comprise nine values in total: accelerometer (3 axes), gyroscope (3 axes), and magnetometer (3 axes). Normally an IMU is placed at the center of gravity of the object, but the inertial measurement unit 133 of a mobile terminal 130 held in the fixing device 316 is generally separated from the center of gravity G of the smart mower 310 by several tens of centimeters (for example, 30 cm) in a straight line. To mitigate this, a sensor position offset compensation parameter can be set when the application 135 processes the IMU data; the parameter may comprise three axes of data (X, Y, Z). X is the fore-aft distance between the inertial measurement unit 133 of the mobile terminal 130 and the center of gravity G of the smart mower 310: a positive value means G is in front of the inertial measurement unit 133, and a negative value means G is behind it. Y is the left-right distance between the inertial measurement unit 133 and G: a positive value means G is to the right of the inertial measurement unit 133, and a negative value means G is to its left. Z is the vertical distance between the inertial measurement unit 133 and G: a positive value means G is below the inertial measurement unit 133, and a negative value means G is above it.
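One way the offset parameter (X, Y, Z) can be applied is classic lever-arm compensation from rigid-body kinematics: the acceleration at the mower's center of gravity G differs from the accelerometer reading at the IMU by the tangential and centripetal terms, a_G = a_IMU + α × r + ω × (ω × r), where r is the offset vector from the IMU to G. The sketch below is an illustrative assumption about how the application 135 might consume the parameter; the text itself only names the parameter.

```python
def cross(a, b):
    """3-D cross product of two vectors given as 3-element lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def accel_at_cg(a_imu, omega, alpha, r):
    """Lever-arm compensation: acceleration at the center of gravity G.

    a_imu : accelerometer reading at the IMU location (m/s^2)
    omega : angular velocity from the gyroscope (rad/s)
    alpha : angular acceleration (rad/s^2), e.g. a differentiated gyro signal
    r     : offset vector (X, Y, Z) from the IMU to G, in metres
    """
    tangential = cross(alpha, r)               # alpha x r
    centripetal = cross(omega, cross(omega, r))  # omega x (omega x r)
    return [a + t + c for a, t, c in zip(a_imu, tangential, centripetal)]
```

For a mower yawing at 1 rad/s with G located 0.3 m ahead of the IMU and no linear or angular acceleration, the point G experiences a 0.3 m/s² centripetal acceleration toward the rotation axis.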
Besides the camera 132 and the inertial measurement unit 133, the mobile terminal 130 may include other sensors, such as a GPS sensor, with the corresponding sensor-fusion logic preset in the application 135. The visual-inertial SLAM process of the application 135, and the processes involving further sensor fusion, including application functions such as mowing-area boundary generation, surface selection, intelligent obstacle avoidance, virtual-fence and virtual-obstacle setting, intelligent lighting, and mowing-time selection, are similar to the control program 145 of the smart mower 110 and are not repeated here.
There are many ways to implement communication between the smart mower 310 and the mobile terminal 130; see Figs. 22A-22E. This application does not limit the specific communication method between the smart mower 310 and the mobile terminal 130. For example, a male Type-C connector may be provided on the second baffle 382 of the fixing device 316; when the mobile terminal 130 is fixed in the fixing device 316, its female Type-C port plugs onto the male Type-C connector, enabling data transfer between the mobile terminal 130 and the smart mower 310. This approach, however, restricts the connector type: if the connector type of the user's mobile terminal 130 differs from the type preset on the smart mower 310, an adapter is needed. Connecting the two interfaces with a separate data cable solves this mismatch. As shown in Fig. 22A, the smart mower 310 has a USB data interface 311. If the mobile terminal 130 has a Type-C data interface 131, a USB-to-Type-C cable, with one end in the mower's USB data interface 311 and the other in the terminal's Type-C data interface 131, enables data transfer between the mobile terminal 130 and the smart mower 310; if the user's mobile terminal 130 instead has an Android data interface 131, a USB-to-Android data cable is used in the same way. A separate data cable has the further benefit of accommodating the extension and rotation of the fixing device 316. Moreover, the chargers of phones, tablets, and similar mobile terminals 130 commonly use a USB port; that is, the charger end of the terminal's charging cable is almost always a USB plug. This not only makes the mower's USB data interface 311 broadly compatible, but also means that this data cable is simply the terminal's own charging cable, so the user can supply it, further lowering the cost of the smart mower 310.
With a wired connection, the application 135 on the mobile terminal 130 retrieves the image data captured by the camera 132 and the pose data captured by the inertial measurement unit 133, and fuses the two kinds of data to perform simultaneous localization and mapping (SLAM). This process can call open-source AR packages: an application 135 developed for Apple mobile terminals 130 can call the ARKit toolkit, and one developed for Android mobile terminals 130 can call the ARCore toolkit. From the SLAM output, the application 135 generates specific navigation and mowing instructions according to a preset program and returns them to the smart mower 310, as shown by the solid arrows in Fig. 22A. The preset program may include several application functions, such as automatic mowing-boundary generation, virtual-fence setting, surface recognition, intelligent obstacle avoidance, and virtual-obstacle setting; it may also call packages with object-recognition capability, such as TensorFlow Lite, to implement object recognition. Alternatively, since the smart mower 310 itself may include other sensors such as collision sensors or drop sensors, it can send the data collected by these sensors to the mobile terminal 130, as shown by the dashed arrows in Fig. 22A; after the application 135 consolidates them, it generates specific navigation and mowing instructions according to the preset program and delivers them to the smart mower 310 over the wired link, as shown by the solid arrows in Fig. 22A.
Further, on top of the above communication between the smart mower 310 and the mobile terminal 130, as shown in Fig. 22B, the mobile terminal 130 may also include a wireless network connection device 134 capable of exchanging data with a cloud server 200. The application 135 then need not perform all computation locally on the mobile terminal 130; part or all of it can be done on the cloud server 200. For example, during simultaneous localization and mapping (SLAM), all the image data captured by the camera 132 and all the angular-velocity and acceleration data captured by the inertial measurement unit 133 can be uploaded to the cloud server 200 for fusion; or the data can first be preprocessed locally on the mobile terminal 130, for example by extracting feature points from image frames, and the preprocessed data then sent to the cloud server 200 for fusion, reducing the dependence on wireless bandwidth. Beyond SLAM, the cloud server 200 can run other program logic; with its cloud-computing and cloud-storage capacity, the cloud server 200 has advantages in application functions such as obstacle recognition, boundary recognition, surface recognition, and path planning. The mobile terminal 130 can also upload the user's settings and preferences to the cloud server 200, such as cutting-height preference and lawn anchor points; the cloud server 200 can also fetch relevant information such as weather and season from the Internet on its own, and thereby generate navigation and mowing instructions to control the behavior of the smart mower 310. After the application 135 on the mobile terminal 130 obtains the instructions from the cloud server 200, it delivers them to the smart mower 310 over the wired link.
Alternatively, wireless data transfer can be used between the smart mower 310 and the mobile terminal 130. As shown in Fig. 22C, because the mobile terminal 130 rides on the smart mower 310 while it works, the two are always close together, so short-range wireless communication such as Bluetooth, ZigBee, or NFC can be used between them. This scheme requires the smart mower 310 and the mobile terminal 130 to have matching short-range wireless devices, for example Bluetooth on both. Compared with the wired communication of Figs. 22A-22B, the short-range wireless scheme essentially just replaces the wired interface between the smart mower 310 and the mobile terminal 130 with a wireless one; everything else (transferred content, system architecture, and so on) is unchanged.
Alternatively, the mobile terminal 130 has a wireless network connection device 134 such as a wireless network card or WLAN module, and the smart mower 310 has a wireless network connection device 350 of the same kind, as shown in Fig. 22D. When the user's lawn has full wireless network coverage, both the mobile terminal 130 and the smart mower 310 can connect to the cloud server 200 over the wireless network. The application 135 on the mobile terminal 130 can upload all the image data captured by the camera 132 and all the angular-velocity and acceleration data captured by the inertial measurement unit 133 to the cloud server 200 for AR fusion; or it can preprocess the data locally, for example by extracting feature points, and then send the preprocessed data to the cloud server 200 for AR fusion, reducing the dependence on communication bandwidth. Meanwhile, the smart mower 310 can also upload information collected by its other sensors, such as collision or drop sensors, if present (indicated by dashed arrows in Fig. 22D), to the cloud server 200, and this information can likewise enter the cloud server's computation and decision process as parameters. After the cloud server 200 produces navigation and mowing instructions from the uploaded data and its built-in program, it returns the result directly to the smart mower 310. Compared with Fig. 22B, where the cloud server 200 returns its computation result to the mobile terminal 130, which relays it to the smart mower 310, returning the result directly to the smart mower 310 has the benefit of reduced latency.
When the user's lawn cannot be fully covered by the wireless network, for example because it is too large, the above scheme has a fallback implementation, shown in Fig. 22E. Mobile terminals 130 such as phones commonly have mobile-network reception 137 and a Wi-Fi hotspot 138, so the mobile terminal 130 can convert the mobile-network signal it receives into a Wi-Fi signal and broadcast it; the smart mower 310, equipped with a wireless network connection device 350 such as a wireless network card or WLAN module, can then communicate wirelessly with the cloud server 200 through the Wi-Fi network emitted by the hotspot 138 of the mobile terminal 130. When the smart mower 310 and the mobile terminal 130 are not on the same Wi-Fi network, for example when the smart mower 310 goes online through the terminal's hotspot while the mobile terminal 130 itself uses the mobile network, the cloud server 200 may be unable to pair the smart mower 310 and the mobile terminal 130 automatically. In that case, the application 135 and the smart mower 310 can attach the ID of the smart mower 310 as an identifier when uploading data, and the smart mower 310 can use its ID as a credential when fetching instructions.
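The ID-based pairing just described can be sketched as tagging every upload with the mower's serial number, so the cloud server can associate uploads from the terminal and the mower even when they arrive from different networks. The payload fields here (`mower_id`, `source`, `data`) and the serial-number format are illustrative assumptions, not part of the described protocol.

```python
import json

MOWER_ID = "SM310-0001"  # hypothetical serial number used as the pairing key

def tag_upload(source: str, data: dict) -> str:
    """Wrap a telemetry payload with the mower ID before uploading."""
    return json.dumps({"mower_id": MOWER_ID, "source": source, "data": data})

def pair(uploads):
    """Cloud-side sketch: group uploads by mower_id, regardless of which
    network (hotspot or mobile) each upload arrived from."""
    groups = {}
    for raw in uploads:
        msg = json.loads(raw)
        groups.setdefault(msg["mower_id"], []).append(msg["source"])
    return groups
```

With this tagging, an image upload from the phone app and a bump-sensor upload from the mower itself end up grouped under the same mower ID on the server.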
Compared with the first embodiment, the above smart mowing system 100, which integrates the smart mower 310 with the mobile terminal 130, reduces the hardware requirements of the smart mower 310: it saves the cost of the camera 132 and the inertial measurement unit 133, and by shifting the computation-heavy AR workload onto the application on the mobile terminal 130, it lowers the requirements on the processing chip of the smart mower 310, saving chip cost. In addition, people use their mobile terminals 130 frequently in daily life, and the application 135 on the mobile terminal 130, distributed through app-store platforms, is easier to upgrade, maintain, and extend. For example, version 1.0.0 of the application 135 might compute everything locally, while version 1.2.0 might rely mainly on local computation but upload images that require object-recognition computation to the cloud server 200, using big data to classify obstacle types more accurately. From another angle, of course, fixing the mobile terminal 130 to the smart mower 310 while the mower works brings the user some inconvenience, since many people are now used to keeping their phone at hand and part with it only while it charges. To ease the anxiety of being separated from the phone, and to prevent the mobile terminal 130 from having too little remaining charge to complete a full mowing task, the smart mower 310 can be configured so that when the mobile terminal 130 is connected, the mower's battery pack charges the terminal's battery. Meanwhile, to avoid problems such as sharply reduced working time or over-discharge of the battery pack caused by the smart mower 310 continuing to charge the mobile terminal 130 when its own charge is low, a charging threshold can be set, for example 70%: if the remaining charge of the mower's battery pack is at or above 70%, it charges the connected mobile terminal 130; if below 70%, it does not. Note that 70% here is only an example and does not limit the scope of protection of this application; any scheme that sets a threshold on the remaining charge of the smart mower 310 to decide whether it charges the connected mobile terminal 130 falls within the scope of this application.
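The threshold rule at the end of the paragraph reduces to a one-line check. The 70% default mirrors the example in the text and, as the text stresses, is configurable rather than fixed.

```python
def should_charge_terminal(mower_battery_pct: float,
                           threshold_pct: float = 70.0) -> bool:
    """Charge the docked mobile terminal only when the mower's own battery
    pack holds at least `threshold_pct` percent (70% in the text's example)."""
    return mower_battery_pct >= threshold_pct
```

At exactly the threshold the mower still charges the terminal ("at or above 70%"), while just below it charging is withheld to protect working time and the battery pack.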

Claims (49)

  1. A smart mower, comprising:
    a camera, configured to capture image data of the surroundings of the smart mower;
    an inertial measurement unit, configured to detect pose data of the smart mower;
    a memory, configured at least to store an application that controls the operation or travel of the smart mower;
    a processor, configured to invoke the application, fuse the image data captured by the camera with the pose data obtained by the inertial measurement unit, perform simultaneous localization and mapping for the smart mower, and generate navigation and mowing action instructions.
  2. The smart mower according to claim 1, further comprising: a main body, wherein the camera is mounted on the main body.
  3. The smart mower according to claim 2, wherein: the camera is mounted on a front side of the main body.
  4. The smart mower according to claim 1, wherein: the application can distinguish grass from non-grass according to feature points of a two-dimensional plane in the image data, compared against texture features of grass, and, taking the boundary line between grass and non-grass as discrete anchor points, automatically generate a mowing-area boundary through visual-inertial simultaneous localization and mapping.
  5. The smart mower according to claim 1, further comprising: a cutting blade, wherein the application can distinguish grass from non-grass according to feature points of a two-dimensional plane in the image data, compared against texture features of grass, and stop rotation of the cutting blade when the current working surface is not grass.
  6. The smart mower according to claim 1, wherein: the application can determine the type of the current working surface according to feature points of a two-dimensional plane in the image data, compared against texture features of common surface types preset in the application, and, when the current working surface comprises a plurality of surface types, control the smart mower to drive toward the harder surface among the plurality of surface types.
  7. The smart mower according to claim 1, wherein: the application further comprises an object-recognition program, and the application can select a corresponding obstacle-avoidance strategy according to the obstacle category recognized by the object-recognition program.
  8. The smart mower according to claim 1, further comprising: a global positioning system sensor, wherein the application uses the positioning result of the global positioning system sensor to filter and correct the result of the visual-inertial simultaneous localization and mapping.
  9. The smart mower according to claim 1, further comprising: an illumination lamp, wherein the application can obtain the light intensity of the current environment from the image data and turn on the illumination lamp when the light intensity is below a first light-intensity threshold.
  10. A smart mower, comprising:
    a main body;
    a camera, configured to capture image data of the surroundings of the smart mower;
    a support rod, configured to support the camera;
    an inertial measurement unit, configured to detect pose data of the smart mower;
    a memory, configured at least to store an application that controls the operation or travel of the smart mower;
    a processor, configured to invoke the application, fuse the image data captured by the camera with the pose data obtained by the inertial measurement unit, perform simultaneous localization and mapping for the smart mower, and generate navigation and mowing action instructions.
  11. The smart mower according to claim 10, wherein: the support rod is arranged on an upper surface of the main body.
  12. The smart mower according to claim 10, wherein: the support rod is telescopic and has a first state in which its length is a first length and a second state in which its length is a second length, the second length being greater than the first length.
  13. The smart mower according to claim 12, wherein: the smart mower further comprises a receiving cavity arranged in a middle portion of the main body for receiving the support rod and the camera; when the support rod is in the first state, the camera and the entire support rod are inside the receiving cavity, and when the support rod is in the second state, the camera and part of the support rod are outside the receiving cavity.
  14. The smart mower according to claim 13, wherein: the top of the receiving cavity has a cover plate for waterproofing and dustproofing, the cover plate having a closed state and an open state; when the support rod is at the first length, the cover plate is in the closed state, and when the support rod is at the second length, the cover plate is in the open state.
  15. The smart mower according to claim 14, wherein: the cover plate is hinged to an edge of the top of the receiving cavity.
  16. The smart mower according to claim 14, wherein: the cover plate slides relative to the receiving cavity.
  17. The smart mower according to claim 10, wherein:
    a groove for receiving the support rod is formed in the upper surface of the main body;
    the support rod is fixed to the upper surface of the main body by a damped-hinge device and has a first state in which it lies in the groove in the upper surface of the main body and a second state in which it is substantially perpendicular to the groove in the upper surface of the main body.
  18. A smart mowing system, comprising:
    a smart mower, comprising at least:
    a camera, configured to capture image data of the surroundings of the smart mower;
    an inertial measurement unit, configured to detect pose data of the smart mower;
    an interactive display interface;
    a memory, configured at least to store an application that controls the operation or travel of the smart mower;
    a processor, configured to invoke the application, fuse the image data captured by the camera with the pose data obtained by the inertial measurement unit, perform simultaneous localization and mapping for the smart mower, and generate navigation and mowing action instructions.
  19. The smart mowing system according to claim 18, wherein: the interactive display interface is located on the smart mower.
  20. The smart mowing system according to claim 18, further comprising: a mobile terminal, wherein the interactive display interface is located on the mobile terminal.
  21. The smart mowing system according to claim 18, wherein: the memory and the processor are located in the smart mower.
  22. The smart mowing system according to claim 18, further comprising: a mobile terminal, wherein the memory and the processor are located in the mobile terminal.
  23. The smart mowing system according to claim 18, wherein: a user can view, through the interactive display interface, real-time images captured by the camera and overlay a virtual fence on the real-time images, and the application adds the anchor points of the virtual fence to the anchor-point set of the mowing-area boundary.
  24. The smart mowing system according to claim 18, wherein: a user can view, through the interactive display interface, real-time images captured by the camera and overlay a virtual obstacle on the real-time images, and the application records the anchor points of the virtual obstacle and plans a path that avoids the virtual obstacle.
  25. A smart mowing system, comprising a smart mower and a camera arranged in a working scene, wherein:
    the camera comprises a wireless communication device for wireless connection with the smart mower;
    the smart mower comprises:
    a cutting blade, configured to cut grass;
    a main body, configured to support the cutting blade;
    at least one wheel, rotatable and supporting the main body;
    a wireless communication device, for wireless connection with the camera;
    a memory, configured at least to store an application that controls the operation or travel of the smart mower;
    a processor, configured to invoke the application to perform navigation and mowing control.
  26. The smart mowing system according to claim 25, wherein: the camera is arranged on a roof.
  27. The smart mowing system according to claim 25, further comprising: a charging station, wherein the camera is arranged on top of the charging station.
  28. The smart mowing system according to claim 25, wherein: the camera captures image data of the working scene and sends the image data to the smart mower through the wireless communication device, and the application performs target-tracking computation on the image data captured by the camera to obtain a current position estimate of the smart mower, and then generates navigation and mowing action instructions from the current position estimate.
  29. The smart mowing system according to claim 25, wherein: a plurality of the cameras are arranged in the working scene.
  30. The smart mowing system according to claim 29, wherein: the plurality of cameras capture image data of the working scene from different viewpoints, first obtain a current position estimate of the smart mower through distributed target-tracking computation, and then send the position estimate to the smart mower.
  31. The smart mowing system according to claim 29, wherein: the smart mowing system further comprises a cloud server; each camera uploads the image data of the working scene that it captures to the cloud server through its wireless communication device; the cloud server performs target-tracking computation with a multi-view target-tracking algorithm to obtain a current position estimate of the smart mower; and the smart mower obtains the current position estimate from the cloud server through its wireless communication device.
  32. A smart walking tool system, comprising:
    a smart walking device;
    a camera, configured to capture image data of the surroundings of the smart walking device;
    an inertial measurement unit, configured to detect pose data of the smart walking device;
    a memory, configured at least to store an application that controls the operation or travel of the smart walking device;
    a processor, configured to fuse the image data captured by the camera with the pose data obtained by the inertial measurement unit to perform simultaneous localization and mapping for the smart walking device, and to generate navigation and work instructions.
  33. The smart walking tool system according to claim 32, further comprising:
    a mobile terminal, wherein the memory is located in the mobile terminal.
  34. The smart walking tool system according to claim 32, further comprising:
    a mobile terminal, wherein the processor is located in the mobile terminal.
  35. The smart walking tool system according to claim 32, further comprising:
    a mobile terminal, wherein the camera is located on the mobile terminal.
  36. The smart walking tool system according to claim 32, further comprising:
    a mobile terminal, wherein the inertial measurement unit is located in the mobile terminal.
  37. The smart walking tool system according to claim 32, wherein: the smart walking device further comprises a main body, and the camera is arranged on the main body of the smart walking device.
  38. The smart walking tool system according to claim 32, wherein: the smart walking device further comprises a main body, and the inertial measurement unit is arranged inside the main body of the smart walking device.
  39. The smart walking tool system according to claim 32, wherein: the smart walking device further comprises a main body, and the processor is arranged inside the main body of the smart walking device.
  40. The smart walking tool system according to claim 32, wherein: the smart walking device further comprises a main body, and the controller is arranged inside the main body of the smart walking device.
  41. The smart walking tool system according to claim 37, wherein: the camera is movable up and down relative to the main body.
  42. The smart walking tool system according to claim 37, wherein: the smart walking device further comprises: a support rod, configured to support the camera.
  43. The smart walking tool system according to claim 42, wherein: the support rod is telescopic and has a first state in which its length is a first length and a second state in which its length is a second length, the second length being greater than the first length.
  44. The smart walking tool system according to claim 43, wherein: the smart walking device further comprises: a receiving cavity, arranged in the main body, for receiving the support rod and the camera.
  45. The smart walking tool system according to claim 32, further comprising:
    an interactive display interface, configured for a user to view real-time images captured by the camera and overlay a virtual fence on the real-time images, wherein the application adds the anchor points of the virtual fence to the anchor-point set of the working-area boundary.
  46. The smart walking tool system according to claim 32, further comprising:
    an interactive display interface, configured for a user to view real-time images captured by the camera and overlay a virtual obstacle on the real-time images, wherein the application records the anchor points of the virtual obstacle and plans a path that avoids the virtual obstacle.
  47. The smart walking tool system according to claim 32, wherein: the application can determine the type of the current working surface according to feature points of a two-dimensional plane in the image data, compared against texture features of common surface types preset in the application, and, when the current working surface comprises a plurality of surface types, control the smart mower to drive toward the harder surface among the plurality of surface types.
  48. The smart walking tool system according to claim 32, wherein: the application further comprises an object-recognition program, and the application can select a corresponding obstacle-avoidance strategy according to the obstacle category recognized by the object-recognition program.
  49. The smart walking tool system according to claim 32, further comprising: a global positioning system sensor, wherein the application uses the positioning result of the global positioning system sensor to filter and correct the result of the visual-inertial simultaneous localization and mapping.
PCT/CN2020/135252 2020-12-10 2020-12-10 Smart mower and smart mowing system WO2022120713A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CA3200096A CA3200096A1 (en) 2020-12-10 2020-12-10 Intelligent mower and smart mowing system
EP20964651.2A EP4224268A4 (en) 2020-12-10 2020-12-10 SMART MOWER AND SMART MOWING SYSTEM
CN202080054020.0A CN114945882A (zh) Smart mower and smart mowing system
PCT/CN2020/135252 WO2022120713A1 (zh) Smart mower and smart mowing system
US18/301,774 US20230259138A1 (en) 2020-12-10 2023-04-17 Smart mower and smart mowing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/135252 WO2022120713A1 (zh) 2020-12-10 2020-12-10 Smart mower and smart mowing system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/301,774 Continuation US20230259138A1 (en) 2020-12-10 2023-04-17 Smart mower and smart mowing system

Publications (1)

Publication Number Publication Date
WO2022120713A1 true WO2022120713A1 (zh) 2022-06-16

Family

ID=81972993

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135252 WO2022120713A1 (zh) 2020-12-10 2020-12-10 Smart mower and smart mowing system

Country Status (5)

Country Link
US (1) US20230259138A1 (zh)
EP (1) EP4224268A4 (zh)
CN (1) CN114945882A (zh)
CA (1) CA3200096A1 (zh)
WO (1) WO2022120713A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11678604B1 (en) 2022-12-21 2023-06-20 Sensori Robotics, LLC Smart lawnmower with development of mowing policy and system and method for use of same
WO2024093238A1 (zh) * 2022-11-02 2024-05-10 无锡君创飞卫星科技有限公司 Control method and apparatus for a lawn mower having a lidar
US12001182B1 (en) * 2022-12-21 2024-06-04 Sensori Robotics, LLC Smart lawnmower with realization of mowing policy and system and method for use of same

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN113900517B (zh) * 2021-09-30 2022-12-20 北京百度网讯科技有限公司 Route navigation method and apparatus, electronic device, and computer-readable medium
CN115413472A (zh) * 2022-09-22 2022-12-02 珠海格力电器股份有限公司 Weeder control method and apparatus, storage medium, and weeder

Citations (6)

Publication number Priority date Publication date Assignee Title
CN202958189U (zh) * 2012-11-27 2013-06-05 上海大学 Intelligent mowing robot
CN205196323U (zh) * 2015-12-21 2016-05-04 广东工业大学 Internet of Things-based solar intelligent mowing robot
CN106647765A (zh) * 2017-01-13 2017-05-10 深圳拓邦股份有限公司 Planning platform based on a mowing robot
CN107463168A (zh) * 2016-06-06 2017-12-12 苏州宝时得电动工具有限公司 Positioning method and system, map construction method and system, and automatic walking device
CN110612492A (zh) * 2018-06-20 2019-12-24 灵动科技(北京)有限公司 Self-driven unmanned lawn mower
US20200068799A1 (en) * 2017-05-23 2020-03-05 The Toadi Order BV An energetically autonomous, sustainable and intelligent robot

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
EP3971672A1 (en) * 2014-12-17 2022-03-23 Husqvarna AB Multi-sensor, autonomous robotic vehicle with mapping capability
EP3300842B1 (en) * 2015-06-05 2021-03-03 Ariel Scientific Innovations Ltd. System and method for coordinating terrestrial mobile automated devices
WO2018146518A1 (en) * 2017-02-10 2018-08-16 Airmow Holdings Pty Ltd Method and apparatus for estimating area


Non-Patent Citations (1)

Title
See also references of EP4224268A4 *


Also Published As

Publication number Publication date
EP4224268A1 (en) 2023-08-09
CA3200096A1 (en) 2022-06-16
US20230259138A1 (en) 2023-08-17
EP4224268A4 (en) 2024-03-20
CN114945882A (zh) 2022-08-26

Similar Documents

Publication Publication Date Title
WO2022120713A1 (zh) Smart mower and smart mowing system
AU2019208265B2 (en) Moving robot, method for controlling the same, and terminal
EP3603370B1 (en) Moving robot, method for controlling moving robot, and moving robot system
CN114616972A (zh) Smart mower and smart mowing system
CN109730590B (zh) Cleaning robot and method for the cleaning robot to automatically return for charging
CN108247647B (zh) Cleaning robot
US20210064043A1 (en) Sensor fusion for localization and path planning
JP5946147B2 (ja) Mobile human interface robot
US11960278B2 (en) Moving robot and controlling method thereof
KR102292262B1 (ko) Mobile robot and control method thereof
US20200068799A1 (en) An energetically autonomous, sustainable and intelligent robot
KR102238352B1 (ko) Station apparatus and mobile robot system
CN108536145A (zh) Robot system for intelligent following using machine vision, and operating method thereof
CN111328017B (zh) Map transmission method and apparatus
CN211022482U (zh) Cleaning robot
CN105204505A (zh) System and method for video capture and mapping based on sweeping-robot localization
CN109571404A (zh) Obstacle-crossing mechanism, obstacle-crossing intelligent inspection robot, and substation obstacle-crossing method thereof
CN112819943A (zh) Active-vision SLAM system based on a panoramic camera
WO2023125363A1 (zh) Automatic generation method and real-time detection method and apparatus for an electronic fence
CN114600621A (zh) Smart mower and smart mowing system
Einecke et al. Boundary wire mapping on autonomous lawn mowers
CN110271015A (zh) Active visual SLAM mobile platform with adjustable camera attitude
CN116352722A (zh) Multi-sensor-fusion mine inspection and rescue robot and control method thereof
CN102645208B (zh) Vision measurement positioning and correction system based on a dynamic routing mechanism
EP4006681A1 (en) Autonomous work machine, control device, method for controlling autonomous work machine, method for operating control device, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964651

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3200096

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2020964651

Country of ref document: EP

Effective date: 20230506

NENP Non-entry into the national phase

Ref country code: DE