WO2021073587A1 - Self-propelled mowing system, self-propelled lawn mower and outdoor self-propelled device - Google Patents


Info

Publication number: WO2021073587A1
Authority: WIPO (PCT)
Prior art keywords: image, real, module, mowing, virtual
Application number: PCT/CN2020/121378
Other languages: English (en), French (fr)
Inventors: 陈伟鹏, 杨德中
Original Assignee: 南京德朔实业有限公司
Application filed by 南京德朔实业有限公司
Priority to EP20876278.1A (published as EP4018802A4)
Publication of WO2021073587A1
Priority to US17/709,004 (published as US20220217902A1)

Classifications

• G05D1/0248 — Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means, in combination with a laser
• A01D34/008 — Mowers; control or measuring arrangements for automated or remotely controlled operation
• A01D69/02 — Driving mechanisms or parts thereof for harvesters or mowers, electric
• G05D1/0038 — Control associated with a remote control arrangement, providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
• G05D1/0044 — Control associated with a remote control arrangement, providing the operator with a computer-generated representation of the environment of the vehicle, e.g. virtual reality, maps
• G05D1/0214 — Control with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
• G05D1/0246 — Control using optical position detecting means, using a video camera in combination with image processing means
• G05D1/027 — Control using internal positioning means comprising inertial navigation means, e.g. azimuth detector
• G05D1/0278 — Control using signals provided by a source external to the vehicle, using satellite positioning signals, e.g. GPS
• G06T11/00 — 2D [Two Dimensional] image generation
• G06V20/20 — Scenes; scene-specific elements in augmented reality scenes
• G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
• A01D2101/00 — Lawn-mowers
• G05D1/0255 — Control using acoustic signals, e.g. ultrasonic signals
• G05D1/0257 — Control using a radar

Definitions

  • This application relates to an outdoor electric tool, such as a self-propelled mowing system, a self-propelled lawn mower, and an outdoor self-propelled device.
  • A self-propelled mowing system does not require prolonged operation by the user; it is intelligent and convenient, and is therefore favored by users.
  • During the mowing work of a traditional self-propelled mowing system, there are often obstacles in the mowing area, such as trees and stones. Obstacles not only affect the walking trajectory of the system; repeated collisions with them can also damage the system.
  • The mowing area may also contain regions the user does not want mowed, such as planted flowers and grasses, but a traditional self-propelled mowing system cannot detect such regions and will mistakenly cut them, failing to meet the user's mowing needs.
  • Other common outdoor self-propelled equipment, such as snowplows, has the same problems.
  • This application provides a self-propelled mowing system that can display a real-time image or a simulated real-scene image of the actuator, so that the user can add obstacle marks on the image, control the system to avoid the marked areas, and intuitively obtain the working status of the system.
  • An embodiment of the present application proposes a self-propelled mowing system, including: an actuator, including a mowing component for realizing the mowing function and a walking component for realizing the walking function; a housing for supporting the actuator; an image acquisition module, which can acquire real-time images including at least part of the mowing area and at least part of the mowing boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from the real-time image; a boundary generation module, which generates a first virtual boundary corresponding to the mowing boundary in the real-time image by calculating characteristic parameters, so as to form a first fusion image; a receiving module, used to receive user input on whether the first virtual boundary in the first fusion image needs to be corrected; and a correction module which, when the user inputs information that the first virtual boundary needs to be corrected, receives a user instruction to modify the first virtual boundary and generates a second virtual boundary in the real-time image or the simulated real-scene image.
  • the receiving module is arranged outside the actuator, and includes any one or more of a keyboard, a mouse, a microphone, a touch screen, a remote control and/or a handle, a camera, a lidar, and a mobile device such as a mobile phone.
  • the receiving module is further configured to receive a first virtual obstacle added by the user, and the actuator is controlled to avoid the actual obstacle corresponding to the first virtual obstacle when walking.
  • the receiving module is further configured to receive the first walking path added by the user, and the actuator is controlled to walk within the second virtual boundary according to the first walking path.
  • An embodiment provides a self-propelled lawn mower, including: a main body including a casing; a mowing element connected to the main body and used for cutting vegetation; an output motor that drives the mowing element; a traveling wheel connected to the main body; a driving motor that drives the traveling wheel to rotate; an image acquisition module, which can collect real-time images including at least part of the mowing area and at least one obstacle located in the mowing area, and is configured to send the real-time image to a display module to display the real-time image or a simulated real-scene image generated from it; and a control module, which can receive instructions input by the user to generate a virtual obstacle identifier corresponding to the obstacle in the real-time image or simulated real-scene image, so as to form a first fusion image, the control module controlling the actuator to avoid the obstacle corresponding to the virtual obstacle identifier in the first fusion image.
  • An embodiment provides a self-propelled mowing system, including: an actuator, including a mowing component for realizing the mowing function and a walking component for realizing the walking function; a housing for supporting the actuator; an image acquisition module, which can collect real-time images including at least part of the mowing area and at least part of the mowing boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates a first virtual boundary corresponding to the mowing boundary in the real-time image by calculating feature parameters, so as to form a first fusion image; a sending module, which sends the first fusion image; and a control module, electrically or communicatively connected to the sending module, which controls the actuator to run within the first virtual boundary.
  • the self-propelled mowing system further includes a positioning module.
  • the positioning module includes one or a combination of a GPS positioning unit, an IMU (inertial measurement unit), and a displacement sensor, and is used to obtain the real-time position of the actuator; the positioning data are analyzed to control and adjust the movement and mowing of the actuator.
  • the display module includes a projection device and an interactive interface.
  • the interactive interface is projected by the projection device, and displays the real-time image or a simulated real-scene image.
  • the self-propelled mowing system further includes a guide channel setting module, used to receive a virtual guide channel set by the user between a first virtual sub-mowing area and a second virtual sub-mowing area, in order to guide the walking path of the actuator between the first and second sub-mowing areas corresponding to those virtual sub-areas.
  • An embodiment provides an outdoor self-propelled device, including: an actuator, including a walking component for realizing a walking function and a working component for realizing a preset function; a housing for supporting the actuator; an image acquisition module, capable of acquiring real-time images including at least part of the working area and at least part of the working boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates a first virtual boundary corresponding to the working boundary in the real-time image by calculating characteristic parameters, so as to form a first fusion image; a receiving module, used to receive user input on whether the first virtual boundary in the first fusion image needs to be corrected; and a correction module which, when the user inputs information that the first virtual boundary needs to be corrected, receives a user instruction to correct the first virtual boundary and generates a second virtual boundary in the real-time image or simulated real-scene image, so as to form a second fusion image.
  • An embodiment provides an outdoor self-propelled device, including: an actuator, including a walking component for realizing a walking function and a working component for realizing a preset function; a housing for supporting the actuator; an image acquisition module, capable of acquiring real-time images including at least part of the working area and at least part of the working boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates a first virtual boundary corresponding to the working boundary in the real-time image by calculating characteristic parameters, so as to form a first fusion image; a sending module, which sends the first fusion image; and a control module, electrically or communicatively connected with the sending module, which controls the actuator to run within the first virtual boundary.
  • Fig. 1 is a structural diagram of the actuator of the self-propelled mowing system of the present application.
  • Fig. 2 is a schematic diagram of the connection between the actuator and the projection device in Fig. 1.
  • Fig. 3 is a schematic diagram of a part of the internal structure of the actuator in Fig. 2.
  • Fig. 4 is a schematic diagram of the frame of the actuator in Fig. 1.
  • Fig. 5 is a schematic diagram of the framework of the self-propelled mowing system in Fig. 1.
  • Fig. 6 is a schematic diagram of the mowing area in the first embodiment of the present application.
  • FIG. 7 is a schematic diagram of the interactive interface of the first embodiment of the present application.
  • FIG. 8 is a schematic diagram of a real-time image displayed on the interactive interface of the first embodiment of the present application.
  • FIG. 9 is a schematic diagram of the first fusion image displayed on the interactive interface of the first embodiment of the present application.
  • FIG. 10 is a schematic diagram of a second fusion image in the interactive interface of the first embodiment of the present application.
  • Fig. 11 is a schematic diagram of the coordinate system of the actuator of the first embodiment of the present application.
  • FIG. 12 is a schematic diagram of the pixel coordinate system of the first embodiment of the present application.
  • FIG. 15 is a schematic diagram of the first fusion image in the second embodiment of the present application.
  • FIG. 16 is a schematic diagram of the framework of the self-propelled mowing system according to the third embodiment of the present application.
  • Fig. 17 is a schematic diagram of the mowing area in the third embodiment of the present application.
  • FIG. 18 is a schematic diagram of the first fusion image in the third embodiment of the present application.
  • FIG. 19 is a schematic diagram of the first fusion image in the third embodiment of the present application.
  • FIG. 20 is a schematic diagram of a second fused image in the third embodiment of the present application.
  • Fig. 21 is a schematic diagram of the frame of the self-propelled mowing system according to the fourth embodiment of the present application.
  • Fig. 22 is a schematic diagram of a mowing area according to a fourth embodiment of the present application.
  • FIG. 23 is a schematic diagram of the first fusion image in the fourth embodiment of the present application.
  • FIG. 24 is a schematic diagram of the first fusion image in the fourth embodiment of the present application.
  • FIG. 25 is a schematic diagram of a second fusion image in the fourth embodiment of the present application.
  • FIG. 26 is a schematic diagram of setting a virtual guide channel identifier according to the fourth embodiment of the present application.
  • Fig. 27 is a schematic structural diagram of an outdoor self-propelled device according to a fifth embodiment of the present application.
  • the self-propelled mowing system includes an actuator 100 for trimming vegetation. The actuator 100 includes at least a mowing assembly 120 for realizing the mowing function and a walking assembly 110 for realizing the walking function. The actuator 100 further includes a main body 140 and a casing 130, the casing 130 enclosing and supporting the main body 140, the mowing assembly 120, and the walking assembly 110.
  • the mowing assembly 120 includes a mowing element 121 and an output motor 122.
  • the output motor 122 drives the mowing element 121 to rotate to trim vegetation.
  • the mowing element 121 may be a blade or other elements that can cut and trim lawns.
  • the walking assembly 110 includes at least one walking wheel 111 and a driving motor 112 for driving the walking wheel 111; the driving motor 112 provides torque to the at least one walking wheel 111.
  • the self-propelled mowing system can control the actuator 100 to move and operate on the vegetation.
  • the actuator 100 is hardware for the self-propelled mowing system to realize the mowing function.
  • the actuator 100 is a self-propelled lawn mower.
  • the self-propelled mowing system further includes a receiving module 200, a processing component 180, and a power source 170.
  • the receiving module 200 is used at least to receive user instructions.
  • the processing component 180 at least includes a control module 150 for controlling the operation of the self-propelled mowing system.
  • the control module 150 is used to control the operation of the drive motor 112 and the output motor 122 according to user instructions and the operating parameters of the self-propelled mowing system, so as to control the actuator 100 to walk in the corresponding work area and perform mowing operations.
  • the power supply 170 is used to supply power to the walking component and the output component.
  • the power supply 170 is a pluggable battery pack and is installed in the casing 130.
  • the self-propelled mowing system includes an image acquisition module 400 and a display module 500.
  • the processing component 180 includes a control module 150 for calculating image information.
  • the display module 500 and the image acquisition module 400 are electrically or communicatively connected.
  • the image acquisition module 400 can acquire a real-time image 530 including at least part of the mowing area and at least part of the mowing boundary, and the corresponding real-time image 530 of the mowing area and mowing boundary is displayed through the display module 500.
  • the image acquisition module 400 includes at least one or a combination of a camera 410, a lidar 420, and a TOF sensor 430.
  • the surrounding environment information of the actuator 100 is acquired through the camera 410 and the lidar 420: the camera 410 acquires an environment image of the mowing area and mowing boundary to be worked, while the laser reflections of the lidar 420 provide the positions of objects within the mowing area and mowing boundary, together with their distance and slant range to the current actuator 100.
  • the control module 150 receives the image information of the mowing area and the mowing boundary collected by the image acquisition module 400, and merges the characteristic parameters of the object in the image onto the image.
  • the display module 500 displays the real-time image 530 of the mowing area and the mowing boundary collected by the image acquisition module 400 to the user.
  • the self-propelled mowing system also includes a positioning module 300 for acquiring the position of the actuator 100.
  • the positioning module 300 includes one or a combination of a GPS positioning unit 310, an IMU inertial measurement unit 320, and a displacement sensor 330 for acquiring the position of the actuator 100.
  • the GPS positioning unit 310 is used to obtain the position information or a position estimate of the actuator 100, as well as the starting position from which the actuator 100 moves.
  • the IMU inertial measurement unit 320 includes an accelerometer and a gyroscope, and is used to detect the offset information of the actuator 100 during the traveling process.
  • the displacement sensor 330 may be provided on the driving motor 112 or the walking wheel 111 to obtain displacement data of the actuator 100. The information obtained by the above devices is combined and corrected to obtain more accurate position information and the real-time position and attitude of the actuator 100.
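  • As a minimal illustrative sketch of how such readings could be combined (not the patent's algorithm; the names Pose, dead_reckon, blend_gps, and the ALPHA weight are hypothetical), one simple approach is dead reckoning from the displacement sensor and gyroscope, with occasional GPS fixes blended in to bound drift:

```python
import math

# Illustrative only: a minimal dead-reckoning + GPS blend for a mower pose.
ALPHA = 0.1  # weight given to a GPS fix when one arrives (assumed tuning value)

class Pose:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

def dead_reckon(pose, ds, dtheta):
    """Advance the pose by wheel displacement ds and gyro heading change dtheta."""
    pose.heading += dtheta
    pose.x += ds * math.cos(pose.heading)
    pose.y += ds * math.sin(pose.heading)
    return pose

def blend_gps(pose, gps_x, gps_y):
    """Pull the dead-reckoned position toward a GPS fix to bound drift."""
    pose.x = (1 - ALPHA) * pose.x + ALPHA * gps_x
    pose.y = (1 - ALPHA) * pose.y + ALPHA * gps_y
    return pose
```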
  • the control module 150 generates a simulated real-scene image 540 of the mowing area according to the image and data information collected by the image acquisition module 400; the simulated real-scene image 540 reproduces the boundary, area, obstacles, and so on of the mowing area, and an actuator model 160 is established. The actuator model 160 is displayed at the corresponding position in the simulated real-scene image 540, so that its position and operating status are synchronized with the actual actuator 100.
  • the display module 500 is used for projecting a simulated real scene image 540.
  • the display module 500 generates an interactive interface 520 through projection of the projection device 510, and the interactive interface 520 displays a simulated real scene image 540 of the actuator 100.
  • while generating the simulated real-scene image 540, the control module 150 causes the interactive interface 520 generated by the display module 500 to present a control panel 550 for the user to operate; the user controls the self-propelled mowing system directly through the receiving module 200 or through the interactive interface 520.
  • the projection device 510 may be a mobile phone screen or a hardware display screen, and is communicatively connected to the processing component 180 and used to display a simulated real-time image 540 or a real-time image 530.
  • the control module 150 includes a data operation processor 310 for processing data and an image processor 320 for image production and scene modeling.
  • the data operation processor 310 may be a CPU or a microcontroller with a higher data processing speed.
  • the image processor 320 may be an independent GPU (Graphics Processing Unit) module.
  • the data operation processor 310 analyzes the operating data and environmental data of the actuator 100; the image processor 320 models these data to generate the corresponding virtual real-scene information, and the projection device 510 renders the specific virtual real-scene image. As the real-time operating status of the actuator 100 changes, the virtual image is updated synchronously to match the status of the actual actuator 100.
  • the control module 150 also includes a memory for storing data, which stores related algorithms of the self-propelled mowing system and data information generated during the operation of the self-propelled mowing system.
  • the first virtual boundary 710 is generated at the corresponding mowing boundary position in the real-time image 530 or the simulated real-scene image 540, and is fused with the real-time image 530 or simulated real-scene image 540 to generate a first fusion image 720. The first fusion image 720 includes the first virtual boundary 710 and a first virtual mowing area 760 defined by it.
  • the first virtual boundary 710 corresponds to the actual first boundary, namely the mowing boundary in the current environment detected by the boundary generation module 700.
  • the first virtual mowing area 760 corresponds to the object distribution and positions of the actual first mowing area 770.
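  • A minimal sketch of this kind of fusion step, assuming OpenCV is available and the detected boundary is already expressed as a pixel polygon (the function name make_fusion_image is hypothetical, not from the patent):

```python
import numpy as np
import cv2  # OpenCV, assumed available for illustration

def make_fusion_image(frame, boundary_px):
    """Overlay a detected boundary polygon (pixel coords) on a camera frame.

    frame       : HxWx3 BGR image (the real-time image)
    boundary_px : list of (u, v) pixel points of the first virtual boundary
    Returns a copy of the frame with the virtual boundary drawn on it.
    """
    fused = frame.copy()
    pts = np.array(boundary_px, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(fused, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
    return fused
```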
  • the sending module 600 is electrically or communicatively connected to the control module 150.
  • the sending module 600 sends the information of the first fused image 720 to the control module 150.
  • the information of the first fusion image 720 includes the position information of the first virtual boundary 710, and the control module controls the actuator to run within the first virtual boundary. That is, the first virtual boundary 710 defines the first virtual mowing area 760; according to the position information of the first virtual boundary 710, the control module 150 controls the actuator 100 to mow the actual first mowing area 770 corresponding to the first virtual mowing area 760, and, based on the detected position of the actuator 100, restricts the actuator 100 to operate only within the actual first boundary corresponding to the first virtual boundary 710.
  • the control module 150 is connected to and controls the drive motor 112 and the output motor 122, so that the control module 150 can make the actuator 100 follow the planned work path and perform the mowing operation.
  • Two walking wheels 111 are provided, namely a first walking wheel 113 and a second walking wheel 114; the drive motor 112 is provided as a first drive motor 115 and a second drive motor 116. The control module 150 is connected to and controls the first drive motor 115 and the second drive motor 116, and controls their rotation speeds through the drive controller so as to control the running state of the actuator 100.
  • the processing component 180 obtains the real-time position of the actuator 100 and computes control instructions for the actuator 100, so as to make it operate within the first boundary.
  • the control module 150 includes an output controller for controlling the output motor and a drive controller for controlling the drive motor 112.
  • the output controller is electrically connected to the output motor 122 and controls its operation, thereby controlling the cutting state of the cutting blade.
  • the drive controller is communicatively connected with the drive motor 112, so that after the receiving module 200 receives the user's start instruction, or a start is otherwise determined, the control module 150 analyzes the driving route of the actuator 100 and the drive controller controls the driving motor 112 to drive the walking wheel 111.
  • the control module 150 obtains the position information corresponding to the first virtual boundary 710 and, according to the position of the actuator 100 detected by the positioning module 300, computes the steering and speed required for the actuator 100 to complete the operation within the preset first boundary; the drive controller then regulates the rotation speed of the drive motor 112 so that the actuator 100 runs at a preset speed, and can also rotate the two wheels differentially to turn the actuator 100.
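  • The two-wheel differential turn described above follows standard differential-drive kinematics; a minimal sketch (the function wheel_speeds and its parameters are illustrative assumptions, not the patent's controller):

```python
def wheel_speeds(v, omega, track_width):
    """Differential-drive kinematics: commanded body velocity -> wheel speeds.

    v           : forward speed (m/s)
    omega       : turn rate (rad/s); a nonzero omega makes the two wheel
                  speeds differ, which is how the differential turn happens
    track_width : distance between the first and second walking wheels (m)
    Returns (left, right) wheel linear speeds in m/s.
    """
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    return left, right

# wheel_speeds(0.5, 0.0, 0.4) -> (0.5, 0.5): straight at a preset speed
# wheel_speeds(0.3, 1.0, 0.4) -> (0.1, 0.5): differential rotation, turning left
```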
  • the user can, through the receiving module 200, command displacements of the actuator 100 and of the image acquisition module 400 to move the corresponding real-time image 530 or simulated real-scene image 540, so that the user can view the mowing area in the image and add control instructions.
  • the receiving module 200 can be set in a peripheral device outside the actuator 100, and the peripheral device is communicatively connected to the actuator 100.
  • the peripheral device receives the user's control instruction and sends it to the processing component 180, which analyzes the instruction and controls the actuator 100 to execute it.
  • the peripheral device can be any one or more of a keyboard, a mouse, a microphone, a touch screen, a remote control and/or a handle, a camera 410, a lidar 420, and a mobile device such as a mobile phone. Users can manually input command information through hardware such as a mouse, keyboard, remote control, or mobile phone, or input commands through signals such as voice, gestures, and eye movements.
  • the camera 410 can be arranged to capture the characteristics of the user's eye or hand movements, from which the control instructions given by the user are derived.
  • the projection device 510 uses virtual imaging technology, displaying the image by means of interference and diffraction (holographic projection) or through an AR device or VR glasses, and correspondingly generates a virtual control panel 550; command input is implemented through a communicatively connected peripheral device 310 such as a remote control or a handle.
  • the interactive module 400 includes a motion capture unit and an interactive positioning device.
  • the motion capture unit is a camera 410 and/or an infrared sensing device for capturing the movement of the user's hand or controller; the interactive positioning device locates the projection device 510 and, by analyzing the displacement of the user's hand and its position relative to the projection device 510, determines the user's selection on the generated virtual control panel 550 and produces the corresponding control instruction.
  • the projection device 510 is mounted on a peripheral device.
  • the peripheral device 310 may be a mobile phone, a computer, or a VR device, with the projection device 510 correspondingly being a mobile phone screen, a computer screen, a projection screen, or VR glasses.
  • the display module 500 has at least a projection device 510 and an interactive interface 520.
  • the interactive interface 520 is displayed through the projection device 510.
  • the interactive interface 520 displays a real-time image 530 or a simulated real scene image 540 and a first fused image 720.
  • the projection device 510 can be implemented as a hardware display screen.
  • the hardware display screen can be an electronic device installed on a peripheral device, such as a mobile phone or a computer, or can be installed directly on the actuator 100; any of a variety of display screens able to communicate with the processing component 180 can be paired, and the user selects the projection target on which the corresponding real-time image 530 or simulated real-scene image 540 is displayed.
  • the receiving module 200 may also generate a control panel 550 on the interactive interface 520 to receive the user's control instructions through the control panel 550. It is used to receive user input on whether the first virtual boundary 710 in the first fusion image 720 needs to be corrected; when the user chooses to correct the first fusion image 720, the user manually inputs an instruction to correct the first virtual boundary 710, thereby generating a user-specified second virtual boundary 730. After the first fusion image 720 is calculated and generated, the display module 500 produces an interactive interface 520 through the projection device 510 to display the first fusion image 720 and the first virtual boundary 710.
  • the receiving module 200 asks the user through the interactive interface 520 whether the first virtual boundary 710 needs to be corrected; if the user selects correction through the receiving module 200, the first virtual boundary 710 is corrected in the first fusion image 720 displayed on the control panel 550 according to the actual mowing boundary requirements.
  • the processing component 180 also includes a correction module 801.
  • when the user inputs information that the first virtual boundary 710 needs to be corrected, the correction module 801 receives the user instruction to correct the first virtual boundary 710 and generates a second virtual boundary 730 in the real-time image 530 or the simulated real-scene image 540, thus forming a second fusion image 740.
  • the second fusion image 740 includes the second virtual boundary 730 and a second virtual mowing area defined by the second virtual boundary 730.
  • the second virtual boundary 730 corresponds to the actual second boundary, namely the boundary of the actual mowing area as corrected by the user.
  • the second virtual mowing area corresponds to the distribution and location of objects in the actual second mowing area.
  • the control module controls the actuator to operate within the second virtual boundary: the second virtual boundary defines the second virtual mowing area, and according to the position information of the second virtual boundary 730 the control module 150 controls the actuator 100 to mow the actual second mowing area corresponding to the second virtual mowing area, restricting the actuator 100, based on its detected position, to operate only within the actual second boundary corresponding to the second virtual boundary 730.
  • based on the first fusion image 720 acquired via the positioning module 300 and the image acquisition module 400, and on the positioning of the actuator 100, the data operation processor establishes an actuator coordinate system 750, used to determine the position of the actuator 100 in the environment to be mowed.
  • the data operation processor also establishes a pixel coordinate system 760 for the generated first fusion image 720, so that each pixel in the first fusion image 720 corresponds to its pixel coordinates, and the real-time image 530 or simulated real-scene image 540 is generated by analysis.
  • when the user selects a line segment or region in the first fusion image 720 through the interactive interface 520, what is essentially selected is a set of pixels on the first fusion image 720.
  • the correction module 801 calculates the actual second boundary by analyzing the real-time position of the actuator 100 in the actuator coordinate system 750, the rotation angle of the image acquisition module 400, and the set of pixel coordinates corresponding to the second virtual boundary 730 selected by the user. The second virtual boundary 730 that the user selects and corrects on the first fusion image 720 is thus projected into the actual mowing area to obtain the user-specified second mowing area, and the second virtual boundary 730 is fused into the real-time image 530 or simulated real-scene image 540 to generate the second fusion image 740.
  • the coordinates of the second virtual boundary 730 are fixed in the actuator coordinate system 750, and its position moves in the pixel coordinate system 760 as the user pans the real-time image 530 or simulated real-scene image 540.
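  • Assuming a flat lawn and a pre-calibrated camera, one common way to realize such a pixel-to-ground conversion is a planar homography; the sketch below is illustrative only (pixels_to_ground and the matrix H are assumptions, not the patent's coordinate conversion method):

```python
import numpy as np

def pixels_to_ground(pixel_pts, H):
    """Project user-selected pixels onto the ground plane of the mower frame.

    pixel_pts : (N, 2) array of (u, v) pixel coordinates, e.g. the corrected
                second virtual boundary selected on the fusion image
    H         : 3x3 homography mapping image pixels to ground-plane (x, y),
                assumed obtained from camera calibration and mounting pose
    Returns an (N, 2) array of ground coordinates in the actuator frame.
    """
    pts = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])  # homogeneous
    ground = (H @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]  # divide out the projective scale
```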
  • the user's correction can fix errors made by the self-propelled mowing system when automatically recognizing the mowing boundary, so that the boundary of the mowing area is set intuitively and accurately; since the first virtual boundary 710 is generated by recognition from the image sensor and other devices, the user only needs to correct it to obtain the second virtual boundary 730, which makes setting the mowing boundary convenient.
  • alternatively, the user can directly set the first virtual boundary 710 on the real-time image 530 or simulated real-scene image 540 through the receiving module 200; the boundary recognition module obtains the position information of the user-set first virtual boundary 710 and projects it into the coordinates of the actuator 100, and the positioning module 300 detects the position of the actuator 100, so that the control module 150 controls the actuator 100 to move along the first boundary corresponding to the first virtual boundary 710, allowing the user to set the mowing boundary quickly.
  • the processing component 180 includes an image acquisition module 400a and an obstacle generation module 800a.
  • the image acquisition module 400a includes one or a combination of an image sensor, a lidar 420a, an ultrasonic sensor, a camera 410a, and a TOF sensor 430a.
  • the ultrasonic sensor detects whether there are obstacles in the mowing area from the return time of the ultrasonic waves and records their positions; the lidar 420a emits laser light and measures its reflection time to detect obstacles in the mowing area; and the image sensor analyzes the shapes and colors in the acquired image, matching obstacles through an algorithm.
  • the obstacle generation module 800a fuses the obstacle information detected in the mowing area by the image acquisition module 400a into the real-time image 530a or simulated real-scene image 540a, and generates, through the display module 500a, a first virtual obstacle identifier 810a at the corresponding position in the mowing area, thereby producing a first fusion image 720a, which is the real-time image 530a or simulated real-scene image 540a including the first virtual obstacle identifier 810a.
  • the sending module 600a sends the information of the first fused image 720a to the control module 150a.
  • the control module 150a controls the actuator 100a according to the information of the first fusion image 720a to avoid the obstacles corresponding to the virtual obstacle identifiers when mowing.
  • the data operation processor establishes the pixel coordinate system and the coordinate system of the actuator 100a, recognizes the pixel coordinates of the first virtual obstacle identifier 810a added by the user on the first fusion image 720a, and converts the position of the first virtual obstacle identifier into the actual position of the obstacle 820a according to a preset coordinate conversion method; the control module 150a then controls the actuator 100a to avoid the obstacle 820a during operation.
  • in this way the user can add the first virtual obstacle identifier 810a to the real-time image 530a or simulated real-scene image 540a and have the self-propelled mowing system recognize and bypass the obstacle, which is convenient to operate and allows obstacle information to be placed accurately within the mowing area.
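  • Once an obstacle region is expressed in ground coordinates, avoidance can be reduced to a point-in-region test when planning waypoints; a minimal ray-casting sketch under that assumption (inside_polygon and waypoint_allowed are hypothetical names, not the patent's method):

```python
def inside_polygon(x, y, poly):
    """Ray-casting point-in-polygon test.

    poly : list of (x, y) vertices of an obstacle region already converted
           from the virtual obstacle identifier into ground coordinates.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def waypoint_allowed(x, y, obstacle_polys):
    """A waypoint is kept only if it lies outside every obstacle region."""
    return not any(inside_polygon(x, y, p) for p in obstacle_polys)
```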
  • the obstacle generation module 800a generates a virtual obstacle identifier corresponding to the obstacle in the real-time image 530a or the simulated real scene image 540a according to the instruction input by the user to form the first fused image 720a.
  • the user uses the receiving module 200a to set a virtual obstacle identifier in the real-time image 530a or simulated real-scene image 540a at the position of an obstacle in the actual mowing area, or of an area that does not require mowing, as a marker of the region the actuator 100a must not work in and needs to bypass during the actual mowing operation.
  • the obstacle generation module 800a presets obstacle models, such as stone models, tree models, and flower models, for the user to choose.
  • through the interactive interface 520a, the user places the selected model at the obstacle's position in the simulated real-scene image 540a or real-time image 530a, so that the obstacle corresponds to the simulated real scene. After the user inputs the relevant information, the image processor 320 generates the corresponding simulated obstacle 640 in the simulated real-scene image 540a, and the control module 150a controls the actuator 100a to avoid the obstacle during operation.
  • the obstacle generation module 800a generates a virtual obstacle identifier corresponding to the obstacle in the real-time image 530a or simulated real-scene image 540a to form the first fusion image 720a, which includes the size, shape, and position information of the virtual obstacle identifier.
  • the sending module 600a sends the information of the first fusion image 720a to the control module 150a, so that the control module 150a controls the actuator 100a to bypass the regions marked by the virtual obstacle identifiers when mowing in the mowing area, thereby avoiding the obstacles.
  • the first fusion image 720a may also include a first virtual boundary 710a.
  • the boundary generation module 700a generates a first virtual boundary 710a corresponding to the mowing boundary in the real-time image 530a or simulated real-scene image 540a by calculating feature parameters, so that the control module 150a, based on the information of the first fusion image 720a, controls the actuator 100a to work in the first mowing area corresponding to the first virtual mowing area, inside the first virtual boundary 710a and outside the virtual obstacle identifiers; the actuator 100a is thereby restricted to operating within the first boundary while avoiding the marked obstacles.
  • Obstacles can be objects that occupy space, such as stones, or areas that do not require mowing, such as flowers and special plants; an obstacle can also be understood as any user-designated region within the current first virtual boundary 710a that should not be worked, so that the working area can be shaped into a special pattern to meet the user's needs for beautifying the lawn.
  • the obstacle generation module 800b generates a first virtual obstacle 810b corresponding to the mowing obstacle in the real-time image 530b or the simulated real-time image 540b by calculating feature parameters.
  • the first fusion image 720b includes a first virtual mowing area 760b and a first virtual obstacle 810b in the first virtual mowing area 760b.
  • the first virtual mowing area 760b corresponds to the object distribution and positions of the actual first mowing area 770b, which is the mowing area where the actuator 100b needs to work.
  • the obstacle generation module 800b is equipped with an obstacle analysis algorithm.
  • the obstacle 820b in the area to be mowed is detected by the image acquisition module 400b, and a first virtual obstacle 810b is generated at the position of the corresponding mowing obstacle 820b in the real-time image 530b or simulated real-scene image 540b; it is then fused with the real-time image 530b or simulated real-scene image 540b to generate the first fusion image 720b, which is displayed through the display module 500b.
  • the first fusion image 720b includes a first virtual obstacle 810b.
  • the first virtual obstacle 810b corresponds to at least one actual obstacle 820b, namely a mowing obstacle 820b in the current environment detected by the obstacle generation module 800b.
  • the sending module 600b is electrically or communicatively connected with the control module 150b.
  • the sending module 600b sends the information of the first fused image 720b to the control module 150b.
  • the information of the first fused image 720b includes the position information of the first virtual obstacle 810b.
  • according to the position information of the first virtual obstacle 810b, the control module 150b controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, based on the detected position of the actuator 100b, to keep outside the actual obstacle corresponding to the first virtual obstacle 810b.
  • the receiving module 200b asks the user through the display interface whether the first virtual obstacle 810b information in the current first fused image 720b needs to be corrected.
  • user input on whether the first virtual obstacle 810b in the first fusion image 720b needs to be corrected is received.
  • the user manually enters an instruction to correct the first virtual obstacle 810b, thereby generating a user-designated second virtual obstacle 830b; through the control panel, the user corrects the first virtual obstacle 810b in the displayed first fusion image 720b according to the actual mowing obstacles.
  • the processing component 180 further includes a correction module 801b.
  • when the user inputs information that the first virtual obstacle 810b needs to be corrected, the correction module 801b receives the user instruction to correct it and generates a second virtual obstacle 830b in the real-time image 530b or simulated real-scene image 540b, thus forming a second fusion image 740b.
  • the second fused image 740b includes a modified second virtual obstacle 830b, and the second virtual obstacle 830b corresponds to at least one obstacle 820b that the actual user needs to avoid.
  • the control module 150b controls the actuator 100b, according to the position information of the second virtual obstacle 830b, to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, based on the detected position of the actuator 100b, to avoid the actual obstacle corresponding to the second virtual obstacle 830b when mowing.
  • Obstacles can be objects that occupy space, such as stones, or areas that do not require mowing, such as flowers and special plants.
  • As shown in Fig. 21, the processing component 180 includes a path generation module 900c. The path generation module 900c generates a walking path 910c in the real-time image 530c or simulated real-scene image according to the instructions input by the user, so as to form the first fusion image 720c.
  • the path generation module 900c provides preset mowing path modes, such as a bow-shaped (boustrophedon) path, in which the actuator 100c is controlled to work back and forth within the boundary, or a back-shaped (spiral) path, in which the actuator 100c is controlled to work gradually around a center.
  • the path generation module 900c applies a preset algorithm within the generated first virtual boundary 710c to design the walking path 910c in the mowing area; from the position coordinates of the generated walking path 910c in the actuator 100c coordinate system, the corresponding pixel coordinates are calculated, so that the walking path 910c is displayed in the real-time image 530c or the simulated real scene and fused into it to generate the first fusion image 720c.
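  • A minimal sketch of generating such a bow-shaped path over a rectangular area, with the row spacing set by the cutting width (bow_path is an illustrative name; the patent does not specify this algorithm):

```python
def bow_path(x_min, x_max, y_min, y_max, spacing):
    """Generate a bow-shaped (boustrophedon) path over a rectangular area.

    Rows are spaced by the cutting width; the direction alternates each row
    so the actuator works back and forth within the boundary.
    Returns a list of (x, y) waypoints.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints
```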
  • the sending module 600c sends the first fusion image 720c to the control module 150c, and the control module 150c controls the walking component 110c to walk along the walking path 910c in the first fusion image 720c and perform mowing operations on the mowing area.
  • the processing component 180 further includes a correction module 801c.
  • the user can modify the walking path 910c in the first fusion image 720c through the receiving module 200c, the correction module 801c revising the path generated by the path generation module 900c. The generated walking path 910c is corrected on the first fusion image 720c through the interactive interface 520c: parts of the path can be selected and deleted, and line segments can be added to the first fusion image 720c to create new path sections.
  • the path generation module 900c includes a preset algorithm that calculates and generates the first walking path 910c according to the characteristic parameters of the mowing area and displays it on the display module 500c in the real-time image 530c or simulated real-scene image.
  • the path generation module 900c automatically calculates and generates the first walking path 910c according to the obtained mowing boundary information and area information.
  • the path generation module 900c is configured to generate a first walking path 910c according to the characteristic parameters of the mowing area, such as a bow-shaped path, a back-shaped path or a random path.
  • the receiving module 200c receives the user's input on whether the first walking path 910c in the first fusion image 720c needs to be corrected. If the user selects correction, a correction instruction is input through the receiving module 200c to delete some line segments or regions of the first walking path 910c and add others, generating a second walking path 920c in the real-time image 530c or the simulated real-scene image.
  • the correction module 801c recognizes the user's correction instruction and fuses the coordinates of the second walking path 920c into the real-time image 530c or simulated real-scene image to generate the second fusion image 740c.
  • the sending module 600c sends the information of the second fusion image 740c to the control module 150c, and the control module 150c controls the actuator 100c to walk in the mowing area along the actual path corresponding to the second walking path 920c.
  • the path generation module 900c generates a preset path brush, such as a back-shaped path brush, a bow-shaped path brush, and a straight path brush for the user to select.
  • the path generation module 900c presents the selectable path brushes on the interactive interface 520c. The user selects a brush and paints over the region of the real-time image 530c or simulated real-scene image where the actuator 100c is expected to work, generating a back-shaped, bow-shaped, or straight path in the corresponding area; the corresponding walking path 910c is thus generated in the image, and the control module 150c controls the actuator 100c to walk and work along the path in the actual mowing area corresponding to the walking path 910c.
  • the path generation module 900c can also receive patterns, text, and other graphics sent by the user through the receiving module 200c and calculate the corresponding walking path 910c from them. The control module 150c controls the actuator 100c to walk and mow along the generated walking path 910c, imprinting the pattern sent by the user in the mowing area and thereby enriching the appearance of the lawn.
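  • One plausible way to turn a user-supplied pattern into such a path is to rasterize it to a binary mask and scan the marked cells row by row; the sketch below is an assumption for illustration (pattern_to_waypoints is not from the patent):

```python
import numpy as np

def pattern_to_waypoints(mask, origin, scale):
    """Turn a binary pattern (e.g. rasterized text) into mowing waypoints.

    mask   : 2D 0/1 array; 1 marks cells the pattern should be cut into
    origin : (x0, y0) of the pattern's lower-left corner in the mowing area
    scale  : ground size of one mask cell in metres
    Scans row by row, alternating direction like a bow-shaped path, and
    emits a waypoint for every marked cell.
    """
    x0, y0 = origin
    waypoints = []
    for r, row in enumerate(np.asarray(mask)):
        cols = np.flatnonzero(row)
        if r % 2:               # alternate scan direction on odd rows
            cols = cols[::-1]
        for c in cols:
            waypoints.append((x0 + c * scale, y0 + r * scale))
    return waypoints
```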
  • when the boundary generation module 700, the path generation module 900c, and the obstacle generation module 800b generate the corresponding virtual boundary, virtual obstacle identifier, and walking path 910c, they can operate on the real-time image or simulated real-scene image displayed by the display module.
  • the processing assembly 180 also includes a guide channel setting module.
  • the guide channel setting module controls the interactive interface 520c projected by the projection device 510 to generate a guide channel setting key or a setting interface.
  • the user uses the guide channel setting module to add a virtual guide channel identifier 560c to the simulated real-scene image 540c or the real-time image 530c.
  • the guide channel guides the actuator as it moves from one working area to another.
  • the self-moving mowing system surveys the mowing area and, when the working environment has multiple relatively independent working areas, recognizes and generates the corresponding first virtual sub-mowing area 770c and second virtual sub-mowing area 780c; alternatively, the user selects the target working areas, choosing at least the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c through the simulated real-scene image 540c.
  • the guide channel setting module is used to receive the virtual guide channel set by the user between the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c, and to guide the actuator 100c along the walking path 910c between the first sub-mowing area and the second sub-mowing area corresponding to the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c.
  • the user selects the corresponding virtual guide channel identifier 560c in the simulated real-scene image 540c according to the desired movement channel of the actuator 100c between the first mowing area and the second mowing area, and the control module 150c guides the travel of the actuator 100c according to the virtual guide channel identifier 560c fused into the simulated real-scene image.
  • the self-moving mowing system also includes a detection device for detecting the operating condition of the actuator 100c, such as the machine parameters, working mode, fault conditions, and alarm information of the actuator 100c.
  • the display module can also display the actuator's machine parameters, working mode, fault conditions, and alarm information through the interactive interface; the data calculation processor 310 computes the display information and controls the projection device to reflect the machine information dynamically in real time, which makes it convenient for the user to control the actuator and obtain its operating status.
  • the self-moving mowing system also includes a voltage sensor and/or a current sensor, a rainfall sensor, and a boundary recognition sensor.
  • the above sensors can be installed in the actuator; the voltage sensor and the current sensor are used to detect the current and voltage values during the operation of the actuator in order to analyze its current operating information.
  • the rainfall sensor is used to detect rain in the environment of the actuator.
  • the boundary recognition sensor is used to detect the boundary of the working area; it can be a sensor matched to a buried electronic boundary wire, a camera device that captures environmental information, or a positioning device.
  • the current rainfall information is detected by the rainfall sensor, and the image processor renders the corresponding raining scene and rainfall intensity in the generated simulated real-scene image.
  • detection devices such as the lidar, camera, and status sensors obtain the surrounding environment and height information of the actuator, which are displayed correspondingly in the simulated real-scene image.
  • a capacitance sensor is provided to detect the load on the mowing blade, so as to simulate the grass height after the actuator has operated.
  • the outdoor self-moving device may be a snowplow, which includes: an actuator 100d, including a walking assembly 110d for realizing the walking function and a working assembly for realizing a preset function; a housing for supporting the actuator 100d; an image acquisition module 400d, capable of acquiring a real-time image 530d including at least part of the working area and at least part of the working boundary; and a display module 500d, electrically or communicatively connected with the image acquisition module 400d and configured to display the real-time image 530d or the simulated real-scene image 540d generated from the real-time image 530d.
  • the boundary generation module 700d generates, by calculating characteristic parameters, a first virtual boundary corresponding to the working boundary in the real-time image 530d to form the first fused image.
  • the receiving module 200d is used to receive the information input by the user as to whether the first virtual boundary in the first fused image needs to be corrected; the correction module 801d corrects the first virtual boundary on the user's instruction.
  • the sending module 600d sends the first fused image; the control module 300d is electrically or communicatively connected with the sending module 600d, and the control module 300d controls the actuator 100d to operate within the first virtual boundary.
  • the outdoor self-moving device further includes an obstacle generation module, which generates a virtual obstacle identifier corresponding to the obstacle in the real-time image 530d according to the instructions input by the user to form a first fused image.
  • the image acquisition module 400d acquires a real-time image 530d including at least part of the working area and at least one obstacle located in the working area and is electrically or communicatively connected with the sending module 600d; the control module 300d controls the actuator 100d to avoid the virtual obstacle in the first fused image.
  • the obstacle generation module generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d to form a first fused image, and the control module 300d controls the actuator 100d to avoid the virtual obstacle in the first fused image.
  • the obstacle generation module generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d or the simulated real-scene image 540d to form the first fused image.
  • the receiving module 200d receives the information input by the user as to whether the first virtual obstacle identifier in the first fused image needs to be corrected.
  • the correction module 801d, when the user inputs information that the first virtual obstacle identifier needs to be corrected, receives the user's instruction to correct the first virtual obstacle identifier in the real-time image 530d.
  • the obstacle generation module generates a first virtual obstacle identifier in the real-time image 530d or the simulated real-scene image 540d according to the instructions input by the user to form the first fused image; the sending module 600d sends the first fused image; the control module 300d is electrically or communicatively connected with the sending module 600d, and the control module 300d controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image.
  • the path generation module generates a walking path in the real-time image 530d or the simulated real-scene image 540d according to the instructions input by the user to form the first fused image; the sending module 600d sends the first fused image; and the control module 300d is electrically or communicatively connected with the sending module 600d.
  • the control module 300d controls the walking assembly 110d to walk along the walking path in the first fused image.
  • the path generation module generates the first walking path in the real-time image 530d or the simulated real-scene image 540d according to the calculated characteristic parameters of the working area to form the first fused image; the receiving module 200d is used to receive the user's input as to whether the first walking path needs to be corrected.
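As an illustration of the path brush idea mentioned in the list above, the following is a minimal sketch, not the patent's implementation, of how a back-shaped (rectangular spiral) brush might fill a user-selected rectangular region with waypoints one cutting width apart; the function name, the fixed cutting width, and the rectangular region are all illustrative assumptions.

```python
# Minimal sketch: a "back-shaped" (rectangular spiral) path brush.
# The user brushes a rectangular region in the image; the brush fills it
# with an inward spiral whose spacing equals the cutting width.
# All names and the fixed spacing are illustrative assumptions.

def spiral_brush(x_min, y_min, x_max, y_max, spacing=0.3):
    """Return a list of (x, y) waypoints spiralling inward over the region."""
    path = []
    while x_max - x_min > spacing and y_max - y_min > spacing:
        path += [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
        path.append((x_min, y_min + spacing))  # step inward to the next ring
        x_min += spacing; y_min += spacing
        x_max -= spacing; y_max -= spacing
    return path

if __name__ == "__main__":
    waypoints = spiral_brush(0.0, 0.0, 5.0, 4.0, spacing=0.3)
    print(f"{len(waypoints)} waypoints, first ring: {waypoints[:4]}")
```

In a full system, the returned waypoints would then be converted from the pixel coordinate system into the actuator coordinate system before being sent to the control module.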

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • General Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Guiding Agricultural Machines (AREA)
  • Harvester Elements (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A self-moving mowing system, comprising: an actuator (100), including a mowing assembly (120) for implementing the mowing function and a walking assembly (110) for implementing the walking function; an image acquisition module (400), capable of acquiring a real-time image (530) of the mowing area; a display module (500), configured to display the real-time image or a simulated real-scene image (540) generated from the real-time image; a receiving module (200), for receiving instructions input by the user; an obstacle generation module (800a, 800b), which generates a first virtual obstacle identifier (810a, 810b) according to the instructions input by the user to form a first fused image (720a, 720b); and a control module (150), electrically or communicatively connected with a sending module (600), the control module controlling the actuator to avoid the first virtual obstacle identifier in the first fused image.

Description

Self-moving mowing system, self-moving mower and outdoor self-moving device
This application claims priority to Chinese patent application No. 201910992552.8 filed on October 18, 2019 and Chinese patent application No. 201911409433.1 filed on December 31, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to an outdoor power tool, for example to a self-moving mowing system, a self-moving mower and an outdoor self-moving device.
Background
As an outdoor mowing tool, a self-moving mowing system does not require prolonged user operation and is favored by users for being intelligent and convenient. When a conventional self-moving mowing system mows, obstacles such as trees and stones are often present in the mowing area; they not only disturb the system's travel trajectory, but repeated collisions with them also tend to damage the system. The mowing area may also contain regions the user does not want mowed, such as beds of planted flowers; a conventional self-moving mowing system cannot detect such a region and may mow it by mistake, failing to meet the user's mowing needs. Other common outdoor walking devices, such as snowplows, have the same problems.
Summary
This application provides a self-moving mowing system that can display a simulated real-scene image or a real-time image of the actuator, allow the user to add obstacle identifiers on that image, control the system to bypass the obstacle region, and let the user intuitively observe the system's working state.
An embodiment of this application provides a self-moving mowing system, comprising: an actuator, including a mowing assembly for implementing the mowing function and a walking assembly for implementing the walking function; a housing for supporting the actuator; an image acquisition module, capable of acquiring a real-time image including at least part of the mowing area and at least part of the mowing boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates, by calculating characteristic parameters, a first virtual boundary corresponding to the mowing boundary in the real-time image to form a first fused image; a receiving module, for receiving information input by the user as to whether the first virtual boundary in the first fused image needs to be corrected; a correction module, which, when the user indicates that the first virtual boundary needs correction, receives user instructions to correct it and generates a second virtual boundary in the real-time image or the simulated real-scene image, forming a second fused image; a sending module, which sends the information of the uncorrected first fused image or of the corrected second fused image; and a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to operate within the first virtual boundary or the second virtual boundary.
Optionally, the receiving module is arranged outside the actuator, and the receiving module includes any one or more of a keyboard, a mouse, a microphone, a touch screen, a remote control and/or a handle, a camera, a lidar, and a mobile device such as a mobile phone.
Optionally, the receiving module is further used to receive a first virtual obstacle added by the user, and the actuator is controlled to avoid, while walking, the actual obstacle corresponding to the first virtual obstacle.
Optionally, the receiving module is further used to receive a first walking path added by the user, and the actuator is controlled to walk and work within the second virtual boundary along the first walking path.
An embodiment provides a self-moving mower, comprising: a main body, including a housing; a mowing element, connected to the main body and used for cutting vegetation; an output motor, driving the mowing element; walking wheels, connected to the main body; a drive motor, driving the walking wheels to rotate; an image acquisition module, capable of acquiring a real-time image including at least part of the mowing area and at least one obstacle located within it, and arranged to send the real-time image to a display module to display the real-time image or a simulated real-scene image generated from it; and a control module, capable of generating, according to instructions input by the user, a virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated real-scene image to form a first fused image, the control module controlling the mower to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
An embodiment provides a self-moving mowing system, comprising: an actuator, including a mowing assembly for implementing the mowing function and a working assembly for implementing the walking function; a housing for supporting the actuator; an image acquisition module, capable of acquiring a real-time image including at least part of the mowing area and at least part of the mowing boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates, by calculating characteristic parameters, a first virtual boundary corresponding to the mowing boundary in the real-time image to form a first fused image; a sending module, which sends the first fused image; and a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to operate within the first virtual boundary.
Optionally, the self-moving mowing system further includes a positioning module, the positioning module including one or a combination of a GPS positioning unit, an IMU inertial measurement unit, and a displacement sensor, used to obtain the real-time position of the actuator; by analyzing the actuator's real-time positioning data, control adjustments of the actuator's travel and mowing are obtained.
To achieve the above main purpose of this application, a display module is proposed that includes a projection device and an interactive interface; the interactive interface is produced by projection from the projection device, and the interactive interface displays the simulated real-scene image or the real-time image.
Optionally, the self-moving mowing system further includes a guide channel setting module, the guide channel setting module being used to receive a virtual guide channel set by the user between a first virtual sub-mowing area and a second virtual sub-mowing area, and to guide the actuator along the walking path between the first sub-mowing area and the second sub-mowing area corresponding to the first virtual sub-mowing area and the second virtual sub-mowing area.
An embodiment provides an outdoor self-moving device, comprising: an actuator, including a walking assembly for implementing the walking function and a working assembly for implementing a preset function; a housing for supporting the actuator; an image acquisition module, capable of acquiring a real-time image including at least part of the working area and at least part of the working boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates, by calculating characteristic parameters, a first virtual boundary corresponding to the working boundary in the real-time image to form a first fused image; a receiving module, for receiving information input by the user as to whether the first virtual boundary in the first fused image needs to be corrected; a correction module, which, when the user indicates that the first virtual boundary needs correction, receives user instructions to correct it and generates a second virtual boundary in the real-time image or the simulated real-scene image, forming a second fused image; a sending module, which sends the uncorrected first fused image or the corrected second fused image; and a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to operate within the first virtual boundary or the second virtual boundary.
An embodiment provides an outdoor self-moving device, comprising: an actuator, including a walking assembly for implementing the walking function and a working assembly for implementing a preset function; a housing for supporting the actuator; an image acquisition module, capable of acquiring a real-time image including at least part of the working area and at least part of the working boundary; a display module, electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated real-scene image generated from it; a boundary generation module, which generates, by calculating characteristic parameters, a first virtual boundary corresponding to the working boundary in the real-time image to form a first fused image; a sending module, which sends the first fused image; and a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to operate within the first virtual boundary.
Brief Description of the Drawings
Fig. 1 is a structural diagram of the actuator of the self-moving mowing system of this application.
Fig. 2 is a schematic diagram of the connection between the actuator in Fig. 1 and the projection device.
Fig. 3 is a schematic diagram of part of the internal structure of the actuator in Fig. 2.
Fig. 4 is a schematic framework diagram of the actuator in Fig. 1.
Fig. 5 is a schematic framework diagram of the self-moving mowing system in Fig. 1.
Fig. 6 is a schematic diagram of the mowing area in the first embodiment of this application.
Fig. 7 is a schematic diagram of the interactive interface in the first embodiment of this application.
Fig. 8 is a schematic diagram of the interactive interface displaying the real-time image in the first embodiment of this application.
Fig. 9 is a schematic diagram of the interactive interface displaying the first fused image in the first embodiment of this application.
Fig. 10 is a schematic diagram of the second fused image in the interactive interface in the first embodiment of this application.
Fig. 11 is a schematic diagram of the actuator coordinate system in the first embodiment of this application.
Fig. 12 is a schematic diagram of the pixel coordinate system in the first embodiment of this application.
Fig. 13 is a schematic framework diagram of the self-moving mowing system in the second embodiment of this application.
Fig. 14 is a schematic diagram of the mowing area in the second embodiment of this application.
Fig. 15 is a schematic diagram of the first fused image in the second embodiment of this application.
Fig. 16 is a schematic framework diagram of the self-moving mowing system in the third embodiment of this application.
Fig. 17 is a schematic diagram of the mowing area in the third embodiment of this application.
Fig. 18 is a schematic diagram of the first fused image in the third embodiment of this application.
Fig. 19 is a schematic diagram of the first fused image in the third embodiment of this application.
Fig. 20 is a schematic diagram of the second fused image in the third embodiment of this application.
Fig. 21 is a schematic framework diagram of the self-moving mowing system in the fourth embodiment of this application.
Fig. 22 is a schematic diagram of the mowing area in the fourth embodiment of this application.
Fig. 23 is a schematic diagram of the first fused image in the fourth embodiment of this application.
Fig. 24 is a schematic diagram of the first fused image in the fourth embodiment of this application.
Fig. 25 is a schematic diagram of the second fused image in the fourth embodiment of this application.
Fig. 26 is a schematic diagram of the setting of the virtual guide channel identifier in the fourth embodiment of this application.
Fig. 27 is a structural schematic diagram of the outdoor self-moving device in the fifth embodiment of this application.
Detailed Description
This application provides a self-moving mowing system. Referring to Figs. 1 to 3, the self-moving mowing system includes an actuator 100 for trimming vegetation. The actuator 100 includes at least a mowing assembly 120 for implementing the mowing function and a walking assembly 110 for implementing the walking function, and further includes a main body 140 and a housing 130; the housing 130 encloses and supports the main body 140, the mowing assembly 120 and the walking assembly 110. The mowing assembly 120 includes a mowing element 121 and an output motor 122; the output motor 122 drives the mowing element 121 to rotate and trim the vegetation, and the mowing element 121 may be a blade or another element capable of cutting and trimming a lawn. The walking assembly 110 includes at least one walking wheel 111 and a drive motor 112 for driving the walking wheel 111, the drive motor 112 supplying torque to the at least one walking wheel 111. Through the cooperation of the mowing assembly 120 and the walking assembly 110, the self-moving mowing system can control the actuator 100 to move over the vegetation and work. The actuator 100 is the hardware with which the self-moving mowing system implements the mowing function; optionally, the actuator 100 is a self-moving mower.
Referring to Fig. 4, the self-moving mowing system further includes a receiving module 200, a processing assembly 180 and a power supply 170; the receiving module 200 is used at least to receive the user's control instructions for the self-moving mowing system. The processing assembly 180 includes at least a control module 150 for controlling the operation of the self-moving mowing system; the control module 150 controls the operation of the drive motor 112 and the output motor 122 according to the instructions and the operating parameters of the system, so as to control the actuator 100 to walk within the corresponding working area and perform mowing. The power supply 170 supplies power to the walking assembly and the output assembly; optionally, the power supply 170 is a pluggable battery pack mounted to the housing 130.
The self-moving mowing system includes an image acquisition module 400 and a display module 500, and the processing assembly 180 includes the control module 150 for computing image information. The display module 500 is electrically or communicatively connected with the image acquisition module 400; the image acquisition module 400 can acquire a real-time image 530 including at least part of the mowing area and at least part of the mowing boundary, and the display module 500 displays the corresponding real-time image 530 of the mowing area and mowing boundary. Referring to Figs. 3 and 6, the image acquisition module 400 includes at least one or a combination of a camera 410, a lidar 420 and a TOF sensor 430. The camera 410 and the lidar 420 obtain information about the surroundings of the actuator 100: the camera 410 captures an environmental image of the mowing area and mowing boundary to be worked, while the laser reflections of the lidar 420 yield characteristic parameters of objects within the mowing area and boundary, such as their positions, distances and slant ranges relative to the current actuator 100, and their shapes. The control module 150 receives the image information of the mowing area and boundary acquired by the image acquisition module 400 and merges the characteristic parameters of the objects in the image onto the image. The display module 500 shows the user the real-time image 530 of the mowing area and mowing boundary acquired by the image acquisition module 400.
Referring to Fig. 3, to improve the accuracy of detecting the position of the actuator 100, the self-moving mowing system further includes a positioning module 300 for obtaining the position of the actuator 100; by analyzing the actuator's real-time positioning data, control adjustments of its travel and mowing are obtained. The positioning module 300 includes one or a combination of a GPS positioning unit 310, an IMU inertial measurement unit 320, and a displacement sensor 330 for obtaining the position of the actuator 100. The GPS positioning unit 310 obtains the position information or position estimate of the actuator 100 and the starting position of its movement. The IMU 320 includes an accelerometer and a gyroscope and detects the offset of the actuator 100 during travel. The displacement sensor 330 may be arranged on the drive motor 112 or the walking wheel 111 to obtain displacement data of the actuator 100. By combining and correcting the information obtained by several of the above devices, more accurate position information is obtained, yielding the real-time position and attitude of the actuator 100.
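To make the combination of GPS, IMU and displacement-sensor data concrete, here is a minimal sketch of one plausible scheme; the patent does not specify the fusion algorithm, so the 2D state, the dead-reckoning step and the fixed GPS blend gain are all simplifying assumptions (a production system would more likely use an extended Kalman filter).

```python
import math

# Minimal sketch of fusing wheel odometry, a gyroscope, and GPS into one
# pose estimate, in the spirit of the positioning module 300. The blend
# gain and the 2D state are simplifying assumptions.

class PoseEstimator:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    def predict(self, wheel_dist, gyro_rate, dt):
        """Dead-reckoning step from the displacement sensor and the IMU gyro."""
        self.heading += gyro_rate * dt
        self.x += wheel_dist * math.cos(self.heading)
        self.y += wheel_dist * math.sin(self.heading)

    def correct_gps(self, gps_x, gps_y, gain=0.1):
        """Pull the drifting dead-reckoned position toward the GPS fix."""
        self.x += gain * (gps_x - self.x)
        self.y += gain * (gps_y - self.y)

est = PoseEstimator()
est.predict(wheel_dist=0.05, gyro_rate=0.02, dt=0.1)   # 10 Hz odometry/IMU
est.correct_gps(gps_x=0.06, gps_y=0.01)                # 1 Hz GPS fix
print(est.x, est.y, est.heading)
```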
In another approach, the control module 150 generates a simulated real-scene image 540 of the mowing area from the image information and data acquired by the image acquisition module 400. The simulated real-scene image 540 simulates the boundary, regions and obstacles of the mowing area; an actuator model 160 is built and displayed in the simulated real-scene image 540 at the position corresponding to the actuator 100 within the mowing area, so that the position and working state of the actuator model 160 are synchronized with the actual actuator 100.
Referring to Fig. 5, the display module 500 is used to project the simulated real-scene image 540. Exemplarily, the display module 500 produces an interactive interface 520 through projection by a projection device 510, and the interactive interface 520 displays the simulated real-scene image 540 of the actuator 100. The control module 150 controls the interactive interface 520 generated by the display module 500 to produce, alongside the simulated real-scene image 540, a control panel 550 for user operation, and the user controls the self-moving mowing system directly through the receiving module 200 or through the interactive interface 520. The projection device 510 may be a mobile phone screen or a hardware display screen, communicatively connectable with the processing assembly 180 and used to display the simulated real-scene image 540 or the real-time image 530.
Referring to Fig. 3, the control module 150 includes a data calculation processor 310 for processing data and an image processor 320 for producing images and modeling scenes. The data calculation processor 310 may be a CPU or a microcontroller with high data processing speed, and the image processor 320 may be an independent GPU (Graphics Processing Unit) module. While the actuator 100 runs, the data calculation processor 310 analyzes its operating data and environmental data, and the image processor 320 models this data to generate the corresponding virtual scene information; the projection device 510 renders the concrete virtual scene and, as the real-time operating state of the actuator 100 changes, the displayed virtual scene is updated synchronously to match the actual operating state of the actuator 100. The control module 150 further includes a memory for storing data, which stores the algorithms of the self-moving mowing system and the data generated during its operation.
In the first embodiment of this application, the processing assembly 180 further includes a boundary generation module 700, the control module 150 and a sending module 600. Referring to Figs. 7 and 8, a first virtual boundary 710 corresponding to the mowing boundary is generated in the real-time image 530 or the simulated real-scene image 540 by calculating characteristic parameters, to form a first fused image 720. The boundary generation module 700 contains a boundary analysis algorithm that determines the mowing boundary of the area to be mowed by analyzing color, grass height and shape in the real-time image 530 or simulated real-scene image 540, and generates the first virtual boundary 710 at the corresponding boundary position within the image, so that the first virtual boundary 710 is fused with the real-time image 530 or simulated real-scene image 540 into the first fused image 720. The first fused image 720 includes the first virtual boundary 710 and a first virtual mowing area 760 delimited by it; the first virtual boundary 710 corresponds to the actual first boundary, namely the mowing boundary in the current environment detected by the boundary generation module 700, and the first virtual mowing area 760 corresponds in object distribution and position to the actual first mowing area 770. The sending module 600 is electrically or communicatively connected with the control module 150 and sends the information of the first fused image 720, which includes the position information of the first virtual boundary 710, to the control module 150. The control module controls the actuator to operate within the first virtual boundary: the first virtual boundary 710 delimits the first virtual mowing area 760, and the control module 150, according to the position information of the first virtual boundary 710, controls the actuator 100 to mow in the actual first mowing area 770 corresponding to the first virtual mowing area 760 and, based on the detected position of the actuator 100, restricts the actuator 100 to work only within the actual first boundary corresponding to the first virtual boundary 710.
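As one plausible reading of the boundary analysis described above (the text names color, grass height and shape as cues but not a specific algorithm), the sketch below segments grass by hue with OpenCV, takes the largest grass contour as the first virtual boundary 710, and overlays it on the frame to form a first fused image 720; the HSV thresholds and the morphology kernel are illustrative assumptions.

```python
import cv2
import numpy as np

# Minimal sketch of a color-based boundary analysis, as one plausible
# instance of the boundary generation module 700: segment grass by hue,
# take the largest grass region's contour as the first virtual boundary,
# and draw it onto the live frame to form a "first fused image".

def first_fused_image(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    grass = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))    # green-ish hues
    grass = cv2.morphologyEx(grass, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
    contours, _ = cv2.findContours(grass, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame_bgr, None
    boundary = max(contours, key=cv2.contourArea)             # first virtual boundary
    fused = frame_bgr.copy()
    cv2.drawContours(fused, [boundary], -1, (0, 0, 255), 2)   # overlay in red
    return fused, boundary

frame = cv2.imread("lawn.jpg")            # stand-in for a camera frame
if frame is not None:
    fused, boundary = first_fused_image(frame)
```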
The control module 150 is connected to and controls the drive motor 112 and the output motor 122, so that the control module 150 controls the actuator 100 to travel along the working path and mow. There are two walking wheels 111, a first walking wheel 113 and a second walking wheel 114, and the drive motor 112 is provided as a first drive motor 115 and a second drive motor 116. The control module 150 is connected to the first drive motor 115 and the second drive motor 116 and controls their rotational speeds through a drive controller, so as to control the travel state of the actuator 100. The processing assembly 180 obtains the real-time position of the actuator 100 and derives the control instructions for it, so as to keep the actuator 100 working within the first boundary. The control module 150 includes an output controller for controlling the output motor and a drive controller for controlling the drive motor 112; the output controller is electrically connected with the output motor 122 and controls its operation, thereby controlling the cutting state of the cutting blade. The drive controller is communicatively connected with the drive motor 112, so that after the receiving module 200 receives the user's start instruction, or start is otherwise determined, the control module 150 analyzes the travel route of the actuator 100 and the drive controller drives the walking wheels 111 through the drive motor 112. The control module 150 obtains the position information corresponding to the first virtual boundary 710 and, from the position of the actuator 100 detected by the positioning module 300, computes the steering and speed information the actuator 100 needs in order to work within the preset first boundary; it then has the drive controller regulate the speed of the drive motor 112 so that the actuator 100 travels at the preset speed, and it can rotate the two wheels at different speeds to steer the actuator 100. The user can, through the receiving module 200, command displacement of the actuator 100 and of the image acquisition module 400, thereby moving the corresponding real-time image 530 or simulated real-scene image 540 so that it shows the mowing area the user wants to view, and then add control instructions.
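The two-wheel differential steering described above can be summarized by standard differential-drive kinematics; the sketch below maps a commanded forward speed and turn rate to the rotational speeds of the first drive motor 115 and the second drive motor 116. The wheel radius and track width are illustrative assumptions.

```python
# Minimal sketch of the differential-speed steering described above:
# a commanded forward speed v (m/s) and turn rate omega (rad/s) are
# mapped to the angular speeds of the two drive motors. Wheel radius
# and track width are illustrative assumptions.

WHEEL_RADIUS = 0.10   # m
TRACK_WIDTH  = 0.35   # m, distance between the two walking wheels

def wheel_speeds(v, omega):
    """Return (left, right) wheel angular velocities in rad/s."""
    v_left  = v - omega * TRACK_WIDTH / 2.0
    v_right = v + omega * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# Drive forward at 0.5 m/s while turning left at 0.8 rad/s:
w_left, w_right = wheel_speeds(0.5, 0.8)
print(w_left, w_right)
```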
接收模块200可以设置在执行机构100之外的外设装置,外设装置和执行机构100可通信地连接,外设装置接收用户的控制指令,并发送给处理组件180由处理组件180分析用户的控制指令以控制执行机构100执行。外设装置可以被设置为键盘、鼠标、麦克风、触摸屏、遥控器和/或手柄、摄像头410、激光雷达420、手机等移动设备的任一种或多种。用户可通过鼠标、键盘、遥控器、手机等硬件直接手动输入命令信息,也可通过语音、手势、眼部运动等信号输入命令信息。通过设置摄像头410,用于采集用户眼部运动或手部运动的信息特征,从而分析用户给出的控制指令。
在另一种实施方式中,投射设备510采用虚拟成像技术,通过干涉和衍射原理,通过全息投影,通过AR设备或在VR眼镜设备内显示图像,并相应地生成虚拟控制面板550,并通过通信连接的外设装置310,如遥控器或手柄实现指令输入。可选的,交互模块400包括动作捕捉单元及交互定位装置,动作捕捉单元被设置为摄像头410和/或红外感应装置,用于捕获用户的手部或控制器的动作,交互定位装置获取投射设备510的位置,并通过分析用户手部的位移以及和投射设备510的相对位置,分析用户对生成的虚拟控制面板550的选择,并生成对应的控制指令。
在一种实施方式中,投射设备510搭载在外设装置上,如外设装置310选为手机或计算机或VR设备,投射设备510对应为手机屏幕、计算机屏幕、幕布、VR眼镜等。
显示模块500至少具有投射设备510和交互界面520,通过投射设备510显示交互界面520,交互界面520中显示实时图像530或模拟实景图540和第一融合图像720。投射设备510可以被实施为硬件显示屏,硬件显示屏可以是安装到外设装置上,如手机、计算机等的电子设备,或是直接安装在执行机构100上,或使得处理组件180可通信地匹配到多种显示屏,并由用户选择投射对象以显示对应的实时图像530或模拟实景图540。
Referring to Fig. 9, the receiving module 200 can also generate a control panel 550 on the interactive interface 520 to receive the user's control instructions through it, and to receive the user's input as to whether the first virtual boundary 710 in the first fused image 720 needs to be corrected. When the user chooses to correct the information of the first fused image 720, the user manually inputs instructions to correct the first virtual boundary 710, generating a user-specified second virtual boundary 730. After the boundary generation module computes and generates the first fused image 720, the display module 500 produces the interactive interface 520 through the projection device 510 to present the first fused image 720 and the first virtual boundary 710; the receiving module 200 asks the user through the interactive interface 520 whether the first virtual boundary 710 needs correction, and the user selects modification through the receiving module 200 and, according to the actually required mowing boundary, corrects the first virtual boundary 710 in the displayed first fused image 720 through the control panel 550. The processing assembly 180 further includes a correction module 801, which, when the user indicates that the first virtual boundary 710 needs correction, receives user instructions to correct the first virtual boundary 710 and generate a second virtual boundary 730 in the real-time image 530 or simulated real-scene image 540, forming a second fused image 740.
The second fused image 740 includes the second virtual boundary 730 and a second virtual mowing area delimited by it; the second virtual boundary 730 corresponds to an actual second boundary, the actual area to be mowed as corrected by the user, and the second virtual mowing area corresponds in object distribution and position to the actual second mowing area. The control module controls the actuator to operate within the second virtual boundary: the second virtual boundary delimits the second virtual mowing area, and the control module 150, according to the position information of the second virtual boundary 730, controls the actuator 100 to mow in the actual second mowing area corresponding to the second virtual mowing area and, based on the detected position of the actuator 100, restricts the actuator 100 to work only within the actual second boundary corresponding to the second virtual boundary 730.
Referring to Figs. 10 and 11, in order to recognize the user's correction instructions on the first fused image 720 and generate the second fused image 740, that is, to fuse the user's corrections into the real-time image 530 or simulated real-scene image 540, the data calculation processor establishes an actuator coordinate system 750 from the first fused image 720 and the actuator localization obtained by the positioning module 300 and the image acquisition module 400, for analyzing the position of the actuator 100 in the environment to be mowed. The data calculation processor also establishes a pixel coordinate system 760 for the generated first fused image 720, so that each pixel in the first fused image 720 has its own pixel coordinates. When the user selects a line segment or region in the first fused image 720 through the interactive interface 520, what is selected is in essence a set of pixels of the first fused image 720. The correction module 801 computes the actual position information of the second boundary by analyzing the real-time position of the actuator 100 in the actuator coordinate system 750, the rotation angle of the image acquisition module 400, and the pixel coordinate set corresponding to the second virtual boundary 730 selected by the user; the corrected second virtual boundary 730 that the user chose on the first fused image 720 is thereby projected into the actual mowing area to obtain the user-specified second mowing area, and the second virtual boundary 730 is fused into the real-time image 530 or simulated real-scene image 540 to generate the second fused image 740. The coordinates of the second virtual boundary 730 are fixed in the actuator coordinate system 750, while its position in the pixel coordinate system 760 moves as the user shifts the real-time image 530 or simulated real-scene image 540. Through the user's correction, errors in the system's automatic recognition of the mowing boundary can be remedied, so the boundary of the mowing area can be set intuitively and accurately; and since the first virtual boundary 710 is recognized and generated by devices such as the image sensor, the user only needs to correct on top of the first virtual boundary 710 to obtain the second virtual boundary 730, which makes setting the mowing boundary convenient.
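The transformation between the pixel coordinate system 760 and the actuator coordinate system 750 is not spelled out in the text; assuming a flat lawn, one standard way to realize it is a ground-plane homography calibrated from a few reference points, as in this sketch (the pixel and metre correspondences are made up for illustration).

```python
import numpy as np

# Minimal sketch of mapping user-selected pixels in the fused image to
# points in the actuator coordinate system 750. Assuming a flat lawn, the
# mapping from image pixels to the ground plane is a homography that can
# be calibrated once from four reference points; the values are made up.

px = np.array([[320, 480], [620, 470], [600, 250], [340, 255]], float)  # pixels
gd = np.array([[0.0, 1.0], [0.6, 1.0], [0.6, 3.0], [0.0, 3.0]], float)  # metres

def fit_homography(src, dst):
    """Solve dst ~ H @ src for a 3x3 homography via the DLT method."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 3)

H = fit_homography(px, gd)

def pixel_to_ground(x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]     # (x, y) in the actuator frame

print(pixel_to_ground(470, 360))        # a pixel the user tapped on
```

Applying `pixel_to_ground` to every pixel of a user-selected segment yields the boundary's coordinates in the actuator frame, which then stay fixed there while their pixel positions move with the displayed view.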
In another embodiment, the user can directly set the first virtual boundary 710 on the real-time image 530 or simulated real-scene image 540 through the receiving module 200; the boundary recognition module obtains the position information of the user-set first virtual boundary 710 and projects it onto the coordinates of the actuator 100, and the positioning module 300 detects the position of the actuator 100 so that the control module 150 controls the actuator 100 to move along the first boundary corresponding to the first virtual boundary 710, allowing the user to set the mowing boundary quickly.
In the second embodiment of this application, referring to Figs. 13 and 14, the processing assembly 180 includes an image acquisition module 400a and an obstacle generation module 800a. The image acquisition module 400a includes one or a combination of an image sensor, a lidar 420a, an ultrasonic sensor, a camera 410a and a TOF sensor 430a. The ultrasonic sensor emits ultrasonic waves and uses their return time to detect whether there is an obstacle in the mowing area, recording the obstacle's position information; the lidar 420a emits laser light and uses its reflection time to detect obstacles in the mowing area; the image sensor analyzes the shape and color of the acquired image and identifies, through algorithms, the image regions that correspond to obstacles. The obstacle generation module 800a fuses the obstacle detection information of the mowing area from the image acquisition module 400a into the real-time image 530a or simulated real-scene image 540a, generating through the display module 500a a first virtual obstacle identifier 810a at the corresponding position within the mowing area of the image, thereby producing the first fused image 720a, which is the real-time image 530a or simulated real-scene image 540a including the first virtual obstacle identifier 810a. The sending module 600a sends the information of the first fused image 720a to the control module 150a, which uses it to control the actuator 100a to avoid the virtual obstacle while mowing. The data calculation processor establishes the pixel coordinate system and the actuator 100a coordinate system; by recognizing the pixel coordinates of the first virtual obstacle identifier 810a that the user adds on the first fused image 720a and applying a preset coordinate transformation, it converts the identifier's position information into the position information of the actual obstacle 820a, and the control module 150a controls the actuator 100a to avoid the obstacle 820a during operation. The user can thus add the first virtual obstacle identifier 810a on the real-time image 530a or simulated real-scene image 540a and have the self-moving mowing system recognize and bypass the obstacle, which is convenient to operate and allows obstacle information to be added to the mowing area accurately.
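Once a virtual obstacle identifier has been converted into a polygon in the actuator coordinate system, the avoidance decision reduces to a point-in-polygon test on planned waypoints. The sketch below uses standard even-odd ray casting; the flower-bed polygon and the straight test path are illustrative assumptions, and a real planner would reroute around the region rather than simply drop waypoints.

```python
# Minimal sketch of the avoidance check: a virtual obstacle identifier,
# already converted into a polygon in the actuator coordinate system,
# is tested against planned waypoints with even-odd ray casting.

def inside(point, polygon):
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                hit = not hit
    return hit

obstacle = [(2.0, 2.0), (3.0, 2.0), (3.0, 3.0), (2.0, 3.0)]   # e.g. a flower bed
path = [(i * 0.5, 2.5) for i in range(12)]                    # a straight sweep
safe_path = [p for p in path if not inside(p, obstacle)]
print(len(path), "->", len(safe_path), "waypoints after the check")
```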
In another embodiment, referring to Fig. 15, the obstacle generation module 800a generates, according to instructions input by the user, a virtual obstacle identifier corresponding to an obstacle in the real-time image 530a or simulated real-scene image 540a to form the first fused image 720a. Through the receiving module 200a, the user sets virtual obstacle identifiers in the real-time image 530a or simulated real-scene image 540a at the positions of obstacles in the actual mowing area, or at regions that should not be mowed, as markers of regions in which the actuator 100a need not work and which it must bypass during actual mowing.
Since the mowing area may contain obstacles such as stones and trees, the obstacle generation module 800a provides preset obstacle models, such as a stone model, a tree model and a flower model, for the user to choose from. Viewing the simulated real-scene image 540a or the real-time image 530a in the interactive interface 520a, and judging from the environmental features it shows together with the actual state of the mowing area, the user determines the position in the simulated real-scene image 540a or real-time image 530a corresponding to an obstacle and, through the receiving module 200a, selects the type of obstacle as well as its position and size. After the user enters this information, the image processor 320 generates the corresponding simulated obstacle 640 in the generated simulated real-scene image 540a, and the control module 150a controls the actuator 100a to avoid the obstacle while operating.
The obstacle generation module 800a generates the virtual obstacle identifier corresponding to the obstacle in the real-time image 530a or simulated real-scene image 540a to form the first fused image 720a, which includes the size, shape and position information of the virtual obstacle identifier. The sending module 600a sends the information of the first fused image 720a to the control module 150a, which accordingly controls the actuator 100a to bypass the virtual obstacle identifier while mowing within the mowing area, satisfying the need to avoid obstacles.
The first fused image 720a may also include a first virtual boundary 710a: the boundary generation module 700a generates, by calculating characteristic parameters, the first virtual boundary 710a corresponding to the mowing boundary in the real-time image 530a or simulated real-scene image 540a, so that the control module 150a, based on the information of the first fused image 720a, controls the actuator 100a to work in the first mowing area corresponding to the first virtual mowing area that lies inside the first virtual boundary 710a and outside the virtual obstacle identifiers, thereby confining the actuator 100a to the range of the first boundary while avoiding the virtual obstacle identifiers. An obstacle may be a space-occupying object such as a stone or an article, or a region that should not be mowed such as flowers or special plants; an obstacle can also be understood as any region within the current first virtual boundary 710a in which the user does not want work performed, and such regions can form special patterns or shapes to satisfy the user's lawn-beautifying needs.
In the third embodiment of this application, referring to Figs. 16 to 19, the obstacle generation module 800b generates, by calculating characteristic parameters, a first virtual obstacle 810b corresponding to a mowing obstacle in the real-time image 530b or simulated real-scene image 540b to form the first fused image 720b. The first fused image 720b includes a first virtual mowing area 760b and the first virtual obstacle 810b within it; the first virtual mowing area 760b corresponds to the actual first mowing area 770b and matches it in object distribution and position, the first mowing area 770b being the mowing area in which the actuator 100b is to work. The obstacle generation module 800b contains an obstacle analysis algorithm: the image acquisition module 400b detects the obstacles 820b in the area to be mowed, and the first virtual obstacle 810b is generated at the corresponding obstacle position in the real-time image 530b or simulated real-scene image 540b, so that the first virtual obstacle 810b is fused with the real-time image 530b or simulated real-scene image 540b into the first fused image 720b. The simulated real-scene image 540b or the real-time image 530b is displayed by the display module 500b. The first fused image 720b includes the first virtual obstacle 810b, which corresponds to at least one actual obstacle 820b, namely the mowing obstacle 820b in the current environment detected by the obstacle generation module 800b. The sending module 600b is electrically or communicatively connected with the control module 150b and sends the information of the first fused image 720b, including the position information of the first virtual obstacle 810b, to the control module 150b; the control module 150b accordingly controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, based on the detected position of the actuator 100b, keeps the actuator 100b from working within the actual first obstacle corresponding to the first virtual obstacle 810b.
Optionally, referring to Fig. 20, after the obstacle generation module 800b generates the first fused image 720b, the receiving module 200b asks the user through the display interface whether the first virtual obstacle 810b in the current first fused image 720b needs correction, and receives the user's input as to whether it does. When the user chooses to correct the information of the first fused image 720b, the user manually inputs instructions to correct the first virtual obstacle 810b, generating a user-specified second virtual obstacle 830b, so that the user corrects the first virtual obstacle 810b in the displayed first fused image 720b through the control panel according to the actually required mowing obstacles. The processing assembly 180 further includes a correction module 801b, which, when the user indicates that the first virtual obstacle 810b needs correction, receives user instructions to correct the first virtual obstacle 810b and generate a second virtual obstacle 830b in the real-time image 530b or simulated real-scene image 540b, forming a second fused image 740b.
The second fused image 740b includes the corrected second virtual obstacle 830b, which corresponds to at least one actual obstacle 820b that the user needs avoided. The control module 150b, according to the position information of the second virtual obstacle 830b, controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, based on the detected position of the actuator 100b, keeps the actuator 100b from working within the actual second obstacle corresponding to the second virtual obstacle 830b; following the information of the fused image, the control module 150b controls the actuator 100b to avoid, while mowing, the actual obstacle positions corresponding to the second virtual obstacle 830b, so that the user can conveniently adjust the regions the self-moving mowing system avoids during operation. An obstacle may be a space-occupying object such as a stone or an article, or a region not to be mowed such as flowers or special plants.
In the fourth embodiment of this application, referring to Fig. 21, the processing assembly 180 includes a path generation module 900c, which generates a walking path 910c in the real-time image 530c or simulated real-scene image according to instructions input by the user, to form a first fused image 720c. The path generation module 900c provides preset mowing path patterns, such as a bow-shaped (boustrophedon) path, in which the actuator 100c is controlled to work back and forth progressively within the boundary, or a back-shaped (rectangular spiral) path, in which the actuator 100c is controlled to work progressively inward around a center.
Referring to Fig. 22, the processing assembly 180 includes a boundary generation module 700c. When the user sends a start instruction, the boundary analysis algorithm within the boundary generation module 700c determines the mowing boundary of the area to be mowed by analyzing color, grass height and shape in the real-time image 530c or the simulated real-scene image, and generates the first virtual boundary 710c at the corresponding boundary position in the image. Referring to Figs. 23 and 24, the path generation module 900c designs the walking path 910c within the generated first virtual boundary 710c according to a preset algorithm and, from the position coordinates of the generated walking path 910c in the actuator 100c coordinate system, computes the corresponding pixel coordinates in the pixel coordinate system, thereby displaying the generated walking path 910c in the real-time image 530c or simulated real-scene image and fusing it into that image to generate the first fused image 720c. The sending module 600c sends the first fused image 720c to the control module 150c, and the control module 150c controls the walking assembly 110c to walk along the walking path 910c in the first fused image 720c and mow the mowing area.
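As a minimal illustration of such a preset path algorithm, the sketch below generates a bow-shaped (boustrophedon) path over the bounding box of the boundary, with rows one cutting width apart and alternating direction; clipping each row to an arbitrary boundary polygon, which a real implementation would need, is omitted, and the cutting width is an assumption.

```python
# Minimal sketch of a "bow-shaped" (boustrophedon) path generator.
# It sweeps the bounding box of the boundary polygon with parallel rows
# one cutting width apart, alternating direction on each row; clipping
# the rows to the exact boundary is omitted for brevity.

def boustrophedon(boundary, cut_width=0.3):
    xs = [p[0] for p in boundary]
    ys = [p[1] for p in boundary]
    path, y, flip = [], min(ys) + cut_width / 2, False
    while y < max(ys):
        row = [(min(xs), y), (max(xs), y)]
        path += row[::-1] if flip else row   # alternate sweep direction
        flip = not flip
        y += cut_width
    return path

square = [(0, 0), (5, 0), (5, 4), (0, 4)]    # first virtual boundary, in metres
waypoints = boustrophedon(square)
print(len(waypoints), "waypoints")
```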
Optionally, referring to Fig. 25, the processing assembly 180 further includes a correction module 801c. The user can modify the walking path 910c in the first fused image 720c through the receiving module 200c, and the correction module 801c corrects the first fused image 720c generated by the path generation module 900c. The generated walking path 910c is corrected on the first fused image 720c through the interactive interface 520c: parts of the path are removed by selecting them for deletion, and new paths are added by drawing line segments on the first fused image 720c. The correction module 801c reads the pixel coordinate sets of the paths the user selected or added, converts them into actuator 100c coordinate sets according to a preset algorithm, and projects them onto the corresponding positions in the mowing area; then, from the localization and tracking of the actuator 100c, the travel and mowing control instructions for the actuator 100c are derived, so that the actuator 100c walks and works along the user-modified walking path 910c.
In another embodiment, the path generation module 900c includes a preset algorithm for computing and generating the corresponding first walking path 910c from the characteristic parameters of the mowing area and displaying it in the real-time image 530c or the simulated real-scene image shown by the display module 500c. The path generation module 900c automatically computes and generates the first walking path 910c from the obtained mowing boundary and area information; it is arranged to generate the first walking path 910c, such as a bow-shaped path, a back-shaped path or a random path, according to the characteristic parameters of the mowing area, and it presents to the user, in the real-time image 530c or simulated real-scene image, the first walking path 910c that mowing in the corresponding area will follow. The receiving module 200c receives the user's input as to whether the first walking path 910c in the first fused image 720c needs correction; the user chooses correction and inputs correction instructions through the receiving module 200c to delete some line segments or regions of the first walking path 910c and add others, thereby generating a second walking path 920c in the real-time image 530c or simulated real-scene image. The correction module 801c recognizes the user's correction instructions and fuses the coordinates of the second walking path 920c into the real-time image 530c or simulated real-scene image to generate a second fused image 740c. The sending module 600c sends the information of the second fused image 740c to the control module 150c, and the control module 150c, following the information of the second walking path 920c, controls the actuator 100c to walk and work along the actual path in the mowing area corresponding to the second walking path 920c.
In another embodiment, the path generation module 900c generates preset path brushes, such as a back-shaped path brush, a bow-shaped path brush and a straight path brush, for the user to choose from. The path generation module 900c forms the selectable path brushes on the interactive interface 520c; the user selects the corresponding path brush and sweeps it, in the real-time image 530c or simulated real-scene image, over the region where the actuator 100c is expected to work, generating a back-shaped, bow-shaped or straight path in the corresponding region, so as to generate the corresponding walking path 910c in the real-time image 530c or simulated real-scene image; the control module 150c controls the actuator 100c to walk and work along the actual path in the mowing area corresponding to the walking path 910c.
In another approach, the path generation module 900c can receive patterns, text and other graphics sent by the user through the receiving module 200c and compute the corresponding walking path 910c from the graphic; the control module 150c controls the actuator 100c to walk and mow along the generated walking path 910c, so that mowing traces of the user's graphic are printed in the mowing area, achieving "printed mowing" and enriching the variety of the lawn's appearance.
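A minimal sketch of the "printed mowing" idea: a user-supplied black-and-white pattern is scaled onto the lawn, and each dark run of cells becomes a short mowing stroke. The cell size, the sample pattern, and the helper name are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of "printed mowing": each horizontal run of nonzero
# cells in a binary pattern becomes one mowing stroke (a line segment
# in the actuator frame). The 0.3 m cell size is an assumption.

def pattern_to_strokes(mask, cell=0.3):
    """mask: 2D array, nonzero where the grass should be cut short."""
    strokes = []
    for r, row in enumerate(np.asarray(mask)):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                strokes.append(((start * cell, r * cell),
                                ((c - 1) * cell, r * cell)))
            c += 1
    return strokes

pattern = np.array([[0, 1, 0, 1, 0],
                    [1, 1, 1, 1, 1],
                    [0, 1, 1, 1, 0],
                    [0, 0, 1, 0, 0]])
for segment in pattern_to_strokes(pattern):
    print(segment)
```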
In the above embodiments, when the boundary generation module 700, the path generation module 900c and the obstacle generation module 800b generate the corresponding virtual boundary, virtual obstacle identifier and walking path 910c, the actuator model in the real-time image or simulated real-scene image shown by the display module can be used to preview the actuator's subsequent working behavior and the state of the mowing area after the mowing is finished. The user can thus learn in advance the actuator's subsequent mowing behavior and mowing result under the current settings, for example previewing, through the real-time image or the simulated real-scene image, the mowing work and mowing effect of the self-moving mowing system as it avoids the first virtual obstacle identifier, which makes it convenient for the user to adjust the settings of the self-moving mowing system in good time.
Viewing the simulated real-scene image 540c or the real-time image 530c in the interactive interface 520c, and judging from the environmental features shown together with the actual state of the mowing area, the user determines the position in the simulated real-scene image 540c or real-time image 530c corresponding to an obstacle and, through the receiving module 200c, selects the type of obstacle as well as its position and size; after the user enters this information, the image processor generates the corresponding simulated obstacle in the generated simulated real-scene image 540c, and the control module 150c controls the actuator 100c to avoid the obstacle during operation.
Referring to Fig. 26, the processing assembly 180 further includes a guide channel setting module, which controls the interactive interface 520c projected by the projection device 510 to generate a guide-channel setting key or setting interface; through it the user adds a virtual guide channel identifier 560c to the simulated real-scene image 540c or real-time image 530c. The area the user wants worked may contain multiple relatively independent working areas, such as the front and back yards of a property, so the user can add a virtual guide channel identifier 560c between two independent working areas to guide the actuator 100c from one working area to another through the channel the user requires. Exemplarily, the self-moving mowing system surveys the mowing area and, when the working environment has multiple relatively independent working areas, recognizes and generates the corresponding first virtual sub-mowing area 770c and second virtual sub-mowing area 780c; alternatively, the user selects the target working areas, choosing at least the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c through the simulated real-scene image 540c. The guide channel setting module receives the virtual guide channel the user sets between the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c, used to guide the actuator 100c along the walking path 910c between the first and second sub-mowing areas corresponding to them. According to the desired movement channel of the actuator 100c between the first mowing area and the second mowing area, the user selects the corresponding virtual guide channel identifier 560c in the simulated real-scene image 540c, and the control module 150c guides the travel of the actuator 100c according to the virtual guide channel identifier 560c fused into the simulated real-scene image.
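A minimal sketch of how the virtual guide channel identifier 560c might be consumed: the channel is treated as a user-drawn polyline in the actuator frame, and the overall route simply chains the first sub-area's path, the channel, and the second sub-area's path; the names and sample coordinates are illustrative assumptions.

```python
# Minimal sketch: chain two sub-area mowing paths through a user-set
# guide channel (a polyline in the actuator coordinate system).
# Coordinates and names are illustrative assumptions.

def chain_routes(path_a, channel, path_b):
    """Return one route: finish area A, traverse the channel, start area B."""
    return list(path_a) + list(channel) + list(path_b)

front_yard = [(0.0, 0.0), (4.0, 0.0), (4.0, 0.3), (0.0, 0.3)]
back_yard  = [(9.0, 0.0), (13.0, 0.0), (13.0, 0.3), (9.0, 0.3)]
guide      = [(4.0, 0.3), (6.5, 1.0), (9.0, 0.0)]   # user-drawn channel

route = chain_routes(front_yard, guide, back_yard)
print(len(route), "waypoints in the combined route")
```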
The self-moving mowing system further includes a detection device for detecting the operating condition of the actuator 100c, such as its machine parameters, working mode, fault conditions and alarm information. The display module can also display the actuator's machine parameters, working mode, fault conditions and alarm information through the interactive interface; the data calculation processor 310 computes the display information and controls the projection device to reflect the machine information dynamically in real time, making it convenient for the user to control the actuator and obtain its operating status.
To better detect the actuator's operating state, the self-moving mowing system further includes a voltage sensor and/or current sensor, a rainfall sensor, and a boundary recognition sensor. These sensors can usually be arranged within the actuator. The voltage and current sensors detect the current and voltage values during operation in order to analyze the actuator's current operating information; the rainfall sensor detects rain in the actuator's environment; the boundary recognition sensor detects the boundary of the working area and may be a sensor matched to a buried electronic boundary wire, a camera device that captures environmental information, or a positioning device.
Optionally, the rainfall sensor detects the current rainfall information, and the image processor renders the corresponding rain scene and rainfall intensity in the generated simulated real-scene image. Detection devices such as the lidar, camera and state sensors obtain the actuator's surroundings and height information, which are displayed correspondingly in the simulated real-scene image. Optionally, a capacitance sensor is provided to detect the load on the mowing blade, so as to simulate the grass height after the actuator has worked.
The processing assembly 180 in the above embodiments is communicatively connected with the actuator; at least part of its structure may be arranged inside the actuator or outside it, sending signals to the actuator's controller to control the operation of the output motor and the walking motor, and thereby the actuator's walking and mowing states.
In the fifth embodiment of this application, referring to Fig. 27, an outdoor self-moving device is provided, which may be a snowplow. It includes: an actuator 100d, including a walking assembly 110d for implementing the walking function and a working assembly for implementing a preset function; a housing for supporting the actuator 100d; an image acquisition module 400d, capable of acquiring a real-time image 530d including at least part of the working area and at least part of the working boundary; a display module 500d, electrically or communicatively connected with the image acquisition module 400d and configured to display the real-time image 530d or a simulated real-scene image 540d generated from it; a boundary generation module 700d, which generates, by calculating characteristic parameters, a first virtual boundary corresponding to the working boundary in the real-time image 530d to form a first fused image; a receiving module 200d, for receiving the user's input as to whether the first virtual boundary in the first fused image needs correction; a correction module 801d, which, when the user indicates the first virtual boundary needs correction, receives user instructions to correct it and generates a second virtual boundary 730d in the real-time image 530d or simulated real-scene image 540d, forming a second fused image; a sending module 600d, which sends the uncorrected first fused image or the corrected second fused image; and a control module 300d, electrically or communicatively connected with the sending module 600d, which controls the actuator 100d to operate within the first virtual boundary or the second virtual boundary 730d.
Optionally, the boundary generation module 700d generates, by calculating characteristic parameters, the first virtual boundary corresponding to the working boundary in the real-time image 530d to form the first fused image; the sending module 600d sends the first fused image; and the control module 300d, electrically or communicatively connected with the sending module 600d, controls the actuator 100d to operate within the first virtual boundary.
Optionally, the outdoor self-moving device further includes an obstacle generation module that generates, according to instructions input by the user, a virtual obstacle identifier corresponding to an obstacle in the real-time image 530d to form a first fused image; the image acquisition module 400d acquires a real-time image 530d including at least part of the working area and at least one obstacle located within it and is electrically or communicatively connected with the sending module 600d; and the control module 300d controls the actuator 100d to avoid the virtual obstacle in the first fused image.
Optionally, the obstacle generation module generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d to form the first fused image, and the control module 300d controls the actuator 100d to avoid the virtual obstacle in the first fused image.
Optionally, the obstacle generation module generates, by calculating characteristic parameters, the first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d or simulated real-scene image 540d to form the first fused image; the receiving module 200d receives the user's input as to whether the first virtual obstacle identifier in the first fused image needs correction; the correction module 801d, when the user indicates that it does, receives user instructions to correct the first virtual obstacle identifier and generate a second virtual obstacle identifier in the real-time image 530d or simulated real-scene image 540d, forming a second fused image; the sending module 600d sends the uncorrected first fused image or the corrected second fused image; and the control module 300d, electrically or communicatively connected with the sending module 600d, controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image.
Optionally, the obstacle generation module generates the first virtual obstacle identifier in the real-time image 530d or simulated real-scene image 540d according to instructions input by the user to form the first fused image; the sending module 600d sends the first fused image; and the control module 300d, electrically or communicatively connected with the sending module 600d, controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image.
Optionally, a path generation module generates a walking path in the real-time image 530d or simulated real-scene image 540d according to instructions input by the user to form the first fused image; the sending module 600d sends the first fused image; and the control module 300d, electrically or communicatively connected with the sending module 600d, controls the walking assembly 110d to walk along the walking path in the first fused image.
Optionally, the path generation module generates a first walking path in the real-time image 530d or simulated real-scene image 540d from the computed characteristic parameters of the working area to form the first fused image; the receiving module 200d receives the user's input as to whether the first walking path in the first fused image needs correction; the correction module 801d, when the user indicates that it does, receives user instructions to correct the first walking path and generate a second walking path in the real-time image 530d or simulated real-scene image 540d, forming a second fused image; the sending module 600d sends the uncorrected first fused image or the corrected second fused image; and the control module 300d, electrically or communicatively connected with the sending module 600d, controls the walking assembly 110d to walk along the first walking path in the first fused image or the second walking path in the second fused image.

Claims (27)

  1. A self-moving mowing system, comprising:
    a main body, including a housing;
    a mowing element, connected to the main body and used for cutting vegetation;
    an output motor, driving the mowing element;
    walking wheels, connected to the main body;
    a drive motor, driving the walking wheels to rotate;
    an image acquisition module, capable of acquiring a real-time image including at least part of a mowing area and at least one obstacle located within the mowing area;
    a display module, electrically or communicatively connected with the image acquisition module, the display module being configured to display the real-time image or a simulated real-scene image generated from the real-time image;
    an obstacle generation module, which generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated real-scene image to form a first fused image;
    a receiving module, for receiving information input by a user as to whether the first virtual obstacle identifier in the first fused image needs to be corrected;
    a correction module, which, when the user inputs information that the first virtual obstacle identifier needs to be corrected, receives user instructions to correct the first virtual obstacle identifier so as to generate a second virtual obstacle identifier in the real-time image or the simulated real-scene image, thereby forming a second fused image;
    a sending module, which sends the first fused image when no correction is needed, or the corrected second fused image; and
    a control module, electrically or communicatively connected with the sending module, the control module controlling the main body to avoid the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image.
  2. The self-moving mowing system of claim 1, wherein the control module includes a data calculation processor for processing data, and the data calculation processor establishes a pixel coordinate system so as to convert the position information of the virtual obstacle identifier into the position information of the actual obstacle.
  3. The self-moving mowing system of claim 2, wherein the control module further includes an image processor for producing images and modeling scenes, and the image processor generates the simulated real-scene image from the real-time image acquired by the image acquisition module.
  4. The self-moving mowing system of claim 3, wherein the display module includes a projection device and an interactive interface, the interactive interface is produced by projection from the projection device, and the interactive interface displays the simulated real-scene image or the real-time image.
  5. The self-moving mowing system of claim 1, wherein the self-moving mowing system further includes a positioning module, the positioning module including one or a combination of a GPS positioning unit, an IMU inertial measurement unit, and a displacement sensor, used to obtain position information of the main body and of the mowing area.
  6. The self-moving mowing system of claim 5, wherein the self-moving mowing system previews, through the real-time image or the simulated real-scene image, the mowing work and mowing effect of the self-moving mowing system as it avoids the first virtual obstacle identifier.
  7. A self-moving mowing system, comprising:
    an actuator, including a mowing assembly for implementing the mowing function and a walking assembly for implementing the walking function;
    a housing, for supporting the actuator;
    an image acquisition module, capable of acquiring a real-time image including at least part of a mowing area and at least one obstacle located within the mowing area;
    a display module, electrically or communicatively connected with the image acquisition module, the display module being configured to display the real-time image or a simulated real-scene image generated from the real-time image;
    an obstacle generation module, which generates, according to instructions input by a user, a virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated real-scene image to form a first fused image;
    a sending module, which sends the information of the first fused image; and
    a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
  8. The self-moving mowing system of claim 7, wherein the display module includes a projection device through which the simulated real-scene image or the real-time image is projected, the projection device being one of a mobile phone screen, a hardware display screen, VR glasses, and AR glasses.
  9. The self-moving mowing system of claim 8, wherein the control module includes a data calculation processor for processing data and an image processor for producing images and modeling scenes, the data calculation processor establishing a pixel coordinate system and an actuator coordinate system so as to convert the position information of the virtual obstacle identifier into the position information of the actual obstacle.
  10. The self-moving mowing system of claim 8, wherein the obstacle generation module is arranged to include preset obstacle models for adding the virtual obstacle identifier, the preset obstacle models including at least one or a combination of a stone model, a tree model, and a flower model.
  11. The self-moving mowing system of claim 7, wherein the image acquisition module includes one or a combination of an image sensor, a lidar, an ultrasonic sensor, a camera, and a TOF sensor.
  12. The self-moving mowing system of claim 7, wherein the self-moving mowing system further includes:
    a boundary generation module, which generates, by calculating characteristic parameters of the real-time image, a first virtual boundary corresponding to the mowing boundary in the real-time image to form a first fused image;
    the sending module sends the first fused image; and
    the control module is electrically or communicatively connected with the sending module, the control module controlling the actuator to operate within the first virtual boundary.
  13. The self-moving mowing system of claim 12, wherein the self-moving mowing system further includes a positioning module, the positioning module including one or a combination of a GPS positioning unit, an IMU inertial measurement unit, and a displacement sensor, used to obtain the real-time position of the actuator; control adjustments of the actuator's travel and mowing are obtained by analyzing the actuator's real-time positioning data.
  14. The self-moving mowing system of claim 12, wherein the self-moving mowing system further includes a guide channel setting module, the guide channel setting module being used to receive a virtual guide channel set by the user between a first virtual sub-mowing area and a second virtual sub-mowing area, and to guide the actuator along the walking path between the first sub-mowing area and the second sub-mowing area corresponding to the first virtual sub-mowing area and the second virtual sub-mowing area.
  15. A self-moving mower, comprising:
    a main body, including a housing;
    a mowing element, connected to the main body and used for cutting vegetation;
    an output motor, driving the mowing element;
    walking wheels, connected to the main body;
    a drive motor, driving the walking wheels to rotate;
    an image acquisition module, capable of acquiring a real-time image including at least part of a mowing area and at least one obstacle located within the mowing area, and arranged to send the real-time image to a display module so as to display the real-time image or a simulated real-scene image generated from the real-time image; and
    a control module, capable of generating, according to instructions input by a user, a virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated real-scene image to form a first fused image, the control module controlling the main body to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
  16. The self-moving mower of claim 15, wherein the control module includes a data calculation processor for processing data, used to convert the position information of the virtual obstacle identifier into the position information of the actual obstacle.
  17. The self-moving mower of claim 16, wherein the control module further includes an image processor for producing images and modeling scenes, and the image processor generates the simulated real-scene image from the real-time image acquired by the image acquisition module.
  18. The self-moving mower of claim 15, wherein the self-moving mower further includes a positioning module, the positioning module including one or a combination of a GPS positioning unit, an IMU inertial measurement unit, and a displacement sensor, used to obtain position information of the self-moving mower.
  19. The self-moving mower of claim 16, wherein the image acquisition module includes one or a combination of an image sensor, a lidar, an ultrasonic sensor, a camera, and a TOF sensor.
  20. A self-moving mowing system, comprising:
    an actuator, including a mowing assembly for implementing the mowing function and a walking assembly for implementing the walking function;
    a housing, for supporting the actuator;
    an image acquisition module, capable of acquiring a real-time image including at least part of a mowing area and at least one obstacle located within the mowing area;
    a display module, electrically or communicatively connected with the image acquisition module, the display module being configured to display the real-time image or a simulated real-scene image generated from the real-time image; and
    a processing assembly, the processing assembly being arranged to:
    generate, according to instructions input by a user, a virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated real-scene image to form a first fused image; and
    control the actuator to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
  21. The self-moving mowing system of claim 20, wherein the processing assembly further includes:
    a receiving module, for receiving information input by the user as to whether a first virtual boundary in the first fused image needs to be corrected;
    a correction module, which, when the user inputs information that the first virtual boundary needs to be corrected, receives user instructions to correct the first virtual boundary so as to generate a second virtual boundary in the real-time image or the simulated real-scene image, thereby forming a second fused image;
    a sending module, which sends the information of the first fused image when no correction is needed, or the information of the corrected second fused image; and
    a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to operate within the first virtual boundary or the second virtual boundary.
  22. The self-moving mowing system of claim 21, wherein the processing assembly further includes:
    a path generation module, which generates a walking path in the real-time image or the simulated real-scene image according to instructions input by the user to form a first fused image;
    the sending module sends the first fused image; and
    the control module is electrically or communicatively connected with the sending module, the control module controlling the walking assembly to walk along the walking path in the first fused image.
  23. The self-moving mowing system of claim 22, wherein the path generation module is arranged to generate preset path brushes, the preset path brushes including at least a back-shaped path brush, a bow-shaped path brush, and a straight path brush, and the preset path brushes are used to generate a back-shaped path, a bow-shaped path, or a straight path within the mowing area.
  24. The self-moving mowing system of claim 20, wherein the processing assembly is further configured to:
    generate a first walking path in the real-time image or the simulated real-scene image according to the calculated characteristic parameters of the mowing area to form a first fused image;
    receive information input by the user as to whether the first walking path in the first fused image needs to be corrected;
    when the user inputs information that the first walking path needs to be corrected, receive user instructions to correct the first walking path so as to generate a second walking path in the real-time image or the simulated real-scene image, thereby forming a second fused image; and
    control the walking assembly to walk along the first walking path in the first fused image or the second walking path in the second fused image.
  25. The self-moving mowing system of claim 20, wherein the processing assembly is further configured to:
    receive information input by the user as to whether a first virtual boundary in the first fused image needs to be corrected;
    when the user inputs information that the first virtual boundary needs to be corrected, receive user instructions to correct the first virtual boundary so as to generate a second virtual boundary in the real-time image or the simulated real-scene image, thereby forming a second fused image; and
    control the actuator to operate within the first virtual boundary or the second virtual boundary.
  26. An outdoor self-moving device, comprising:
    an actuator, including a walking assembly for implementing the walking function and a working assembly for implementing a preset function;
    a housing, for supporting the actuator;
    an image acquisition module, capable of acquiring a real-time image including at least part of a working area;
    a display module, electrically or communicatively connected with the image acquisition module, the display module being configured to display the real-time image or a simulated real-scene image generated from the real-time image;
    a receiving module, for receiving instructions input by a user;
    an obstacle generation module, which generates a first virtual obstacle identifier in the real-time image or the simulated real-scene image according to the instructions input by the user to form a first fused image;
    a sending module, which sends the first fused image; and
    a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to avoid the first virtual obstacle identifier in the first fused image.
  27. An outdoor self-moving device, comprising:
    an actuator, including a walking assembly for implementing the walking function and a working assembly for implementing a preset function;
    a housing, for supporting the actuator;
    an image acquisition module, capable of acquiring a real-time image including at least part of a working area and at least one obstacle located within the working area;
    a display module, electrically or communicatively connected with the image acquisition module, the display module being configured to display the real-time image or a simulated real-scene image generated from the real-time image;
    an obstacle generation module, which generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated real-scene image to form a first fused image;
    a receiving module, for receiving information input by a user as to whether the first virtual obstacle identifier in the first fused image needs to be corrected;
    a correction module, which, when the user inputs information that the first virtual obstacle identifier needs to be corrected, receives user instructions to correct the first virtual obstacle identifier so as to generate a second virtual obstacle identifier in the real-time image or the simulated real-scene image, thereby forming a second fused image;
    a sending module, which sends the first fused image when no correction is needed, or the corrected second fused image; and
    a control module, electrically or communicatively connected with the sending module, the control module controlling the actuator to avoid the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image.
PCT/CN2020/121378 2019-10-18 2020-10-16 Self-moving mowing system, self-moving mower and outdoor self-moving device WO2021073587A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20876278.1A EP4018802A4 (en) 2019-10-18 2020-10-16 AUTONOMOUS LAWN MOWING SYSTEM, AUTONOMOUS LAWN MOWER AND AUTONOMOUS DEVICE FOR OUTDOOR USE
US17/709,004 US20220217902A1 (en) 2019-10-18 2022-03-30 Self-moving mowing system, self-moving mower and outdoor self-moving device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910992552 2019-10-18
CN201910992552.8 2019-10-18
CN201911409433 2019-12-31
CN201911409433.1 2019-12-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/709,004 Continuation US20220217902A1 (en) 2019-10-18 2022-03-30 Self-moving mowing system, self-moving mower and outdoor self-moving device

Publications (1)

Publication Number Publication Date
WO2021073587A1 true WO2021073587A1 (zh) 2021-04-22

Family

ID=75537728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121378 WO2021073587A1 (zh) Self-moving mowing system, self-moving mower and outdoor self-moving device

Country Status (3)

Country Link
US (1) US20220217902A1 (zh)
EP (1) EP4018802A4 (zh)
WO (1) WO2021073587A1 (zh)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2689650B1 (en) * 2012-07-27 2014-09-10 Honda Research Institute Europe GmbH Trainable autonomous lawn mower
US9420741B2 (en) * 2014-12-15 2016-08-23 Irobot Corporation Robot lawnmower mapping

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150296707A1 (en) * 2012-11-29 2015-10-22 Toshio Fukuda Autonomous travel work system
US20180168097A1 (en) * 2014-10-10 2018-06-21 Irobot Corporation Robotic Lawn Mowing Boundary Determination
CN106325271A (zh) * 2016-08-19 2017-01-11 深圳市银星智能科技股份有限公司 智能割草装置及智能割草装置定位方法
CN108337987A (zh) * 2018-02-13 2018-07-31 杭州慧慧科技有限公司 一种自动割草系统和割草机控制方法
CN108829103A (zh) * 2018-06-15 2018-11-16 米亚索能光伏科技有限公司 除草机的控制方法、除草机、终端、设备和存储介质
CN109247118A (zh) * 2018-08-24 2019-01-22 宁波市德霖机械有限公司 基于全景摄像头构建电子地图的智能割草机
CN109258061A (zh) * 2018-09-10 2019-01-25 安徽灵翔智能机器人技术有限公司 一种具有自主行走功能的智能割草机
CN109634286A (zh) * 2019-01-21 2019-04-16 深圳市傲基电子商务股份有限公司 割草机器人视觉避障方法、割草机器人和可读存储介质
CN109947115A (zh) * 2019-04-17 2019-06-28 河北农业大学 一种割草机控制系统及其控制方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4018802A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4250041A1 (en) * 2022-03-24 2023-09-27 Willand (Beijing) Technology Co., Ltd. Method for determining information, remote terminal, and mower

Also Published As

Publication number Publication date
EP4018802A4 (en) 2022-11-09
US20220217902A1 (en) 2022-07-14
EP4018802A1 (en) 2022-06-29

Similar Documents

Publication Publication Date Title
CN112764416A (zh) Self-moving mowing system and outdoor walking device
EP3355670B1 (en) User interface for mobile machines
EP3237983B1 (en) Robotic vehicle grass structure detection
EP3234721B1 (en) Multi-sensor, autonomous robotic vehicle with mapping capability
US9603300B2 (en) Autonomous gardening vehicle with camera
US20200233413A1 (en) Method for generating a representation and system for teaching an autonomous device operating based on such representation
WO2016098040A1 (en) Robotic vehicle with automatic camera calibration capability
US20220151147A1 (en) Self-moving lawn mower and supplementary operation method for an unmowed region thereof
EP3158409B1 (en) Garden visualization and mapping via robotic vehicle
CN113128747B (zh) Intelligent mowing system and autonomous mapping method thereof
US20180356832A1 (en) Method for Identifying at Least One Section of a Boundary Edge of an Area to Be Treated, Method for Operating an Autonomous Mobile Green Area Maintenance Robot, Identifying System and Green Area Maintenance System
WO2021073587A1 (zh) Self-moving mowing system, self-moving mower and outdoor self-moving device
CN114721385A (zh) Virtual boundary establishing method and apparatus, intelligent terminal, and computer storage medium
CN112438112B (zh) Self-moving mower
CN114995444A (zh) Method and apparatus for establishing a virtual working boundary, remote terminal, and storage medium
CN210610367U (zh) Autonomous mower
US20230320263A1 (en) Method for determining information, remote terminal, and mower
US20240061423A1 (en) Autonomous operating zone setup for a working vehicle or other working machine
WO2023119986A1 (ja) Agricultural machine and gesture recognition system used in agricultural machine
CN117850418A (zh) Self-moving device operating according to a virtual boundary and virtual boundary generation method thereof
US20170090740A1 (en) User Interface for Mobile Machines
WO2023239237A1 (en) A method of real-time controlling a remote device, and training a learning algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20876278

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020876278

Country of ref document: EP

Effective date: 20220325

NENP Non-entry into the national phase

Ref country code: DE