CN112684785A - Self-walking mowing system and outdoor walking equipment


Info

Publication number: CN112684785A
Application number: CN201911409198.8A
Authority: CN (China)
Prior art keywords: image, obstacle, module, mowing, walking
Other languages: Chinese (zh)
Inventors: 陈伟鹏, 杨德中
Current assignee: Nanjing Chervon Industry Co Ltd; Nanjing Deshuo Industrial Co Ltd
Original assignee: Nanjing Deshuo Industrial Co Ltd
Application filed by Nanjing Deshuo Industrial Co Ltd
Legal status: Pending

Classifications

    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot (G Physics; G05 Controlling; Regulating; G05D Systems for controlling or regulating non-electric variables)
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles, including:
    • G05D1/0221 with means for defining a desired trajectory, involving a learning process
    • G05D1/0223 with means for defining a desired trajectory, involving speed control of the vehicle
    • G05D1/0238 using optical position detecting means, using obstacle or wall sensors
    • G05D1/024 using obstacle or wall sensors in combination with a laser
    • G05D1/0242 using optical position detecting means, using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 using optical position detecting means, using a video camera in combination with image processing means
    • G05D1/0255 using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257 using a radar
    • G05D1/0278 using signals provided by a source external to the vehicle, using satellite positioning signals, e.g. GPS
    • G05D1/0285 using signals provided by a source external to the vehicle, using signals transmitted via a public communication network, e.g. GSM network

Abstract

The invention provides a self-walking mowing system comprising: an actuating mechanism including a mowing assembly for performing the mowing function and a walking assembly for performing the walking function; an image acquisition module capable of acquiring a real-time image of a mowing area and of at least one obstacle located within the mowing area; a display module for displaying the real-time image or a simulated live-action image generated from the real-time image; an obstacle generating module that, according to an instruction input by a user, generates a virtual obstacle identifier corresponding to an obstacle in the real-time image or the simulated live-action image to form a first fused image; and a control module that controls the actuating mechanism to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image. The invention also provides an outdoor walking device. The self-walking mowing system and the outdoor walking device allow a user to conveniently add obstacle identifiers so that obstacle areas are bypassed, and let the user intuitively observe the working state of the self-walking mowing system.

Description

Self-walking mowing system and outdoor walking equipment
Technical Field
The invention relates to outdoor power tools, and in particular to a self-walking mowing system and outdoor walking equipment.
Background
As an outdoor mowing tool, a self-walking mowing system requires no sustained operation by the user; it is intelligent and convenient and is therefore popular with users. During mowing, obstacles such as trees and stones are often present in the mowing area where the self-walking mowing system works. These obstacles disturb the walking track of the system, and repeated collisions with them can easily damage the machine. There may also be areas the user does not want mowed, such as planted flower beds; a traditional self-walking mowing system cannot detect such areas and may mow them by mistake, failing to meet the user's mowing requirements. Other outdoor walking equipment, such as snow plows, is also common and suffers from the same problems.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, an object of the invention is to provide a self-walking mowing system that displays a simulated live-action image or a real-time image of the actuating mechanism, enables a user to add obstacle identifiers on that image, controls the system to bypass the obstacle areas, and lets the user intuitively observe the working state of the self-walking mowing system.
To achieve the above main object of the invention, there is provided a self-walking mowing system including: an actuating mechanism including a mowing assembly for performing the mowing function and a walking assembly for performing the walking function; a housing for supporting the actuating mechanism; an image acquisition module capable of acquiring a real-time image including at least part of a mowing area and at least one obstacle located within the mowing area; a display module electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated live-action image generated from the real-time image; an obstacle generating module that generates, according to an instruction input by a user, a virtual obstacle identifier corresponding to an obstacle in the real-time image or the simulated live-action image to form a first fused image; a sending module for sending information of the first fused image; and a control module electrically or communicatively connected with the sending module, the control module controlling the actuating mechanism to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
Optionally, the display module includes a projection device through which the simulated live-action image or the real-time image is displayed, and the projection device includes one of a mobile phone screen, a hardware display screen, VR glasses, and AR glasses.
Optionally, the control module includes a data processor for processing data and an image processor for generating images and scene modeling; the data processor establishes a pixel coordinate system and an actuating mechanism coordinate system for converting the position information of the virtual obstacle identifier into the position information of the actual obstacle.
Optionally, the obstacle generating module contains preset obstacle models for adding the virtual obstacle identifier, the preset obstacle models including at least one of, or a combination of, a stone model, a tree model, and a flower model.
To achieve the above main object of the invention, there is also provided a self-walking mowing system including: an actuating mechanism including a mowing assembly for performing the mowing function and a walking assembly for performing the walking function; a housing for supporting the actuating mechanism; an image acquisition module capable of acquiring a real-time image including at least part of a mowing area and at least one obstacle located within the mowing area; a display module electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated live-action image generated from the real-time image; an obstacle generating module that generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to an obstacle in the real-time image or the simulated live-action image to form a first fused image; a sending module for sending information of the first fused image; and a control module electrically or communicatively connected with the sending module, the control module controlling the actuating mechanism to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image.
Optionally, the image acquisition module includes one of, or a combination of, an image sensor, a laser radar, an ultrasonic sensor, a camera, and a TOF sensor.
Optionally, the self-walking mowing system further includes a boundary generating module that generates a first virtual boundary according to information about the first boundary of the mowing area acquired by the image acquisition module, and the control module controls the actuating mechanism to walk within the first boundary corresponding to the first virtual boundary.
Optionally, the self-walking mowing system further includes a path generating module that automatically generates a walking path within the first virtual boundary, and the control module controls the actuating mechanism to walk within the first boundary according to the walking path.
In order to achieve the above main object of the invention, there is provided an outdoor self-walking apparatus including: an actuating mechanism including a walking assembly for performing the walking function and a working assembly for performing a preset function; a housing for supporting the actuating mechanism; an image acquisition module capable of acquiring a real-time image including at least part of a work area and at least one obstacle located within the work area; a display module electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated live-action image generated from the real-time image; an obstacle generating module that generates, according to an instruction input by a user, a virtual obstacle identifier corresponding to an obstacle in the real-time image to form a first fused image; a sending module for sending information of the first fused image; and a control module electrically or communicatively connected with the sending module, the control module controlling the actuating mechanism to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
In order to achieve the above main object of the invention, there is also provided an outdoor self-walking apparatus including: an actuating mechanism including a walking assembly for performing the walking function and a working assembly for performing a preset function; a housing for supporting the actuating mechanism; an image acquisition module capable of acquiring a real-time image including at least part of a work area and at least one obstacle located within the work area; a display module electrically or communicatively connected with the image acquisition module and configured to display the real-time image or a simulated live-action image generated from the real-time image; an obstacle generating module that generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image to form a first fused image; a sending module for sending information of the first fused image; and a control module electrically or communicatively connected with the sending module, the control module controlling the actuating mechanism to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image.
Drawings
Fig. 1 is a block diagram of the actuator of the self-walking mowing system of the invention.
Fig. 2 is a schematic view of the connection of the actuator and the projection device in fig. 1.
Fig. 3 is a schematic view of a part of the internal structure of the actuator in fig. 2.
Fig. 4 is a schematic diagram of the frame of the actuator of fig. 1.
Fig. 5 is a schematic frame diagram of the self-walking mowing system of Fig. 1.
Fig. 6 is a schematic view of the mowing area of the first embodiment of the present invention.
Fig. 7 is a schematic view of an interactive interface of the first embodiment of the present invention.
Fig. 8 is a schematic diagram of the interactive interface displaying real-time images according to the first embodiment of the invention.
Fig. 9 is a schematic diagram of an interactive interface displaying a first fused image according to the first embodiment of the present invention.
Fig. 10 is a schematic diagram of a second fused image in the interactive interface according to the first embodiment of the present invention.
Fig. 11 is a schematic diagram of the actuator coordinate system according to the first embodiment of the present invention.
Fig. 12 is a schematic diagram of a pixel coordinate system according to the first embodiment of the invention.
Fig. 13 is a frame diagram of the self-walking mowing system of the second embodiment of the present invention.
Fig. 14 is a schematic view of the mowing area of the second embodiment of the present invention.
Fig. 15 is a schematic diagram of a first fused image according to a second embodiment of the present invention.
Fig. 16 is a frame diagram of the self-walking mowing system of the third embodiment of the invention.
Fig. 17 is a schematic view of a mowing area of a third embodiment of the present invention.
Fig. 18 is a schematic diagram of a first fused image according to a third embodiment of the present invention.
Fig. 19 is another schematic diagram of the first fused image according to the third embodiment of the present invention.
Fig. 20 is a schematic diagram of a second fused image according to a third embodiment of the present invention.
Fig. 21 is a frame diagram of the self-walking mowing system of the fourth embodiment of the invention.
Fig. 22 is a schematic view of a mowing area of a fourth embodiment of the present invention.
Fig. 23 is a schematic diagram of a first fused image according to a fourth embodiment of the present invention.
Fig. 24 is another schematic view of the first fused image according to the fourth embodiment of the present invention.
Fig. 25 is a schematic diagram of a second fused image according to a fourth embodiment of the present invention.
Fig. 26 is a schematic diagram of a virtual guide channel identification setting according to a fourth embodiment of the present invention.
Fig. 27 is a schematic structural view of an outdoor self-walking apparatus of a fifth embodiment of the present invention.
Detailed Description
The invention provides a self-walking mowing system. Referring to Figs. 1 to 3, the system includes an actuator 100 for trimming vegetation. The actuator 100 includes at least a mowing assembly 120 for performing the mowing function and a walking assembly 110 for performing the walking function, and further includes a main body 140 and a housing 130, the housing 130 enclosing and supporting the main body 140, the mowing assembly 120, and the walking assembly 110. The mowing assembly 120 includes a mowing element 121 and an output motor 122; the output motor 122 drives the mowing element 121 to rotate to trim vegetation. The mowing element 121 may be a blade or any other element capable of cutting a lawn. The walking assembly 110 includes at least one traveling wheel 111 and a drive motor 112 for driving the traveling wheel 111; the drive motor 112 provides torque to the at least one traveling wheel 111. Through the cooperation of the mowing assembly 120 and the walking assembly 110, the self-walking mowing system controls the actuator 100 to move across and work on the vegetation.
Referring to Fig. 4, the self-walking mowing system further includes a receiving module 200, a computing assembly, and a power supply 170. The receiving module 200 receives user instructions, that is, control instructions input for the self-walking mowing system. The computing assembly includes at least a control module 150 for controlling the operation of the self-walking mowing system; the control module 150 controls the operation of the drive motor 112 and the output motor 122 according to those instructions and the operating parameters of the system, so that the actuator 100 walks and performs mowing within the corresponding work area. The power supply 170 powers the walking assembly and the output assembly and is preferably a removable battery pack mounted to the housing 130.
The self-walking mowing system includes an image acquisition module 400 and a display module 500, and the computing assembly includes the control module 150 for processing image information. The display module 500 is electrically or communicatively connected with the image acquisition module 400. The image acquisition module 400 can acquire a real-time image 530 including at least part of the mowing area and at least part of the mowing boundary, and the display module 500 displays the corresponding real-time image 530 of the mowing area and the mowing boundary. Referring to Figs. 3 and 6, the image acquisition module 400 includes at least one of, or a combination of, a camera 410, a laser radar 420, and a TOF sensor 430. The surrounding environment of the actuator 100 is captured through the camera 410 and the laser radar 420: the camera 410 acquires images of the mowing area and mowing boundary to be worked, while the laser reflections recorded by the laser radar 420 yield parameters such as the position of objects in the mowing area and on the mowing boundary, their distance from the current actuator 100, slope, and shape. The control module 150 receives the image information acquired by the image acquisition module 400 and merges the object parameters into the image. The display module 500 then displays the real-time image 530 of the mowing area and the mowing boundary to the user.
Referring to Fig. 3, to improve the accuracy of position detection of the actuator 100, the self-walking mowing system further includes a positioning module 300 for acquiring the position of the actuator 100; control adjustments for advancing and mowing are derived by analyzing the real-time positioning data of the actuator 100. The positioning module 300 includes one of, or a combination of, a GPS positioning unit 310, an IMU inertial measurement unit 320, and a displacement sensor 330. The GPS positioning unit 310 obtains position information or position estimates of the actuator 100 as well as the starting position of its movement. The IMU inertial measurement unit 320 includes an accelerometer and a gyroscope for detecting displacement information of the actuator 100 during travel. The displacement sensor 330 may be disposed on the drive motor 112 or the traveling wheel 111 to acquire displacement data of the actuator 100. The information obtained from these devices is combined and mutually corrected to yield more accurate position information and the real-time position and attitude of the actuator 100.
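The patent does not specify how the GPS, IMU, and displacement-sensor data are combined and corrected. As one hedged illustration, the Python sketch below blends wheel/IMU dead reckoning with absolute GPS fixes using a simple complementary filter; the class name, method names, and the blend factor ALPHA are hypothetical, and a production system would more likely use a Kalman filter over the full pose.

    import math

    ALPHA = 0.98  # trust placed in dead reckoning between GPS fixes (assumed)

    class PoseEstimator:
        """Toy fusion of wheel odometry, IMU yaw rate, and GPS fixes."""
        def __init__(self, x=0.0, y=0.0, heading=0.0):
            self.x, self.y, self.heading = x, y, heading

        def predict(self, wheel_distance, gyro_yaw_rate, dt):
            # Dead reckoning: advance the pose from wheel displacement
            # (displacement sensor 330) and IMU yaw rate (IMU 320).
            self.heading += gyro_yaw_rate * dt
            self.x += wheel_distance * math.cos(self.heading)
            self.y += wheel_distance * math.sin(self.heading)

        def correct(self, gps_x, gps_y):
            # Pull the drifting dead-reckoned position toward the
            # absolute fix from the GPS positioning unit 310.
            self.x = ALPHA * self.x + (1.0 - ALPHA) * gps_x
            self.y = ALPHA * self.y + (1.0 - ALPHA) * gps_y

    est = PoseEstimator()
    est.predict(wheel_distance=0.05, gyro_yaw_rate=0.01, dt=0.1)
    est.correct(gps_x=0.049, gps_y=0.001)
    print(round(est.x, 4), round(est.y, 4))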
In another mode, the control module 150 generates a simulated live-action image 540 of the mowing area from the image information and data collected by the image acquisition module 400. The simulated live-action image 540 reproduces the boundary, the area, the obstacles, and so on of the mowing area. An actuator model 160 is established and displayed in the simulated live-action image 540 at the position of the actuator 100 in the mowing area, so that the position and working state of the actuator model 160 stay synchronized with those of the actual actuator 100.
Referring to Fig. 5, the display module 500 is used to project the simulated live-action image 540. Specifically, the display module 500 generates an interactive interface 520 through projection by a projection device 510, and the simulated live-action image 540 of the actuator 100 is displayed through the interactive interface 520. While generating the simulated live-action image 540, the control module 150 also causes the interactive interface 520 to present a control panel 550 for the user to operate, so that the user controls the self-walking mowing system either directly through the receiving module 200 or through the interactive interface 520. The projection device 510 may be a mobile phone screen or a hardware display screen communicatively connected to the computing assembly and configured to display the simulated live-action image 540 or the real-time image 530.
Referring to Fig. 3, the control module 150 includes a data processor 310 for processing data and an image processor 320 for generating images and scene modeling. The data processor 310 may be a CPU or a microcontroller with high data-processing speed, and the image processor 320 may be a separate GPU (Graphics Processing Unit) module. When the actuator 100 operates, the data processor 310 analyzes its various operating and environmental data, and the image processor 320 models this data to generate the corresponding virtual reality map information, renders the virtual reality map through the projection device 510, and updates the displayed content synchronously with the real-time operating state of the actuator 100 so that it matches the state of the actual actuator 100. The control module 150 also includes a memory that stores the algorithms associated with the self-walking mowing system and the data generated during its operation.
In the first embodiment of the invention, the computing assembly further includes a boundary generating module 700, the control module 150, and a sending module 600. Referring to Figs. 7 and 8, a first virtual boundary 710 corresponding to the mowing boundary is generated in the real-time image 530 or the simulated live-action image 540 by calculating characteristic parameters, so as to form a first fused image 720. The boundary generating module 700 carries a boundary analysis algorithm that identifies the mowing boundary of the area to be mowed from the color, grass height, and shape in the real-time image 530 or the simulated live-action image 540, and generates the first virtual boundary 710 at the corresponding boundary position in that image; one possible form of such an analysis is sketched below. The first virtual boundary 710 is fused with the real-time image 530 or the simulated live-action image 540 to produce the first fused image 720, which includes the first virtual boundary 710 and a first virtual mowing area 760 defined by it. The first virtual boundary 710 corresponds to an actual first boundary, namely the mowing boundary in the current environment detected by the boundary generating module 700, and the first virtual mowing area 760 corresponds to the object distribution and positions of the actual first mowing area 770. The sending module 600 is electrically or communicatively connected to the control module 150 and sends the information of the first fused image 720, including the position information of the first virtual boundary 710, to the control module 150. The control module controls the actuator to operate within the first virtual boundary; that is, since the first virtual boundary 710 defines the first virtual mowing area 760, the control module 150 controls the actuator 100 to mow in the actual first mowing area 770 corresponding to the first virtual mowing area 760 according to the position information of the first virtual boundary 710, and, according to the position of the actuator 100, keeps the actuator 100 operating only within the actual first boundary corresponding to the first virtual boundary 710.
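The boundary analysis algorithm itself is not disclosed. The sketch below shows one conventional way an analysis by color could be realized with OpenCV: the grass-colored region is segmented and its outline taken as a candidate first virtual boundary. The HSV thresholds and the function name are illustrative assumptions, not the patent's method, and the grass height and shape cues are omitted.

    import cv2
    import numpy as np

    def detect_mowing_boundary(frame_bgr):
        """Return the outline of the largest grass-colored region as a
        candidate first virtual boundary (HSV range is an assumption)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        grass_lo = np.array([35, 40, 40])    # assumed lower HSV bound
        grass_hi = np.array([85, 255, 255])  # assumed upper HSV bound
        mask = cv2.inRange(hsv, grass_lo, grass_hi)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None

    # Synthetic test frame: a green patch stands in for the lawn.
    frame = np.zeros((120, 160, 3), np.uint8)
    frame[30:90, 40:120] = (40, 160, 60)
    boundary = detect_mowing_boundary(frame)
    print(None if boundary is None else cv2.contourArea(boundary))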
The control module 150 is connected with and controls the drive motor 112 and the output motor 122, so that it can drive the actuator 100 along a planned working path while mowing. The two traveling wheels 111 are a first traveling wheel 113 and a second traveling wheel 114, and the drive motor 112 comprises a first drive motor 115 and a second drive motor 116. The control module 150 is connected with the first drive motor 115 and the second drive motor 116 and, through the drive controller, regulates their rotational speeds to control the traveling state of the actuator 100. By obtaining the real-time position of the actuator 100, the computing assembly derives the control instructions that keep the actuator 100 operating within the first boundary. The control module 150 includes an output controller electrically connected to the output motor 122 for controlling its operation, and thereby the cutting state of the blade, and a drive controller communicably connected to the drive motor 112. After the receiving module 200 receives a start instruction from the user, or start is otherwise determined, the control module 150 computes the driving route of the actuator 100 and, through the drive controller, makes the drive motor 112 drive the traveling wheels 111. The control module 150 obtains the position information corresponding to the first virtual boundary 710, derives from the position of the actuator 100 detected by the positioning module 300 the steering and speed required to complete the operation within the preset first boundary, and has the drive controller regulate the rotational speed of the drive motor 112 so that the actuator 100 travels at the preset speed. The two traveling wheels can also be rotated at different speeds to steer the actuator 100. The user can shift the actuator 100 and the image acquisition module 400 through the receiving module 200 so as to pan the corresponding real-time image 530 or simulated live-action image 540, view any part of the mowing area in that image, and add control instructions.
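Steering by rotating the two traveling wheels at different speeds is standard differential-drive kinematics. A minimal sketch follows; the wheel-base value and function name are assumed for illustration.

    WHEEL_BASE = 0.35  # distance between the traveling wheels in meters (assumed)

    def wheel_speeds(linear_mps, angular_rps):
        """Convert a commanded forward speed and turn rate into the
        per-wheel speeds for the first and second drive motors."""
        left = linear_mps - angular_rps * WHEEL_BASE / 2.0
        right = linear_mps + angular_rps * WHEEL_BASE / 2.0
        return left, right

    print(wheel_speeds(0.5, 0.0))  # equal speeds: drive straight
    print(wheel_speeds(0.5, 0.8))  # right wheel faster: turn left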
The receiving module 200 may be a peripheral device disposed outside the actuator 100 and communicatively connected to it. The peripheral device receives the user's control instruction and sends it to the computing assembly, which interprets the instruction and controls the actuator 100 to execute it. The peripheral device may be configured as any one or more of a keyboard, a mouse, a microphone, a touch screen, a remote control and/or handle, the camera 410, the laser radar 420, or a mobile device such as a mobile phone. The user can input command information manually through hardware such as a mouse, keyboard, remote control, or mobile phone, or through signals such as voice, gestures, or eye movement. The camera 410 captures the characteristics of the user's eye or hand movements, from which the control instruction given by the user is derived.
In another embodiment, the projection device 510 employs virtual imaging technology, displaying images through an AR device or inside VR glasses by holographic projection based on the principles of interference and diffraction, generating a virtual control panel 550 accordingly, and accepting command input through a communicatively coupled peripheral device 310 such as a remote control or a handle. Preferably, the interaction module 400 includes a motion capture unit, configured as the camera 410 and/or an infrared sensing device for capturing the motion of the user's hand or controller, and an interaction positioning device for acquiring the position of the projection device 510; the user's selections on the generated virtual control panel 550 are recognized by analyzing the displacement of the user's hand relative to the projection device 510, and the corresponding control commands are generated.
In one embodiment, the projection device 510 is provided on a peripheral device, such as a mobile phone, a computer, or a VR device selected from the peripheral devices 310, and accordingly corresponds to a mobile phone screen, a computer screen, a projection curtain, VR glasses, and the like.
The display module 500 has at least the projection device 510 and the interactive interface 520; the interactive interface 520 is displayed through the projection device 510, and the real-time image 530 or simulated live-action image 540 and the first fused image 720 are displayed within it. The projection device 510 may be implemented as a hardware display screen, that is, an electronic device mounted on a peripheral such as a mobile phone or computer, or mounted directly on the actuator 100; alternatively, the computing assembly can be communicably mated to various display screens, with the user choosing where the corresponding real-time image 530 or simulated live-action image 540 is displayed.
Referring to Fig. 9, the receiving module 200 may also generate a control panel 550 on the interactive interface 520 so as to receive the user's control instructions through the control panel 550, in particular the user's input as to whether the first virtual boundary 710 in the first fused image 720 needs to be corrected. After the boundary generating module computes the first fused image 720, the display module 500 generates the interactive interface 520 through the projection device 510 to display the first fused image 720 and the first virtual boundary 710; the receiving module 200 asks, through the interactive interface 520, whether the user wants to correct the first virtual boundary 710. If the user chooses correction through the receiving module 200, the first virtual boundary 710 is corrected in the displayed first fused image 720 through the control panel 550 according to the mowing boundary actually required, producing a user-specified second virtual boundary 730. The computing assembly further includes a correction module 800; when the user inputs that the first virtual boundary 710 needs correction, the correction module 800 receives the user's instruction, corrects the first virtual boundary 710, and generates the second virtual boundary 730 in the real-time image 530 or the simulated live-action image 540 to form a second fused image 740.
The second fused image 740 includes the second virtual boundary 730 and a second virtual mowing area defined by it. The second virtual boundary 730 corresponds to an actual second boundary, namely the mowing boundary corrected by the user, and the second virtual mowing area corresponds to the object distribution and positions of the actual second mowing area. The control module controls the actuator to operate within the second virtual boundary: since the second virtual boundary defines the second virtual mowing area, the control module 150 controls the actuator 100 to mow in the actual second mowing area corresponding to the second virtual mowing area according to the position information of the second virtual boundary 730, and, according to the position of the actuator 100, keeps it operating only within the actual second boundary corresponding to the second virtual boundary 730.
Referring to Figs. 10 and 11, in order to recognize the user's correction on the first fused image 720 and generate the second fused image 740, that is, to fuse the user's correction into the real-time image 530 or the simulated live-action image 540, the data processor establishes an actuator coordinate system 750 for locating the actuator 100 in the environment to be mowed, based on the first fused image 720 and the positioning of the actuator 100 acquired by the positioning module 300 and the image acquisition module 400. The data processor also establishes a pixel coordinate system 760 for the generated first fused image 720, so that each pixel in the first fused image 720 corresponds to its pixel coordinates, from which the real-time image 530 or the simulated live-action image 540 is produced. When the user selects a line segment or region in the first fused image 720 via the interactive interface 520, what is selected is essentially a set of pixels of the first fused image 720. The correction module 800 computes the position of the actual second boundary from the real-time position of the actuator 100 in the actuator coordinate system 750, the rotation angle of the image acquisition module 400, and the set of pixel coordinates corresponding to the user-selected second virtual boundary 730. In this way the corrected second virtual boundary 730 drawn by the user on the first fused image 720 is projected into the actual mowing area to obtain the user-specified second mowing area, and the second virtual boundary 730 is fused into the real-time image 530 or the simulated live-action image 540 to generate the second fused image 740. The coordinates of the second virtual boundary 730 are fixed in the actuator coordinate system 750, while its position moves in the pixel coordinate system 760 as the user pans the real-time image 530 or the simulated live-action image 540. User correction thus removes the errors of automatic boundary recognition, so the boundary of the mowing area can be set intuitively and accurately: the first virtual boundary 710 is produced by recognition with the image sensor and related devices, and the user only needs to correct it into the second virtual boundary 730, which makes setting the mowing boundary convenient.
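The preset conversion between the pixel coordinate system 760 and the actuator coordinate system 750 is not given in closed form. Assuming the lawn is approximately a ground plane, it could be realized with a pre-calibrated homography, as in the hypothetical sketch below; the matrix H is a placeholder, not calibrated data.

    import numpy as np

    # Assumed pre-calibrated homography mapping image pixels to
    # ground-plane coordinates in the actuator coordinate system.
    H = np.array([[0.002, 0.0,   -0.64],
                  [0.0,   0.002, -0.48],
                  [0.0,   0.0,    1.0]])

    def pixels_to_actuator(pixel_pts):
        """Project a set of pixel coordinates (e.g. the user-selected
        second virtual boundary) onto the actuator-frame ground plane."""
        pts = np.asarray(pixel_pts, dtype=float)
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        ground = (H @ homog.T).T
        return ground[:, :2] / ground[:, 2:3]

    boundary_px = [(120, 300), (500, 310), (510, 420)]
    print(pixels_to_actuator(boundary_px))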
In another embodiment, the user may set the first virtual boundary 710 directly on the real-time image 530 or the simulated live-action image 540 through the receiving module 200. The boundary identifying module obtains the position information of the user-set first virtual boundary 710 and projects it into the actuator 100 coordinates, and the positioning module 300 detects the position of the actuator 100, so that the control module 150 keeps the actuator 100 moving within the first boundary corresponding to the first virtual boundary 710. This allows the user to set the mowing boundary quickly.
In the second embodiment of the invention, referring to Figs. 13 and 14, the computing assembly includes an image acquisition module 400a and an obstacle generating module 800a. The image acquisition module 400a includes one of, or a combination of, an image sensor, a laser radar 420a, an ultrasonic sensor, a camera 410a, and a TOF sensor 430a. The ultrasonic sensor detects whether an obstacle exists in the mowing area by emitting ultrasonic waves and records the obstacle's position from their return time; the laser radar 420a emits laser light and detects obstacles from its reflection time; and the image sensor analyzes the shapes and colors in the acquired image and identifies, through an algorithm, the image regions that correspond to obstacles. The obstacle generating module 800a fuses the obstacle detection information of the mowing area into the real-time image 530a or the simulated live-action image 540a according to the image acquisition module 400a, and the display module 500a draws a first virtual obstacle identifier 810a at the corresponding position of the mowing area in that image, thereby generating the first fused image 720a, which is the real-time image 530a or simulated live-action image 540a containing the first virtual obstacle identifier 810a. The sending module 600a sends the information of the first fused image 720a to the control module 150a, and the control module 150a controls the actuator 100a to avoid the corresponding obstacle while traveling and mowing according to that information. The data processor establishes a pixel coordinate system and an actuator 100a coordinate system, computes the pixel coordinates of the first virtual obstacle identifier 810a added by the user on the first fused image 720a, and converts its position information into the position of the actual obstacle 820a by the preset coordinate conversion method; the control module 150a then keeps the actuator 100a clear of the obstacle 820a during operation. Thus, once the user adds a first virtual obstacle identifier 810a on the real-time image 530a or the simulated live-action image 540a, the self-walking mowing system can recognize and bypass the obstacle, making the operation convenient and allowing obstacle information to be added into the mowing area accurately.
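As a hedged illustration of how a converted obstacle position might then be used, the sketch below turns a virtual obstacle identifier into a world-frame point through a supplied pixel-to-world transform and tests whether the actuator must steer around it; the clearance radius and all names are assumptions.

    import math

    def obstacle_world_position(px, py, to_world):
        """Convert a virtual obstacle identifier's pixel location into
        an actual obstacle position via the preset coordinate transform."""
        return to_world(px, py)

    def must_avoid(robot_xy, obstacle_xy, clearance_m=0.5):
        """True when the actuator is inside the (assumed) clearance
        radius of the obstacle and should bypass it."""
        dx = robot_xy[0] - obstacle_xy[0]
        dy = robot_xy[1] - obstacle_xy[1]
        return math.hypot(dx, dy) < clearance_m

    # An identity transform stands in for the calibrated conversion here.
    obs = obstacle_world_position(3.0, 2.0, lambda x, y: (x, y))
    print(must_avoid((2.8, 2.1), obs))  # True: bypass the obstacle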
In another embodiment, referring to Fig. 15, the obstacle generating module 800a generates a virtual obstacle identifier corresponding to an obstacle in the real-time image 530a or the simulated live-action image 540a according to an instruction input by the user, forming the first fused image 720a. Based on the position of an actual obstacle in the mowing area, or of an area that should not be mowed, the user places a virtual obstacle identifier in the real-time image 530a or the simulated live-action image 540a through the receiving module 200a; the identifier marks an area in which the actuator 100a need not work and which it must bypass during the actual mowing operation.
For obstacles such as stones and trees in the mowing area, the obstacle generating module 800a provides preset obstacle models, such as a stone model, a tree model, and a flower model, for the user to choose from. Viewing the simulated live-action image 540a or real-time image 530a presented on the interactive interface 520a, the user determines the position in the image that corresponds to the obstacle by combining the environmental features shown with the actual state of the mowing area, and selects through the receiving module 200a the type of obstacle as well as its position and size in the image. After the user inputs this information, the image processor 320 generates the corresponding simulated obstacle 640 in the simulated live-action image 540a, and the control module 150a controls the actuator 100a to avoid the obstacle during operation.
The obstacle generating module 800a generates the virtual obstacle identifier corresponding to the obstacle in the real-time image 530a or the simulated live-action image 540a to form the first fused image 720a, which includes the size, shape, and position information of the virtual obstacle identifier. The sending module 600a sends the information of the first fused image 720a to the control module 150a, so that, when mowing in the mowing area, the control module 150a steers the actuator 100a around the area marked by the virtual obstacle identifier and the obstacle is avoided as required.
The first fused image 720a may further include a first virtual boundary 710a. The boundary generating module 700a generates the first virtual boundary 710a corresponding to the mowing boundary in the real-time image 530a or the simulated live-action image 540a by calculating characteristic parameters, so that the control module 150a, according to the information of the first fused image 720a, keeps the actuator 100a operating in the first mowing area corresponding to the first virtual mowing area, inside the first virtual boundary 710a and outside the virtual obstacle identifier; the actuator 100a is thereby confined to the first boundary while avoiding the marked obstacles. An obstacle can be a space-occupying object such as a stone or other article, or an area that should not be mowed, such as flowers or special plants; it can also be understood as any region within the current first virtual boundary 710a in which the user wants no work done, and may be drawn in a special pattern or shape to satisfy the user's lawn beautification requirements.
In the third embodiment of the invention, referring to Figs. 16 to 19, the obstacle generating module 800b generates a first virtual obstacle 810b corresponding to a mowing obstacle in the real-time image 530b or the simulated live-action image 540b by calculating characteristic parameters, forming a first fused image 720b. The first fused image 720b includes a first virtual mowing area 760b and the first virtual obstacle 810b within it; the first virtual mowing area 760b corresponds to the actual first mowing area 770b and reproduces its object distribution and positions, the first mowing area 770b being the area in which the actuator 100b needs to work. The obstacle generating module 800b carries an obstacle analysis algorithm: the image acquisition module 400b detects an obstacle 820b in the area to be mowed, and the first virtual obstacle 810b is generated at the position of the corresponding mowing obstacle 820b in the real-time image 530b or the simulated live-action image 540b, so that the first virtual obstacle 810b is fused with that image to generate the first fused image 720b. The simulated live-action image 540b or the real-time image 530b is displayed through the display module 500b. The first virtual obstacle 810b corresponds to at least one actual obstacle 820b, namely the mowing obstacle 820b detected by the obstacle generating module 800b in the current environment. The sending module 600b is electrically or communicatively connected with the control module 150b and sends it the information of the first fused image 720b, including the position information of the first virtual obstacle 810b. According to this position information, the control module 150b controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, using the detected position of the actuator 100b, to keep clear of the actual first obstacle corresponding to the first virtual obstacle 810b.
Further, referring to Fig. 20, after the obstacle generating module 800b generates the first fused image 720b, the receiving module 200b asks the user through the display interface whether the information of the first virtual obstacle 810b in the current first fused image 720b needs to be corrected, and receives the user's answer. If the user chooses correction, the user manually inputs an instruction to correct the first virtual obstacle 810b, producing a user-specified second virtual obstacle 830b; the user corrects the first virtual obstacle 810b in the displayed first fused image 720b through the control panel according to the mowing obstacles actually present. The computing assembly further includes a correction module 800b which, when the user indicates that the first virtual obstacle 810b needs correction, receives the user's instruction and generates the second virtual obstacle 830b in the real-time image 530b or the simulated live-action image 540b to form a second fused image 740b.
The second fused image 740b includes the corrected second virtual obstacle 830b, which corresponds to at least one obstacle 820b that the user actually needs avoided. According to the position information of the second virtual obstacle 830b, the control module 150b controls the actuator 100b to mow in the actual first mowing area 770b corresponding to the first virtual mowing area 760b and, using the detected position of the actuator 100b, to keep clear of the actual second obstacle corresponding to the second virtual obstacle 830b; during mowing, the actual obstacle position corresponding to the second virtual obstacle 830b is avoided according to the information of the fused image. In this way the user can conveniently adjust the area the self-walking mowing system avoids while it works. The obstacle can be a space-occupying object such as a stone or other article, or an area that should not be mowed, such as flowers or special plants.
In the fourth embodiment of the invention, referring to Fig. 21, the computing assembly includes a path generating module 900c, and the path generating module 900c generates a walking path 910c in the real-time image 530c or the simulated live-action image according to an instruction input by the user, forming a first fused image 720c. The path generating module 900c is configured with preset mowing path modes, such as a zigzag path, in which the actuator 100c works back and forth within the boundary in a reciprocating, progressive manner, or a spiral path, in which the actuator 100c works in a progressively encircling manner around the center.
Referring to Fig. 22, the computing assembly includes a boundary generating module 700c. When the user sends a start instruction, the boundary analysis algorithm built into the boundary generating module 700c identifies the mowing boundary of the area to be mowed from the color, grass height, and shape in the real-time image 530c or the simulated live-action image, and generates a first virtual boundary 710c at the position of the mowing boundary in that image. Referring to Figs. 23 and 24, the path generating module 900c applies a preset algorithm within the generated first virtual boundary 710c to design a walking path 910c in the mowing area, and computes, from the position coordinates of the generated walking path 910c in the actuator 100c coordinate system, the corresponding pixel coordinates in the pixel coordinate system. The generated walking path 910c is thereby displayed in the real-time image 530c or the simulated live-action image and fused with it to generate the first fused image 720c. The sending module 600c sends the first fused image 720c to the control module 150c, and the control module 150c makes the walking assembly 110c travel along the walking path 910c in the first fused image 720c and perform the mowing operation over the mowing area.
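The preset path-design algorithm is not disclosed. For an axis-aligned rectangular boundary, a zigzag (reciprocating) walking path could be generated as in the sketch below; the rectangle assumption and the swath width are illustrative simplifications.

    def zigzag_path(x_min, x_max, y_min, y_max, swath=0.3):
        """Generate a reciprocating coverage path inside a rectangular
        boundary; swath approximates the cutting width (assumed)."""
        waypoints, y, left_to_right = [], y_min, True
        while y <= y_max:
            row = [(x_min, y), (x_max, y)]
            waypoints.extend(row if left_to_right else row[::-1])
            left_to_right = not left_to_right
            y += swath
        return waypoints

    # A 2 m x 1 m test area yields alternating left/right sweep lines.
    for wp in zigzag_path(0.0, 2.0, 0.0, 1.0, swath=0.5):
        print(wp)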
Further, referring to Fig. 25, the computing assembly also includes a correction module 800c. The user may modify the walking path 910c in the first fused image 720c through the receiving module 200c, the correction module 800c amending the first fused image 720c generated by the path generating module 900c. The generated walking path 910c is corrected on the first fused image 720c through the interactive interface 520c: part of the path can be selected and deleted, and segments can be added in the first fused image 720c to create new path sections. The correction module 800c reads the set of pixel coordinates of the path the user selected or added, converts it by a preset algorithm into the actuator 100c coordinate set, and projects it onto the corresponding position in the mowing area, so that the travel and mowing control instructions for the actuator 100c are derived from its position tracking and the actuator 100c walks along the user-modified walking path 910c.
In another embodiment, the path generating module 900c contains a preset algorithm that computes a first walking path 910c from the characteristic parameters of the mowing area and displays it in the real-time image 530c or the simulated live-action image shown by the display module 500c. The path generating module 900c automatically computes the first walking path 910c from the acquired mowing boundary and area information; the path may, for example, be a zigzag path or a random path. The first walking path 910c to be followed within the corresponding mowing area is shown to the user in the real-time image 530c or the simulated live-action image. The receiving module 200c receives the user's input as to whether the first walking path 910c in the first fused image 720c needs correction. If the user chooses correction and inputs a correction instruction through the receiving module 200c, line segments or regions can be deleted from, or added to, the first walking path 910c, so that a second walking path 920c is generated in the real-time image 530c or the simulated live-action image. The correction module 800c recognizes the user's correction instruction and fuses the coordinates of the second walking path 920c into the real-time image 530c or the simulated live-action image to generate a second fused image 740c. The sending module 600c sends the information of the second fused image 740c to the control module 150c, and the control module 150c makes the actuator 100c travel along the actual path in the mowing area corresponding to the second walking path 920c.
In another embodiment, the path generating module 900c provides preset path brushes, such as a zig-zag path brush and a linear path brush, for the user to select. The path generating module 900c presents the alternative path brushes on the interactive interface 520c; the user selects a path brush and brushes, in the real-time image 530c or the simulated live-action image, the area in which the actuator 100c is expected to operate, so that a zig-zag path or a linear path is generated in the corresponding area. By generating the corresponding walking path 910c in the real-time image 530c or the simulated live-action image, the control module 150c controls the actuator 100c to travel along the path in the actual mowing area corresponding to the walking path 910c.
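One plausible way to realize a path brush, sketched below under the assumption that the brushed image region is discretized into a grid: the user's strokes mark cells, and each brushed row becomes one straight mowing segment (alternating the direction per row would give the zig-zag brush). The grid size, stroke list and helper names are illustrative.

    def brush_mask(strokes, radius, grid_w, grid_h):
        # Mark every grid cell covered by a circular brush stamp.
        mask = [[False] * grid_w for _ in range(grid_h)]
        for sx, sy in strokes:
            for y in range(max(0, sy - radius), min(grid_h, sy + radius + 1)):
                for x in range(max(0, sx - radius), min(grid_w, sx + radius + 1)):
                    if (x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2:
                        mask[y][x] = True
        return mask

    def rows_to_segments(mask):
        # Each run of brushed cells in a row becomes one straight pass
        # (a linear brush); alternating direction per row gives a zig-zag.
        segments = []
        for y, row in enumerate(mask):
            x = 0
            while x < len(row):
                if row[x]:
                    x0 = x
                    while x < len(row) and row[x]:
                        x += 1
                    segments.append(((x0, y), (x - 1, y)))
                x += 1
        return segments

    mask = brush_mask([(5, 5), (6, 5), (7, 6)], radius=2, grid_w=20, grid_h=12)
    passes = rows_to_segments(mask)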
In another mode, the path generating module 900c may receive a pattern, a character, or another graphic sent by the user through the receiving module 200c, and calculate a corresponding walking path 910c from that graphic. The control module 150c then controls the actuator 100c to walk and mow along the generated walking path 910c, so that a mowing trace of the user-supplied graphic is printed in the mowing area, thereby enriching the appearance of the mowing area.
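As an illustration of printing a user-supplied graphic, the sketch below converts a small bitmap (here an ASCII block letter, standing in for the user's pattern) into ground segments to be mowed so that the cut/uncut contrast prints the figure on the lawn; the cell size and names are assumptions, not the patent's algorithm.

    PATTERN = [
        "#...#",
        "#...#",
        "#####",
        "#...#",
        "#...#",
    ]  # a block letter "H" standing in for the user's graphic

    def pattern_to_segments(pattern, cell_m=0.3):
        # Each horizontal run of '#' cells becomes one mowing segment,
        # expressed in metres in the actuator coordinate system.
        segments = []
        for row, line in enumerate(pattern):
            col = 0
            while col < len(line):
                if line[col] == "#":
                    start = col
                    while col < len(line) and line[col] == "#":
                        col += 1
                    segments.append(((start * cell_m, row * cell_m),
                                     ((col - 1) * cell_m, row * cell_m)))
                col += 1
        return segments

    print(pattern_to_segments(PATTERN))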
In the above embodiments, when the boundary generating module 700c, the path generating module 900c and the obstacle generating module 800b generate the corresponding virtual boundary, virtual obstacle identifier and walking path 910c, the subsequent working state of the actuator and the state of the mowing area after the mowing operation is completed may be previewed through the actuator model in the real-time image or the simulated live-action image displayed by the display module, so that the user knows in advance the mowing behavior and mowing effect of the actuator under the current settings. For example, the mowing operation and effect of the self-walking mowing system avoiding the first virtual obstacle identifier may be previewed through the real-time image or the simulated live-action image, which facilitates timely adjustment and setting of the self-walking mowing system by the user.
Using the simulated live-action image 540c or the real-time image 530c presented on the interactive interface 520c, the user determines, from the environmental features shown in that image in combination with the actual state of the mowing area, the position of an obstacle in the simulated live-action image 540c or the real-time image 530c, and selects or draws, through the receiving module 200c, the type of the obstacle and its position and size in the image. After the user inputs the related information, the image processor generates the corresponding simulated obstacle in the generated simulated live-action image 540c, and the control module 150c controls the actuator 100c to avoid the obstacle during operation.
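A minimal sketch of how such a user-placed obstacle might be represented once its type, position and size have been entered, assuming the screen position has already been converted into the actuator coordinate system; the class, field names and the waypoint-dropping avoidance are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class VirtualObstacle:
        kind: str        # preset model chosen by the user, e.g. "tree"
        x_m: float       # centre in the actuator coordinate system
        y_m: float
        radius_m: float  # footprint size chosen by the user

    def blocks(obstacle, px, py, safety_m=0.2):
        # True if a waypoint falls inside the footprint plus a safety margin.
        return ((px - obstacle.x_m) ** 2 + (py - obstacle.y_m) ** 2
                <= (obstacle.radius_m + safety_m) ** 2)

    tree = VirtualObstacle("tree", x_m=3.0, y_m=2.5, radius_m=0.4)
    path = [(x * 0.5, 2.5) for x in range(13)]
    # Crude avoidance for illustration: drop blocked waypoints; a real planner
    # would route around the footprint instead.
    safe_path = [p for p in path if not blocks(tree, *p)]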
Referring to fig. 26, the computing assembly further includes a guide channel setting module. The guide channel setting module controls the interactive interface 520c projected by the projection device 510 to generate a guide channel setting key or setting interface, through which the user adds a virtual guide channel identifier 560c to the simulated live-action image 540c or the real-time image 530c. The area to be worked may contain a plurality of relatively independent working areas, such as the front and rear courtyards of the user's yard; by adding a virtual guide channel identifier 560c between two independent working areas, the user can guide the actuator 100c to move from one working area to the other along a guide channel desired by the user. Specifically, the self-walking mowing system detects the mowing area and, when the working environment has a plurality of relatively independent working areas, identifies and generates a corresponding first virtual sub-mowing area 770c and second virtual sub-mowing area 780c; alternatively, the user selects the target working areas, choosing at least the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c through the simulated live-action image 540c. The guide channel setting module receives the virtual guide channel set by the user between the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c, and guides the walking path 910c of the actuator 100c between the first and second sub-mowing areas corresponding to the first virtual sub-mowing area 770c and the second virtual sub-mowing area 780c. The user selects the corresponding virtual guide channel identifier 560c in the simulated live-action image 540c according to the channel along which the actuator 100c is required to move between the first and second mowing areas, and the control module 150c controls the actuator 100c to travel according to the virtual guide channel identifier 560c fused in the simulated live-action image.
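The virtual guide channel can be thought of as a centreline plus a corridor width that bounds lateral deviation while the actuator transfers between sub-areas. The sketch below, with assumed endpoints and step size, merely samples waypoints along such a centreline.

    import math

    def guide_channel(start, end, width_m, step_m=0.5):
        # Waypoints along the centreline of the user-drawn guide channel; the
        # half-width is the lateral deviation the controller may allow.
        dx, dy = end[0] - start[0], end[1] - start[1]
        n = max(1, int(math.hypot(dx, dy) / step_m))
        centreline = [(start[0] + dx * i / n, start[1] + dy * i / n)
                      for i in range(n + 1)]
        return centreline, width_m / 2.0

    # e.g. from the exit of the first sub-area to the entrance of the second.
    waypoints, half_width = guide_channel((4.0, 0.0), (4.0, -6.0), width_m=1.0)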
The self-walking mowing system further includes a detection device for detecting the operating conditions of the actuator 100c, such as its machine parameters, working mode, fault conditions and alarm information. The display module can also display these machine parameters, the working mode, the fault conditions and the alarm information through the interactive interface; the data operation processor 310 computes the display information and controls the projection device to reflect the machine information dynamically in real time, so that the user can conveniently monitor and obtain the running state of the actuator.
To better detect the operating state of the actuator, the self-walking mowing system further comprises a voltage sensor and/or a current sensor, a rainfall sensor, and a boundary recognition sensor. Generally, these sensors may be disposed in the actuator. The voltage sensor and the current sensor detect the current and voltage values of the actuator during operation in order to analyze its current operating information. The rainfall sensor detects the rain conditions in the environment of the actuator. The boundary recognition sensor detects the boundary of the working area, and may be a sensor cooperating with a boundary electronic buried line, an imaging device that acquires environmental information by imaging, or a positioning device.
Optionally, the current rainfall information is detected by the rainfall sensor, and the image processor renders the generated simulated live-action image to show the corresponding rainfall scene and rainfall intensity. The surrounding environment and height information of the actuator are acquired through detection devices such as a laser radar, a camera and a state sensor, and are displayed correspondingly in the simulated live-action image. Optionally, a capacitive sensor is arranged to detect the load on the mowing blade, from which the grass height after operation of the actuator is simulated.
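As a rough illustration of inferring grass height from blade load, the sketch below maps a load percentage linearly onto a clamped height range; the range and the linear form are assumptions, and a real system would calibrate against the capacitive sensor's actual response.

    def estimate_grass_height(load_pct, h_min_cm=3.0, h_max_cm=12.0):
        # Clamp the blade-load percentage, then map it linearly onto an
        # assumed grass-height range.
        load_pct = max(0.0, min(100.0, load_pct))
        return h_min_cm + (h_max_cm - h_min_cm) * load_pct / 100.0

    print(estimate_grass_height(35.0))  # ~6.2 cm estimated before the pass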
The computing assembly in the above embodiments is communicatively connected to the actuator. At least part of the computing assembly may be disposed inside or outside the actuator, and it transmits signals to the controller of the actuator to control the operation of the output motor and the walking motor, thereby controlling the walking and mowing states of the actuator.
In a fifth embodiment of the present invention, referring to fig. 27, an outdoor self-walking device is provided, which may be a snow sweeper, comprising: an actuator 100d including a walking assembly 110d for realizing a walking function and a working assembly for realizing a preset function; a housing for supporting the actuator 100d; an image acquisition module 400d capable of acquiring a real-time image 530d including at least part of the working area and at least part of the working boundary; a display module 500d electrically or communicatively connected to the image acquisition module 400d and configured to display the real-time image 530d or a simulated live-action image 540d generated from the real-time image 530d; a boundary generating module 700d for generating, by calculating characteristic parameters, a first virtual boundary corresponding to the working boundary in the real-time image 530d so as to form a first fused image; a receiving module 200d for receiving information input by the user indicating whether the first virtual boundary in the first fused image needs to be corrected; a correction module 800d for receiving, when the user inputs information that the first virtual boundary needs to be corrected, a user instruction to correct the first virtual boundary and generate a second virtual boundary 730d in the real-time image 530d or the simulated live-action image 540d so as to form a second fused image; a sending module 600d for sending the first fused image when no correction is needed, or the corrected second fused image; and a control module 300d electrically or communicatively connected to the sending module 600d, the control module 300d controlling the actuator 100d to operate within the first virtual boundary or the second virtual boundary 730d.
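The confirm-or-correct flow of this embodiment can be summarized in a small pipeline sketch. Everything here is illustrative: the boundary detector is a stub, and the types and function names are not from the patent.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    Polygon = List[Tuple[float, float]]

    @dataclass
    class FusedImage:
        frame_id: int
        boundary_px: Polygon  # virtual boundary overlaid in pixel coordinates

    def detect_boundary(real_time_image) -> Polygon:
        # Stub for the boundary generating module; the patent computes this
        # from characteristic parameters of the real-time image.
        return [(50, 50), (590, 50), (590, 430), (50, 430)]

    def build_fused_image(real_time_image,
                          user_correction: Optional[Polygon] = None) -> FusedImage:
        # First fused image from the detected boundary; if the user chose to
        # correct it, the corrected polygon becomes the second virtual
        # boundary and a second fused image is formed instead.
        boundary = detect_boundary(real_time_image)
        if user_correction is not None:
            boundary = user_correction
        return FusedImage(frame_id=0, boundary_px=boundary)

    fused_1 = build_fused_image(real_time_image=None)  # accepted as detected
    fused_2 = build_fused_image(None, user_correction=[(60, 60), (580, 60),
                                                       (580, 420), (60, 420)])
    # Either fused image would then be sent to the control module, which keeps
    # the device operating inside the corresponding virtual boundary.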
Optionally, the boundary generating module 700d generates, by calculating characteristic parameters, a first virtual boundary corresponding to the working boundary in the real-time image 530d so as to form a first fused image; the sending module 600d sends the first fused image; and the control module 300d, electrically or communicatively connected to the sending module 600d, controls the actuator 100d to operate within the first virtual boundary.
Optionally, the outdoor self-walking device further includes an obstacle generating module that generates, according to an instruction input by the user, a virtual obstacle identifier corresponding to an obstacle in the real-time image 530d so as to form a first fused image. The image acquisition module 400d acquires a real-time image 530d including at least part of the working area and at least one obstacle located in the working area; the control module 300d, electrically or communicatively connected to the sending module 600d, controls the actuator 100d to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
Optionally, the obstacle generating module generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d so as to form a first fused image, and the control module 300d controls the actuator 100d to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image.
Optionally, the obstacle generating module generates, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image 530d or the simulated live-action image 540d so as to form a first fused image; the receiving module 200d receives information input by the user indicating whether the first virtual obstacle identifier in the first fused image needs to be corrected; when the user inputs information that the first virtual obstacle identifier needs to be corrected, the correction module 800d receives a user instruction to correct the first virtual obstacle identifier and generate a second virtual obstacle identifier in the real-time image 530d or the simulated live-action image 540d so as to form a second fused image; the sending module 600d sends the first fused image when no correction is needed, or the corrected second fused image; and the control module 300d, electrically or communicatively connected to the sending module 600d, controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image or the second virtual obstacle identifier in the second fused image.
Optionally, the obstacle generating module generates, according to an instruction input by the user, a first virtual obstacle identifier in the real-time image 530d or the simulated live-action image 540d so as to form a first fused image; the sending module 600d sends the first fused image; and the control module 300d, electrically or communicatively connected to the sending module 600d, controls the actuator 100d to avoid the first virtual obstacle identifier in the first fused image.
Optionally, the path generating module generates, according to an instruction input by the user, a walking path in the real-time image 530d or the simulated live-action image 540d so as to form a first fused image; the sending module 600d sends the first fused image; and the control module 300d, electrically or communicatively connected to the sending module 600d, controls the walking assembly 110d to walk along the walking path in the first fused image.
Optionally, the path generating module generates, by calculating characteristic parameters of the working area, a first walking path in the real-time image 530d or the simulated live-action image 540d so as to form a first fused image; the receiving module 200d receives information input by the user indicating whether the first walking path in the first fused image needs to be corrected; when the user inputs information that the first walking path needs to be corrected, the correction module 800d receives a user instruction to correct the first walking path and generate a second walking path in the real-time image 530d or the simulated live-action image 540d so as to form a second fused image; the sending module 600d sends the first fused image when no correction is needed, or the corrected second fused image; and the control module 300d, electrically or communicatively connected to the sending module 600d, controls the walking assembly 110d to walk along the first walking path in the first fused image or the second walking path in the second fused image.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.

Claims (10)

1. A self-propelled mowing system comprising:
an actuator comprising a mowing assembly for realizing a mowing function and a walking assembly for realizing a walking function;
a housing for supporting the actuator;
an image capture module capable of capturing a real-time image including at least a portion of a mowing area and at least one obstacle located within the mowing area;
a display module electrically or communicatively connected to the image acquisition module, the display module configured to display the real-time image or a simulated live-action image generated from the real-time image;
an obstacle generating module that generates, according to an instruction input by a user, a virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated live-action image so as to form a first fused image;
a sending module for sending information of the first fused image;
and a control module electrically or communicatively connected to the sending module, the control module controlling the actuator to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
2. The self-propelled mowing system according to claim 1, wherein the display module comprises a projection device through which the simulated live-action image or the real-time image is projected, the projection device comprising one of a mobile phone screen, a hardware display screen, VR glasses and AR glasses.
3. The self-propelled mowing system according to claim 2, wherein the control module comprises a data operation processor for processing data and an image processor for image generation and scene modeling, the data operation processor establishing a pixel coordinate system and an actuator coordinate system to convert position information of the virtual obstacle identifier into actual obstacle position information.
4. The self-propelled mowing system according to claim 2, wherein the obstacle generating module is arranged to include preset obstacle models for adding the virtual obstacle identifier, the preset obstacle models comprising at least one or a combination of a stone model, a tree model and a flower model.
5. A self-propelled mowing system comprising:
an actuator comprising a mowing assembly for realizing a mowing function and a walking assembly for realizing a walking function;
a housing for supporting the actuator;
an image capture module capable of capturing a real-time image including at least a portion of a mowing area and at least one obstacle located within the mowing area;
a display module electrically or communicatively connected to the image acquisition module, the display module configured to display the real-time image or a simulated live-action image generated from the real-time image;
an obstacle generating module for generating, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image or the simulated live-action image so as to form a first fused image;
a sending module for sending information of the first fused image;
and a control module electrically or communicatively connected to the sending module, the control module controlling the actuator to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image.
6. The self-propelled mowing system according to claim 5, wherein the image acquisition module comprises one or a combination of an image sensor, a laser radar, an ultrasonic sensor, a camera and a TOF sensor.
7. The self-propelled mowing system according to claim 5, wherein the self-propelled mowing system further comprises a boundary generating module that generates a first virtual boundary according to information of a first boundary of the mowing area acquired by the image acquisition module, the control module controlling the actuator to walk and work within the first boundary corresponding to the first virtual boundary.
8. The self-propelled mowing system according to claim 7, wherein the self-propelled mowing system further comprises a path generating module that automatically generates a walking path within the first virtual boundary, the control module controlling the actuator to walk and work within the first boundary according to the walking path.
9. An outdoor self-walking device comprising:
an actuator comprising a walking assembly for realizing a walking function and a working assembly for realizing a preset function;
a housing for supporting the actuator;
an image acquisition module capable of acquiring a real-time image comprising at least a portion of a work area and at least one obstacle located within the work area;
a display module electrically or communicatively connected to the image acquisition module, the display module configured to display the real-time image or a simulated live-action image generated from the real-time image;
an obstacle generating module that generates, according to an instruction input by a user, a virtual obstacle identifier corresponding to the obstacle in the real-time image so as to form a first fused image;
a sending module for sending information of the first fused image;
and a control module electrically or communicatively connected to the sending module, the control module controlling the actuator to avoid the obstacle corresponding to the virtual obstacle identifier in the first fused image.
10. An outdoor self-walking device comprising:
an actuator comprising a walking assembly for realizing a walking function and a working assembly for realizing a preset function;
a housing for supporting the actuator;
an image acquisition module capable of acquiring a real-time image comprising at least a portion of a work area and at least one obstacle located within the work area;
a display module electrically or communicatively connected to the image acquisition module, the display module configured to display the real-time image or a simulated live-action image generated from the real-time image;
an obstacle generating module for generating, by calculating characteristic parameters, a first virtual obstacle identifier corresponding to the obstacle in the real-time image so as to form a first fused image;
a sending module for sending information of the first fused image;
and a control module electrically or communicatively connected to the sending module, the control module controlling the actuator to avoid the obstacle corresponding to the first virtual obstacle identifier in the first fused image.
CN201911409198.8A 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment Pending CN112684785A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910992552 2019-10-18
CN2019109925528 2019-10-18

Publications (1)

Publication Number Publication Date
CN112684785A (en)

Family

ID=75445228

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201911409198.8A Pending CN112684785A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911409440.1A Pending CN112673799A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911409201.6A Pending CN112764416A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911417081.4A Pending CN112684786A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN201911409440.1A Pending CN112673799A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911409201.6A Pending CN112764416A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment
CN201911417081.4A Pending CN112684786A (en) 2019-10-18 2019-12-31 Self-walking mowing system and outdoor walking equipment

Country Status (1)

Country Link
CN (4) CN112684785A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112673799A (en) * 2019-10-18 2021-04-20 南京德朔实业有限公司 Self-walking mowing system and outdoor walking equipment
CN116088533A (en) * 2022-03-24 2023-05-09 未岚大陆(北京)科技有限公司 Information determination method, remote terminal, device, mower and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113950934A (en) * 2021-11-02 2022-01-21 甘肃畜牧工程职业技术学院 Lawn mower visual system capable of being remotely controlled
CN114115265A (en) * 2021-11-23 2022-03-01 未岚大陆(北京)科技有限公司 Path processing method of self-moving equipment and self-moving equipment
CN115500143B (en) * 2022-11-02 2023-08-29 无锡君创飞卫星科技有限公司 Mower control method and device with laser radar

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2689650A1 (en) * 2012-07-27 2014-01-29 Honda Research Institute Europe GmbH Trainable autonomous lawn mower
CN105468033A (en) * 2015-12-29 2016-04-06 上海大学 Control method for medical suspension alarm automatic obstacle avoidance based on multi-camera machine vision
CN106155053A (en) * 2016-06-24 2016-11-23 桑斌修 A kind of mowing method, device and system
CN206115271U (en) * 2016-09-20 2017-04-19 深圳市银星智能科技股份有限公司 Mobile robot with manipulator arm traction device
CN106647765A (en) * 2017-01-13 2017-05-10 深圳拓邦股份有限公司 Planning platform based on mowing robot
WO2018053942A1 (en) * 2016-09-20 2018-03-29 深圳市银星智能科技股份有限公司 Mobile robot and navigation method therefor
CN108444390A (en) * 2018-02-08 2018-08-24 天津大学 A kind of pilotless automobile obstacle recognition method and device
CN110168466A (en) * 2017-11-16 2019-08-23 南京德朔实业有限公司 Intelligent mowing system

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3318170B2 (en) * 1995-11-02 2002-08-26 株式会社日立製作所 Route generation method for automatic traveling machinery
JP3237705B2 (en) * 1999-02-04 2001-12-10 日本電気株式会社 Obstacle detection device and moving object equipped with obstacle detection device
CN101777263B (en) * 2010-02-08 2012-05-30 长安大学 Traffic vehicle flow detection method based on video
TW201305761A (en) * 2011-07-21 2013-02-01 Ememe Robot Co Ltd An autonomous robot and a positioning method thereof
KR101334961B1 (en) * 2011-08-03 2013-11-29 엘지전자 주식회사 Lawn mower robot system and control method for the same
CN103891464B (en) * 2012-12-28 2016-08-17 苏州宝时得电动工具有限公司 Automatically mow system
US9420741B2 (en) * 2014-12-15 2016-08-23 Irobot Corporation Robot lawnmower mapping
US10583561B2 (en) * 2017-08-31 2020-03-10 Neato Robotics, Inc. Robotic virtual boundaries
CN108829103A (en) * 2018-06-15 2018-11-16 米亚索能光伏科技有限公司 Control method, weeder, terminal, equipment and the storage medium of weeder
CN109258060B (en) * 2018-08-24 2020-04-21 宁波市德霖机械有限公司 Map construction intelligent mower based on special image identification recognition
CN109062225A (en) * 2018-09-10 2018-12-21 扬州方棱机械有限公司 The method of grass-removing robot and its generation virtual boundary based on numerical map
CN109491397B (en) * 2019-01-14 2021-07-30 傲基科技股份有限公司 Mowing robot and mowing area defining method thereof
CN109634286B (en) * 2019-01-21 2021-06-25 傲基科技股份有限公司 Visual obstacle avoidance method for mowing robot, mowing robot and readable storage medium
CN109634287B (en) * 2019-01-22 2022-02-01 重庆火虫创新科技有限公司 Mower path planning method and system
CN109871013B (en) * 2019-01-31 2022-12-09 莱克电气股份有限公司 Cleaning robot path planning method and system, storage medium and electronic equipment
CN112684785A (en) * 2019-10-18 2021-04-20 南京德朔实业有限公司 Self-walking mowing system and outdoor walking equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2689650A1 (en) * 2012-07-27 2014-01-29 Honda Research Institute Europe GmbH Trainable autonomous lawn mower
US20140032033A1 (en) * 2012-07-27 2014-01-30 Honda Research Institute Europe Gmbh Trainable autonomous lawn mower
CN105468033A (en) * 2015-12-29 2016-04-06 上海大学 Control method for medical suspension alarm automatic obstacle avoidance based on multi-camera machine vision
CN106155053A (en) * 2016-06-24 2016-11-23 桑斌修 A kind of mowing method, device and system
CN206115271U (en) * 2016-09-20 2017-04-19 深圳市银星智能科技股份有限公司 Mobile robot with manipulator arm traction device
WO2018053942A1 (en) * 2016-09-20 2018-03-29 深圳市银星智能科技股份有限公司 Mobile robot and navigation method therefor
CN106647765A (en) * 2017-01-13 2017-05-10 深圳拓邦股份有限公司 Planning platform based on mowing robot
CN110168466A (en) * 2017-11-16 2019-08-23 南京德朔实业有限公司 Intelligent mowing system
CN108444390A (en) * 2018-02-08 2018-08-24 天津大学 A kind of pilotless automobile obstacle recognition method and device


Also Published As

Publication number Publication date
CN112684786A (en) 2021-04-20
CN112764416A (en) 2021-05-07
CN112673799A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN112684785A (en) Self-walking mowing system and outdoor walking equipment
EP3553620B1 (en) Robotic vehicle grass structure detection
US10444760B2 (en) Robotic vehicle learning site boundary
WO2019096264A1 (en) Smart lawn mowing system
CN109287246B (en) Intelligent mower for building map based on laser radar
EP3234717B1 (en) Robot vehicle parcel navigation following a minimum workload path.
US20170303466A1 (en) Robotic vehicle with automatic camera calibration capability
US20150163993A1 (en) Autonomous gardening vehicle with camera
EP3158409B1 (en) Garden visualization and mapping via robotic vehicle
CN113128747B (en) Intelligent mowing system and autonomous image building method thereof
US10809740B2 (en) Method for identifying at least one section of a boundary edge of an area to be treated, method for operating an autonomous mobile green area maintenance robot, identifying system and green area maintenance system
EP3998517A1 (en) Self-propelled mowing system, and method of performing supplementary mowing operation on missed regions
US20200233413A1 (en) Method for generating a representation and system for teaching an autonomous device operating based on such representation
CN113115621B (en) Intelligent mowing system and autonomous image building method thereof
CN114721385A (en) Virtual boundary establishing method and device, intelligent terminal and computer storage medium
US20220217902A1 (en) Self-moving mowing system, self-moving mower and outdoor self-moving device
CN114937258B (en) Control method for mowing robot, and computer storage medium
CN114995444A (en) Method, device, remote terminal and storage medium for establishing virtual working boundary
US20230320263A1 (en) Method for determining information, remote terminal, and mower
WO2024038852A1 (en) Autonomous operating zone setup for a working vehicle or other working machine

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 211106 No. 529, 159, Jiangjun Avenue, Jiangning District, Nanjing, Jiangsu Province

Applicant after: Nanjing Quanfeng Technology Co.,Ltd.

Address before: No. 529, Jiangjun Avenue, Jiangning Economic and Technological Development Zone, Nanjing, Jiangsu Province

Applicant before: NANJING CHERVON INDUSTRY Co.,Ltd.

SE01 Entry into force of request for substantive examination