WO2024008016A1 - Work map construction method, device, lawn mowing robot and storage medium - Google Patents

Work map construction method, device, lawn mowing robot and storage medium

Info

Publication number
WO2024008016A1
WO2024008016A1 (PCT/CN2023/105187)
Authority
WO
WIPO (PCT)
Prior art keywords
map
target
laser point
point cloud
obstacles
Prior art date
Application number
PCT/CN2023/105187
Other languages
English (en)
French (fr)
Inventor
王宁
杜鹏举
黄振昊
Original Assignee
松灵机器人(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 松灵机器人(深圳)有限公司
Publication of WO2024008016A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data
    • AHUMAN NECESSITIES
    • A01AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01DHARVESTING; MOWING
    • A01D34/00Mowers; Mowing apparatus of harvesters
    • A01D34/006Control or measuring arrangements
    • A01D34/008Control or measuring arrangements for automated or remotely controlled operation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3837Data obtained from a single source

Definitions

  • This application relates to the field of computer technology, and specifically to a method and device for constructing a work map, a lawn mowing robot, and a storage medium.
  • Lawn mowing robots are widely used in the maintenance of home courtyard lawns and the mowing of large lawns.
  • the lawn mowing robot combines motion control, multi-sensor fusion and path planning technologies.
  • the mowing path of the lawn mower robot needs to be planned so that it can completely cover all working areas.
  • Embodiments of the present application provide a work map construction method, device, lawn mowing robot, and storage medium, which can improve work map construction efficiency.
  • embodiments of the present application provide a method for constructing a job map, including:
  • an operation area and a non-operation area are divided in the target map.
  • Identify candidate obstacles including:
  • the reflection value corresponding to the three-dimensional laser point and the three-dimensional coordinates are determined in the target map.
  • determining candidate obstacles in the target map based on the map coordinates, the reflection value corresponding to the three-dimensional laser point, and the three-dimensional coordinates includes:
  • candidate obstacles are determined in the target map.
  • rendering the reflection value corresponding to the three-dimensional laser point into the target map includes:
  • the reflection value corresponding to the three-dimensional laser point is rendered into the target map.
  • determining a target obstacle among the candidate obstacles based on the characteristic image includes:
  • the candidate obstacle with the classification label as the target label is determined as the target obstacle.
  • dividing the operating area and the non-operating area in the target map according to the target obstacle includes:
  • outputting an isolation curve surrounding the target obstacle according to the contour information and the position of the target obstacle in the target map;
  • the area within the isolation curve is determined as the non-operating area, and the area outside the non-operating area is determined as the operating area.
  • the method further includes:
  • the non-working area is highlighted using a second color.
  • a job map construction device including:
  • Collection module used to collect laser point cloud data of target maps
  • a first determination module configured to determine candidate obstacles in the target map according to the laser point cloud data
  • An acquisition module used to acquire the characteristic image of the candidate obstacle
  • a second determination module configured to determine a target obstacle among the candidate obstacles based on the characteristic image
  • a dividing module configured to divide operating areas and non-operating areas in the target map according to the target obstacles.
  • candidate obstacles are determined in the target map based on the laser point cloud data; characteristic images of the candidate obstacles are then obtained, and the target obstacle is determined among the candidate obstacles based on the characteristic images. Finally, the operating area and the non-operating area are divided in the target map according to the target obstacle.
  • Figure 1a is a schematic diagram of a scenario of a method for constructing a job map provided by an embodiment of the present application
  • Figure 1b is a schematic flowchart of a method for constructing a job map provided by an embodiment of the present application
  • Figure 2a is a schematic structural diagram of a work map construction device provided by an embodiment of the present application.
  • Figure 2b is another structural schematic diagram of the operation map construction device provided by the embodiment of the present application.
  • Figure 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the term "connection" can refer to either a fixed connection or an electrical-circuit connection.
  • the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Therefore, features defined as "first" and "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments of this application, "plurality" means two or more, unless otherwise explicitly and specifically limited.
  • Embodiments of the present application provide a work map construction method, device, lawn mowing robot, and storage medium.
  • the operation map construction device can be integrated in the microcontroller unit (MCU) of the lawn mowing robot, or in a smart terminal or server.
  • an MCU is also called a single-chip microcomputer (Single Chip Microcomputer).
  • by integrating a CPU (Central Processing Unit) with peripherals such as memory, a timer/counter, USB, analog-to-digital and digital-to-analog converters, and UART, PLC, and DMA interfaces, a chip-level computer is formed that can perform different combinations of control for different applications.
  • the lawn mowing robot can travel automatically while avoiding collisions, automatically return to its charging station within a defined range, provides safety detection and battery power detection, and has a certain climbing ability. It is especially suitable for lawn mowing and maintenance in home courtyards, public green spaces and similar places. Its characteristics include automatic grass cutting, cleaning of grass clippings, automatic rain avoidance, automatic charging, automatic obstacle avoidance, a compact appearance, an electronic virtual fence, and network control.
  • the terminal can be a smartphone, tablet, laptop, desktop computer, smart speaker, smart watch, etc., but is not limited to this. Terminals and servers can be connected directly or indirectly through wired or wireless communication methods.
  • the server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms; this application places no limitation here.
  • This application provides a lawn mowing system, including a lawn mowing robot 10, a server 20 and a user device 30 that have established communication connections with each other.
  • the lawn mowing robot 10 is equipped with a lidar, which can collect laser point cloud data in the environment corresponding to the target map through the lidar. Then, the lawn mowing robot 10 can determine candidate obstacles in the target map based on the laser point cloud data.
  • the target map can be an environmental map or an electronic map. Subsequently, the camera in the lawn mowing robot 10 obtains the characteristic image of the candidate obstacles, and determines the target obstacle among the candidate obstacles based on the characteristic image.
  • the lawn mowing robot 10 divides the working area and the non-working area in the target map according to the target obstacles. After the division, the lawn mowing robot 10 can synchronize the data of the working area and the non-working area to the server 20 and the user device 30 to facilitate subsequent monitoring of the lawn mowing operation of the lawn mowing robot 10.
  • the mowing plan provided by this application uses laser point cloud data to determine candidate obstacles in the target map, determines the target obstacles based on the characteristic images of the candidate obstacles, and finally divides the operating areas and non-operating areas in the target map based on the target obstacles. That is, by combining image vision technology with laser point cloud technology, the problem of missed or wrong divisions caused by manually dividing operating and non-operating areas can be avoided. In addition, since the lidar scans the operating environment uniformly, the locations of all obstacles can be determined at one time without delineating obstacles one by one, which avoids the slow overall delineation and low mapping efficiency that occur when obstacles are large. As a result, this solution improves the efficiency of work map construction.
  • a method for constructing a work map includes: collecting laser point cloud data in the environment corresponding to the target map, determining candidate obstacles in the target map based on the laser point cloud data, obtaining characteristic images of the candidate obstacles, determining the target obstacle among the candidate obstacles based on the characteristic images, and dividing the operating area and non-operating area in the target map according to the target obstacle.
  • Figure 1b is a schematic flowchart of a job map construction method provided by an embodiment of the present application.
  • the specific process of this job map construction method can be as follows:
  • the target map is the mowing map corresponding to the lawn mower robot.
  • the lawn mower robot can perform lawn mowing operations in the area corresponding to the target map. Subsequently, the operating area and non-operating area can be divided in the target map; the operating area does not include buildings such as houses on which operations cannot be performed.
  • a three-dimensional lidar can be installed on the body of the lawn mowing robot.
  • the three-dimensional lidar is a measuring instrument that instantaneously measures three-dimensional coordinate values in space through the principle of laser ranging (including pulse laser and phase laser).
  • the spatial point cloud data obtained by 3D laser scanning technology can quickly establish a 3D visualization model of complex and irregular scenes.
  • three-dimensional lidar is based on the simultaneous localization and mapping (SLAM) method to obtain the pose information and three-dimensional point cloud corresponding to each collection point.
  • SLAM simultaneous localization and mapping
  • Three-dimensional lidar can be a mobile data collection device such as a handheld, backpack or vehicle-mounted device to achieve mobile scanning.
  • the collection point initially collected by the 3D lidar is used as the origin of the coordinates, and a point cloud coordinate system is constructed.
  • the initial collection here refers to the 3D lidar collecting the first frame of the 3D point cloud corresponding to the 3D point cloud map.
  • the collection point can be the location of the center of gravity of the three-dimensional lidar or a fixed reference point on the device, as long as it meets the requirements of establishing a coordinate system and defining the origin of the coordinates.
  • the Z-axis lies in the vertical scanning plane with upward as positive, while the X- and Y-axes lie in the horizontal scanning plane; the three axes are mutually perpendicular and form a left-handed coordinate system.
  • the real-time pose of the 3D lidar and the 3D point cloud at that moment can be obtained in real time based on the SLAM method.
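As a minimal sketch (not taken from the application itself), the pose produced by SLAM can be applied to a scanned point as a rotation plus a translation; the `transform_to_cloud_frame` helper and the identity-pose example below are hypothetical:

```python
def transform_to_cloud_frame(point, rotation, translation):
    """Transform a 3D laser point from the sensor frame into the point
    cloud coordinate system using the pose (3x3 rotation matrix plus
    translation vector) estimated by SLAM for the current collection point."""
    return [
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    ]

# Identity rotation with a pure 10 m shift along X: the point is translated.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
p = transform_to_cloud_frame([1.0, 2.0, 0.5], identity, [10.0, 0.0, 0.0])
# p == [11.0, 2.0, 0.5]
```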
  • reflection intensity is an important characteristic of laser sensors and can reflect the materials in the environment. Therefore, the reflection intensity can be used to determine candidate obstacles in the target map; that is, optionally, in some embodiments, the step "determining candidate obstacles in the target map based on laser point cloud data" may specifically include:
  • the corresponding pixel value can be rendered in the target map.
  • for example, a three-dimensional laser point with a reflection value of a may have a corresponding pixel value of 10, while a three-dimensional laser point with a reflection value of b may have a corresponding pixel value of 45; this can be set according to the actual situation and will not be described again here.
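As an illustrative sketch of the rendering step, reflection values can be mapped to pixel values via a lookup table; the mapping below mirrors the example values given above (a → 10, b → 45) but is otherwise hypothetical:

```python
# Hypothetical reflection-value -> pixel-value mapping; the application only
# gives example values (reflection a -> 10, reflection b -> 45).
REFLECTION_TO_PIXEL = {"a": 10, "b": 45}

def render_reflection(grid, u, v, reflection):
    """Write the pixel value for a reflection class into the 2D map grid
    at map coordinates (u, v)."""
    grid[v][u] = REFLECTION_TO_PIXEL[reflection]
    return grid

grid = [[0] * 4 for _ in range(4)]
render_reflection(grid, 1, 2, "a")   # cell at column 1, row 2 becomes 10
```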
  • the map coordinate system of the target map is a two-dimensional coordinate system
  • the coordinates of the three-dimensional laser point also include height information. It is also necessary to obtain the point cloud coordinate system corresponding to the laser point cloud data in order to subsequently determine candidate obstacles in the target map. That is, optionally, in some embodiments, the step "determine candidate obstacles in the target map based on the map coordinates, the reflection value corresponding to the three-dimensional laser point, and the three-dimensional coordinates" may specifically include:
  • the reflection value corresponding to the three-dimensional laser point is rendered into the target map
  • the three-dimensional coordinates corresponding to the three-dimensional laser point can be converted into map coordinates in the target map according to the conversion relationship between the map coordinate system and the point cloud coordinate system. Then, based on the converted map coordinates, the reflection value corresponding to the three-dimensional laser point can be rendered into the target map. Finally, candidate obstacles are determined in the target map based on the pixel values in the rendered target map.
  • the step "rendering the reflection value corresponding to the three-dimensional laser point into the target map based on the three-dimensional coordinates corresponding to the three-dimensional laser point and the conversion relationship between the map coordinate system and the point cloud coordinate system" may specifically include:
  • the reflection value corresponding to the three-dimensional laser point is rendered into the target map.
  • the three-dimensional coordinates corresponding to the three-dimensional laser point can be converted through a preset formula.
  • the preset formula represents the conversion relationship between the three-dimensional coordinate system and the two-dimensional coordinate system, that is, between the point cloud coordinate system and the map coordinate system.
  • the conversion involves the world coordinate system, the camera coordinate system, and the image coordinate system. The world coordinate system (xw, yw, zw), also known as the measurement coordinate system, is a three-dimensional rectangular coordinate system in which the spatial positions of the camera and the object to be measured can be described;
  • the camera coordinate system (xc, yc, zc) is also a three-dimensional rectangular coordinate system.
  • the origin is located at the optical center of the lens.
  • the xc and yc axes are parallel to both sides of the image plane respectively.
  • the zc-axis is the optical axis of the lens and is perpendicular to the image plane; the image coordinate system (x, y) is a two-dimensional rectangular coordinate system on the image plane.
  • the origin of the image coordinate system is the intersection of the lens optical axis and the image plane (also called the principal point). Its x-axis is parallel to the xc-axis of the camera coordinate system, and its y-axis is parallel to the yc-axis of the camera coordinate system.
  • the coordinate conversion relationship can be determined based on the external parameters between the lidar device and the image acquisition device (that is, the acquisition device of the target map), and the internal parameters of the image acquisition device.
  • External parameters refer to the parameters of the image acquisition device in the world coordinate system, such as the position and rotation direction of the image acquisition device;
  • internal parameters refer to parameters related to the characteristics of the image acquisition device itself, such as the focal length and pixel size of the image acquisition device, etc.
  • fx and fy are the focal lengths of the image acquisition device, and cx and cy are the coordinates of its principal point. For a point cloud three-dimensional coordinate pij (x, y, z), the corresponding two-dimensional map coordinate is p'i'j' (x', y').
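Using the intrinsics fx, fy, cx, cy mentioned above, the conversion from a 3D point to 2D map coordinates can be sketched as a standard pinhole projection; the numeric values below are illustrative only, not taken from the application:

```python
def project_to_map(x, y, z, fx, fy, cx, cy):
    """Pinhole projection of a 3D laser point (x, y, z) in the camera
    frame to 2D coordinates (x', y') using the focal lengths fx, fy
    and the principal point cx, cy."""
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return fx * x / z + cx, fy * y / z + cy

# A point 2 m ahead and 0.5 m to the side, with fx = fy = 500, cx = cy = 320:
u, v = project_to_map(0.5, 0.0, 2.0, 500.0, 500.0, 320.0, 320.0)
# u = 500 * 0.5 / 2 + 320 = 445.0, v = 320.0
```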
  • the candidate obstacles are determined through laser point cloud data, that is, obstacles are identified based on the reflection of the lidar signal.
  • lawn mowing robots are generally used for outdoor operations. If there are partially transparent materials between the scanner and the measured object, such as the rain, snow, or dust common in outdoor environments, part of the laser energy is reflected back early; as long as the trigger threshold is reached, this is taken to be the measured object, which leads to measurement errors. Therefore, the candidate obstacles determined from the laser point cloud data may not be real obstacles, and this application combines visual technology with point cloud technology for obstacle identification.
  • a convolutional neural network can also be used to determine the target obstacle among the candidate obstacles; that is, optionally, in some embodiments, the step "determining the target obstacle among the candidate obstacles based on the feature image" may specifically include:
  • the image classification network may be pre-trained, and the image classification network may specifically include:
  • Convolution layer: mainly used for feature extraction from input images (such as training samples or images that need to be recognized).
  • the size and number of convolution kernels can be determined according to the actual application. For example, the convolution kernel sizes from the first convolutional layer to the fourth convolutional layer can be (7, 7), (5, 5), (3, 3), and (3, 3). Optionally, in order to reduce computational complexity and improve calculation efficiency, the convolution kernel sizes of all four convolution layers can be set to (3, 3), the activation functions all use "relu" (Rectified Linear Unit), and the padding mode is set to "same".
  • the "same" padding mode can be simply understood as padding the edges with zeros, where the number of zeros padded on the left (top) and the number padded on the right (bottom) are equal or differ by one.
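The "same" padding arithmetic can be sketched as follows; splitting an odd total so that the extra zero goes on the right/bottom follows a common framework convention (e.g. TensorFlow) and is an assumption, since the application does not specify which side gets the extra zero:

```python
def same_padding(input_size, kernel_size, stride=1):
    """Total zero padding for a 'same' convolution along one axis, split
    into (left, right). With stride 1 the output size equals the input
    size; an odd total puts the extra zero on the right (bottom)."""
    output_size = -(-input_size // stride)  # ceil(input_size / stride)
    total = max((output_size - 1) * stride + kernel_size - input_size, 0)
    left = total // 2
    right = total - left
    return left, right

# 3x3 kernel, stride 1: one zero on each edge of a length-5 input.
```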
  • the convolutional layers can be directly connected to each other to speed up network convergence.
  • the downsampling operation is basically the same as the convolution operation, except that the downsampling kernel only takes the maximum value (max pooling) or the average value (average pooling) of the corresponding positions. For convenience of description, the embodiments of this application are illustrated with the downsampling operation performed in the second and third convolution layers, with max pooling as the specific downsampling operation.
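The max-pooling downsampling described above can be illustrated with a minimal sketch; the 2x2 window and stride of 2 are assumed for illustration, as the application does not fix these parameters:

```python
def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 over a 2D feature map (nested lists).
    Assumes even height and width for simplicity."""
    h, w = len(feature_map), len(feature_map[0])
    return [
        [
            max(feature_map[i][j], feature_map[i][j + 1],
                feature_map[i + 1][j], feature_map[i + 1][j + 1])
            for j in range(0, w, 2)
        ]
        for i in range(0, h, 2)
    ]

pooled = max_pool_2x2([[1, 3, 2, 0],
                       [4, 2, 1, 1],
                       [0, 0, 5, 6],
                       [1, 2, 7, 8]])
# → [[4, 2], [2, 8]]
```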
  • for convenience of description, the layer where the activation function is located and the downsampling layer are both counted as part of the convolution layer. It should be understood that the structure can also be considered to include a convolution layer, the layer where the activation function is located, a downsampling layer (i.e., pooling layer), and a fully connected layer; of course, it can also include an input layer for input data and an output layer for output data, which will not be repeated here.
  • Fully connected layer: maps the learned features to the sample label space and mainly plays the role of a "classifier" in the entire convolutional neural network. Each node of the fully connected layer is connected to all the nodes output by the previous layer (such as the downsampling layer in the convolution layer); a node in the fully connected layer is called a neuron of the fully connected layer. The number of neurons in the fully connected layer can be determined according to the needs of the actual application; for example, the number of neurons in the fully connected layer can be set to 512, or to 128, and so on.
  • nonlinear factors can also be introduced through an activation function; for example, the sigmoid (S-shaped) activation function can be added.
  • the image classification network can be used to identify the feature image, obtain the probability that the candidate obstacle belongs to each type of obstacle, and output the corresponding classification label based on this probability. Finally, the candidate obstacle whose classification label is the target label is determined as the target obstacle; for example, a candidate obstacle whose classification label is flower bed is determined as the target obstacle.
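A minimal sketch of this final label-selection step is shown below; the candidate names, class labels, and probabilities are hypothetical, and the classification network itself is replaced by precomputed per-class probabilities:

```python
def pick_target_obstacles(candidates, target_label):
    """Attach the most likely classification label to each candidate
    obstacle (argmax over its class probabilities) and keep only the
    candidates whose label matches the target label."""
    targets = []
    for name, probs in candidates.items():
        label = max(probs, key=probs.get)  # argmax over class probabilities
        if label == target_label:
            targets.append(name)
    return targets

# Hypothetical network outputs: obs1 is probably a flower bed,
# obs2 is probably rain/dust noise rather than a real obstacle.
candidates = {
    "obs1": {"flower_bed": 0.8, "rain_noise": 0.2},
    "obs2": {"flower_bed": 0.1, "rain_noise": 0.9},
}
targets = pick_target_obstacles(candidates, "flower_bed")
# → ["obs1"]
```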
  • the outline information of the target obstacle can be obtained, and the operating area and the non-operating area can be divided in the target map. For example, a curve can be output to surround the target obstacle.
  • dividing the operating areas in the target map may specifically include:
  • the area within the isolation curve is determined as the non-operating area, and the area outside the non-operating area is determined as the operating area.
  • the preset height can be set slightly higher than the height of the lawn mower robot; for example, if the height of the lawn mower robot is 30 cm, the preset height can be set to 35 cm to ensure that the mowing operation of the robot is not interrupted by obstacles.
  • the distance between adjacent target obstacles can be calculated, and the target obstacles with a distance less than a threshold can be divided into the same non-operation area.
  • the threshold can be set based on the size of the lawn mowing robot, which avoids delineating a working area too small for the lawn mowing robot to operate in, and thus prevents such areas from affecting the subsequent mowing process, improving subsequent mowing efficiency.
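The distance-based merging of nearby target obstacles into one non-operation area can be sketched with a simple union-find; the obstacle positions and the 2 m threshold below are illustrative assumptions:

```python
import math

def group_obstacles(positions, threshold):
    """Merge target obstacles whose pairwise distance is below the
    threshold into the same non-operation area (union-find over indices).
    Returns a list of index groups, one group per merged area."""
    parent = list(range(len(positions)))

    def find(i):
        # Path-halving find.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Obstacles at (0, 0) and (1, 0) merge under a 2 m threshold;
# the obstacle at (10, 0) stays in its own non-operation area.
areas = group_obstacles([(0, 0), (1, 0), (10, 0)], 2.0)
```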
  • the operating area and the non-operating area can be distinguished by different colors; that is, optionally, after the step "dividing the operating area and the non-operating area in the target map according to the target obstacles", the method may specifically further include:
  • the working area is highlighted in a first color, and the non-working area is highlighted in a second color.
  • the first color may be yellow and the second color may be red.
  • the selection may be made according to the actual situation and will not be described again here.
  • candidate obstacles are determined in the target map based on the laser point cloud data; characteristic images of the candidate obstacles are then obtained, and the target obstacles are determined among the candidate obstacles based on the characteristic images.
  • the operating area and the non-operating area are then divided in the target map according to the target obstacles.
  • laser point cloud data is used to determine candidate obstacles in the target map, the target obstacle is determined based on the characteristic images of the candidate obstacles, and finally the operating areas and non-operating areas are divided in the target map based on the target obstacles. That is, by combining image vision technology with laser point cloud technology, the problem of missed or incorrect divisions caused by manually dividing operating and non-operating areas can be avoided. In addition, since the lidar scans the operating environment uniformly, the locations of all obstacles can be determined at one time without delineating obstacles one by one, which avoids the slow delineation and low mapping efficiency that occur when obstacles are large. As a result, this solution improves the efficiency of work map construction.
  • Figure 2a is a schematic structural diagram of a work map construction device provided by an embodiment of the present application.
  • the work map construction device may include a collection module 201, a first determination module 202, an acquisition module 203, a second determination module 204, and a division module 205, which may specifically be as follows:
  • the collection module 201 is used to collect laser point cloud data in the environment corresponding to the target map.
  • the collection module 201 can use the spatial point cloud data obtained by three-dimensional laser scanning technology to quickly establish three-dimensional visualization models of complex and irregular scenes.
  • the first determination module 202 is used to determine candidate obstacles in the target map according to the laser point cloud data.
  • the first determination module 202 may specifically include:
  • An extraction unit used to extract the reflection value and three-dimensional coordinates corresponding to each three-dimensional laser point from the laser point cloud data
  • the determination unit is used to determine candidate obstacles in the target map based on the map coordinates, the reflection value corresponding to the three-dimensional laser point, and the three-dimensional coordinates.
  • the determining unit may specifically include:
  • the first determination subunit is used to determine the point cloud coordinate system corresponding to the laser point cloud data
  • the rendering subunit is used to render the reflection value corresponding to the three-dimensional laser point into the target map based on the three-dimensional coordinates corresponding to the three-dimensional laser point and the conversion relationship between the map coordinate system and the point cloud coordinate system;
  • the second determination subunit is used to determine candidate obstacles in the target map according to the pixel values in the rendered target map.
  • the rendering subunit can be specifically used to: convert the three-dimensional coordinates corresponding to the three-dimensional laser point based on the conversion relationship between the map coordinate system and the point cloud coordinate system to obtain the map coordinates of the three-dimensional laser point in the target map; and render the reflection value corresponding to the three-dimensional laser point into the target map according to those map coordinates.
  • the acquisition module 203 is used to acquire characteristic images of candidate obstacles.
  • the second determination module 204 is used to determine the target obstacle among the candidate obstacles based on the characteristic image.
  • the second determination module 204 may specifically use a convolutional neural network to determine the target obstacle among the candidate obstacles.
  • the second determination module 204 may specifically be used to: input the feature image into a preset image classification network to obtain the classification label of the candidate obstacle, and determine the candidate obstacle whose classification label is the target label as the target obstacle.
  • the dividing module 205 is used to divide the operating area and the non-operating area in the target map according to the target obstacles.
  • the dividing module 205 can obtain the outline information of the target obstacle and divide the operating area and the non-operating area in the target map. That is, optionally, in some embodiments, the dividing module 205 may specifically be used to: obtain the outline information of the target obstacle in the target map; output an isolation curve surrounding the target obstacle based on the outline information and the position of the target obstacle in the target map; and determine the area within the isolation curve as the non-operating area and the area outside the non-operating area as the operating area.
  • the operation map construction device of the present application may further include a display module 206.
  • the display module 206 may be used to: highlight the operation area using a first color, and using a second color to highlight non-working areas.
  • the first determination module 202 determines candidate obstacles in the target map based on the laser point cloud data, and then the acquisition module 203 obtains the characteristic images of the candidate obstacles.
  • the second determination module 204 determines the target obstacle among the candidate obstacles based on the characteristic image.
  • the dividing module 205 divides the operation area and the non-operation area in the target map according to the target obstacle.
  • laser point cloud data is used to determine candidate obstacles in the target map, and the target obstacle is determined based on the feature images of the candidate obstacles.
  • the target map is then divided into an operating area and a non-operating area according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions that arise when operating and non-operating areas are divided manually can be avoided, thereby improving the efficiency of operation map construction.
  • in addition, an embodiment of the present application also provides a lawn mowing robot; FIG. 3 shows a schematic structural diagram of the lawn mowing robot involved in the embodiment of the present application. Specifically:
  • the lawn mowing robot may include a control module 301, a traveling mechanism 302, a cutting module 303, a power supply 304 and other components.
  • a control module 301 may control the traveling mechanism 302
  • the control module 301 may also control the cutting module 303.
  • a power supply 304 may supply power to the lawn mowing robot.
  • those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device, which may include more or fewer components than shown in the figure, combine certain components, or use a different arrangement of components. Specifically:
  • the control module 301 is the control center of the lawn mowing robot.
  • the control module 301 may specifically include components such as a central processing unit (CPU), memory, input/output ports, a system bus, timers/counters, a digital-to-analog converter, and an analog-to-digital converter. The CPU performs the various functions of the lawn mowing robot and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory. Preferably, the CPU may integrate an application processor and a modem processor, where the application processor mainly handles the operating system and application programs, and the modem processor mainly handles wireless communications. It is understandable that the above modem processor may also not be integrated into the CPU.
  • the memory can be used to store software programs and modules, and the CPU executes various functional applications and data processing by running the software programs and modules stored in the memory.
  • the memory may mainly include a storage program area and a storage data area.
  • the storage program area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), etc.;
  • the storage data area may store data created according to the use of the electronic device, etc.
  • the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
  • the memory may also include a memory controller to provide the CPU with access to the memory.
  • the traveling mechanism 302 is electrically connected to the control module 301, and is used to respond to the control signal transmitted by the control module 301, adjust the traveling speed and direction of the lawn mower robot, and realize the self-moving function of the lawn mower robot.
  • the cutting module 303 is electrically connected to the control module 301, and is used to respond to the control signal transmitted by the control module, adjust the height and rotation speed of the cutting blade, and implement lawn mowing operations.
  • the power supply 304 can be logically connected to the control module 301 through a power management system, so that functions such as charging, discharging, and power consumption management can be implemented through the power management system.
  • the power supply 304 may also include any components such as one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
  • the lawn mowing robot may also include a communication module, a sensor module, a prompt module, etc., which will not be described again here.
  • the communication module is used to receive and send signals in the process of sending and receiving information. By establishing a communication connection with the user equipment, base station or server, it realizes signal sending and receiving with the user equipment, base station or server.
  • the sensor module is used to collect internal environmental information or external environmental information, and feeds the collected environmental data to the control module for decision-making, realizing the precise positioning and intelligent obstacle avoidance functions of the lawn mowing robot.
  • the sensors may include: ultrasonic sensors, infrared sensors, collision sensors, rain sensors, lidar sensors, inertial measurement units, wheel speedometers, image sensors, position sensors and other sensors, without limitation.
  • the prompt module is used to prompt the user about the current working status of the lawn mower robot.
  • the prompt module includes but is not limited to indicator lights, buzzers, etc.
  • the lawn mowing robot can use indicator lights to prompt the user about the current power supply status, motor working status, sensor working status, etc.
  • a buzzer can be used to provide an alarm.
  • specifically, in this embodiment, the processor in the control module 301 loads the executable files corresponding to the processes of one or more application programs into the memory according to the following instructions, and runs the application programs stored in the memory to implement various functions, as follows:
  • collect laser point cloud data in the environment corresponding to the target map; determine candidate obstacles in the target map based on the laser point cloud data; obtain feature images of the candidate obstacles and determine the target obstacle among the candidate obstacles based on the feature images; and divide the target map into an operating area and a non-operating area according to the target obstacle.
  • in the mowing solution provided by this application, laser point cloud data is used to determine candidate obstacles in the target map, and the target obstacle is determined based on the feature images of the candidate obstacles; finally, the target map is divided into an operating area and a non-operating area according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions caused by manually dividing operating and non-operating areas can be avoided, thereby improving the efficiency of operation map construction.
  • in addition, because the lidar scans the working environment in a unified manner, the positions of all obstacles can be determined at once without delineating obstacles one by one, avoiding the slow overall delineation and low mapping efficiency caused by large obstacles.
  • embodiments of the present application also provide a storage medium in which multiple instructions are stored; the instructions can be loaded by a processor to execute the steps in any of the operation map construction methods provided by the embodiments of the present application.
  • for example, the instructions can perform the following steps:
  • collect laser point cloud data in the environment corresponding to the target map; determine candidate obstacles in the target map based on the laser point cloud data; obtain the feature image of each candidate obstacle and determine the target obstacle among the candidate obstacles based on the feature image; and divide the target map into an operating area and a non-operating area according to the target obstacle.
  • the storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Harvester Elements (AREA)
  • Guiding Agricultural Machines (AREA)

Abstract

The operation map construction method disclosed in the embodiments of this application can collect laser point cloud data in the environment corresponding to a target map, determine candidate obstacles in the target map based on the laser point cloud data, obtain feature images of the candidate obstacles, determine a target obstacle among the candidate obstacles based on the feature images, and divide the target map into an operating area and a non-operating area according to the target obstacle, so as to improve the efficiency of operation map construction.

Description

Operation map construction method and apparatus, lawn mowing robot, and storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on July 8, 2022, with application number CN202210806394.4 and entitled "Operation map construction method and apparatus, lawn mowing robot, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an operation map construction method and apparatus, a lawn mowing robot, and a storage medium.
Background
Lawn mowing robots are widely used to maintain home garden lawns and trim large lawns. A lawn mowing robot integrates technologies such as motion control, multi-sensor fusion, and path planning. To control a lawn mowing robot to perform mowing operations, its mowing path must be planned so that it can completely cover all operating areas.
When a lawn mowing robot is to mow a new environment, workers need to survey the site in real time and transmit the data to the robot to build an electronic map for the robot to use. For every different lawn, the measurement and input must be repeated; that is, current operation map construction is inefficient.
Summary
Embodiments of this application provide an operation map construction method and apparatus, a lawn mowing robot, and a storage medium, which can improve the efficiency of operation map construction.
In a first aspect, an embodiment of this application provides an operation map construction method, including:
obtaining a preset mowing area;
collecting laser point cloud data in the environment corresponding to a target map;
determining candidate obstacles in the target map based on the laser point cloud data;
obtaining feature images of the candidate obstacles, and determining a target obstacle among the candidate obstacles based on the feature images; and
dividing the target map into an operating area and a non-operating area according to the target obstacle.
Optionally, in some embodiments, determining candidate obstacles in the target map based on the laser point cloud data includes:
obtaining the map coordinate system of the target map;
extracting the reflectance value and three-dimensional coordinates corresponding to each three-dimensional laser point from the laser point cloud data; and
determining candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the three-dimensional laser points.
Optionally, in some embodiments, determining candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the three-dimensional laser points includes:
determining the point cloud coordinate system corresponding to the laser point cloud data;
rendering the reflectance values of the three-dimensional laser points into the target map based on the three-dimensional coordinates of the laser points and the transformation relationship between the map coordinate system and the point cloud coordinate system; and
determining candidate obstacles in the target map based on the pixel values in the rendered target map.
Optionally, in some embodiments, rendering the reflectance values of the three-dimensional laser points into the target map based on the three-dimensional coordinates of the laser points and the transformation relationship between the map coordinate system and the point cloud coordinate system includes:
transforming the three-dimensional coordinates of the laser points based on the transformation relationship between the map coordinate system and the point cloud coordinate system to obtain the map coordinates of the laser points in the target map; and
rendering the reflectance values of the laser points into the target map according to their map coordinates in the target map.
Optionally, in some embodiments, determining a target obstacle among the candidate obstacles based on the feature images includes:
inputting the feature images into a preset image classification network to obtain the classification labels of the candidate obstacles; and
determining the candidate obstacles whose classification label is the target label as target obstacles.
Optionally, in some embodiments, dividing the target map into an operating area and a non-operating area according to the target obstacle includes:
obtaining at least the outline information of the target obstacle below a preset height;
outputting an isolation curve surrounding the target obstacle according to the outline information and the position of the target obstacle in the target map; and
determining the area within the isolation curve as the non-operating area, and determining the area other than the non-operating area as the operating area.
Optionally, in some embodiments, after dividing the target map into an operating area and a non-operating area according to the target obstacle, the method further includes:
highlighting the operating area in a first color; and
highlighting the non-operating area in a second color.
In a second aspect, an embodiment of this application provides an operation map construction apparatus, including:
a collection module, configured to collect laser point cloud data of a target map;
a first determination module, configured to determine candidate obstacles in the target map based on the laser point cloud data;
an acquisition module, configured to obtain feature images of the candidate obstacles;
a second determination module, configured to determine a target obstacle among the candidate obstacles based on the feature images; and
a dividing module, configured to divide the target map into an operating area and a non-operating area according to the target obstacle.
After collecting the laser point cloud data in the environment corresponding to the target map, the embodiments of this application determine candidate obstacles in the target map based on the laser point cloud data, then obtain feature images of the candidate obstacles and determine a target obstacle among the candidate obstacles based on the feature images, and finally divide the target map into an operating area and a non-operating area according to the target obstacle. In the mowing solution provided by this application, laser point cloud data is used to determine candidate obstacles in the target map, the target obstacle is determined from the feature images of the candidate obstacles, and the target map is then divided into operating and non-operating areas according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions that arise when operating and non-operating areas are divided manually can be avoided, thereby improving the efficiency of operation map construction.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1a is a schematic scene diagram of the operation map construction method provided by an embodiment of this application;
FIG. 1b is a schematic flowchart of the operation map construction method provided by an embodiment of this application;
FIG. 2a is a schematic structural diagram of the operation map construction apparatus provided by an embodiment of this application;
FIG. 2b is another schematic structural diagram of the operation map construction apparatus provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of the electronic device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by those skilled in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
It should be noted that when an element is described as being "fixed to" or "arranged on" another element, it may be directly or indirectly on that other element. When an element is described as being "connected to" another element, it may be directly or indirectly connected to that other element. In addition, a connection may serve either a fixing function or an electrical-connection function.
It should be understood that orientation or position terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the embodiments of the invention, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the invention.
In addition, the terms "first" and "second" are used only for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of this application, "multiple" means two or more, unless explicitly and specifically defined otherwise.
Embodiments of this application provide an operation map construction method and apparatus, a lawn mowing robot, and a storage medium.
The operation map construction apparatus may be integrated in the microcontroller unit (MCU) of a lawn mowing robot, or in a smart terminal or server. An MCU, also known as a single-chip microcomputer, appropriately reduces the frequency and specifications of a central processing unit (CPU) and integrates peripheral interfaces such as memory, timers, USB, analog-to-digital/digital-to-analog conversion, UART, PLC, and DMA to form a chip-level computer that provides different combined controls for different application scenarios. A lawn mowing robot can move automatically, avoid collisions, automatically return to charge within its range, perform safety detection and battery level detection, and has a certain climbing ability. It is especially suitable for lawn trimming and maintenance in home gardens, public green spaces, and the like, featuring automatic mowing, grass clipping cleanup, automatic rain avoidance, automatic charging, automatic obstacle avoidance, a compact shape, electronic virtual fences, network control, and so on.
The terminal may be, but is not limited to, a smartphone, tablet computer, laptop, desktop computer, smart speaker, smart watch, or the like. The terminal and the server may be connected directly or indirectly by wired or wireless communication. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms; this application imposes no limitation here.
For example, referring to FIG. 1a, this application provides a mowing system including a lawn mowing robot 10, a server 20, and a user device 30 with communication connections established among them. A lidar is installed on the lawn mowing robot 10, through which the robot can collect laser point cloud data in the environment corresponding to the target map. The robot 10 can then determine candidate obstacles in the target map based on the laser point cloud data, where the target map may be an environment map or an electronic map. Subsequently, a camera on the robot 10 obtains feature images of the candidate obstacles, and a target obstacle is determined among the candidates based on the feature images. Finally, the robot 10 divides the target map into an operating area and a non-operating area according to the target obstacle; after the division, the robot 10 can synchronize the data of the operating and non-operating areas to the server 20 and the user device 30 to facilitate subsequent monitoring of the mowing operation. In the mowing solution provided by this application, laser point cloud data is used to determine candidate obstacles in the target map, the target obstacle is determined from the feature images of the candidate obstacles, and the target map is then divided into operating and non-operating areas according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions that arise when operating and non-operating areas are divided manually can be avoided. In addition, because the lidar scans the working environment in a unified manner, the positions of all obstacles can be determined at once without delineating them one by one, avoiding the slow overall delineation and low mapping efficiency caused by large obstacles. This solution therefore improves the efficiency of operation map construction.
Detailed descriptions are given below. It should be noted that the description order of the following embodiments does not limit their order of preference.
An operation map construction method includes: collecting laser point cloud data in the environment corresponding to a target map; determining candidate obstacles in the target map based on the laser point cloud data; obtaining feature images of the candidate obstacles and determining a target obstacle among the candidate obstacles based on the feature images; and dividing the target map into an operating area and a non-operating area according to the target obstacle.
Referring to FIG. 1b, FIG. 1b is a schematic flowchart of the operation map construction method provided by an embodiment of this application. The specific flow of the method may be as follows:
101. Collect laser point cloud data in the environment corresponding to the target map.
The target map is the mowing map corresponding to the lawn mowing robot. The robot can mow within the area corresponding to this map, which can later be divided into operating and non-operating areas; the target map does not contain buildings such as houses where no operation is possible.
For example, a three-dimensional lidar may be installed on the body of the lawn mowing robot. A three-dimensional lidar is a measuring instrument that instantaneously measures spatial three-dimensional coordinates using the laser ranging principle (including pulsed and phase-based lasers). With the spatial point cloud data obtained by three-dimensional laser scanning, a three-dimensional visualization model of structurally complex, irregular scenes can be built quickly.
In practical applications, the three-dimensional lidar obtains the pose information and three-dimensional point cloud corresponding to each collection point based on a simultaneous localization and mapping (SLAM) method. The lidar may be a movable data collection device, such as a handheld, backpack, or vehicle-mounted device, to enable mobile scanning.
For example, the collection point initially captured by the lidar is taken as the coordinate origin to construct the point cloud coordinate system; "initially captured" here refers to the first frame of the three-dimensional point cloud corresponding to the three-dimensional point cloud map. The collection point may be the position of the lidar's center of gravity or a fixed reference point on the device, as long as it satisfies the requirements for establishing a coordinate system and defining its origin. In one example, in the point cloud coordinate system, the Z axis lies in the vertical scanning plane with upward as positive, the X and Y axes both lie in the horizontal scanning plane, and the three axes are mutually perpendicular, forming a left-handed coordinate system. During mobile scanning, the real-time pose of the lidar and the three-dimensional point cloud at each moment can be obtained in real time via the SLAM method.
102. Determine candidate obstacles in the target map based on the laser point cloud data.
Reflectance intensity is an important characteristic of a laser sensor: it reflects the material properties of objects in the environment. Therefore, the reflectance can be used to determine candidate obstacles in the target map. That is, optionally, in some embodiments, the step "determining candidate obstacles in the target map based on the laser point cloud data" may specifically include:
(11) obtaining the map coordinate system of the target map;
(12) extracting the reflectance value and three-dimensional coordinates corresponding to each three-dimensional laser point from the laser point cloud data;
(13) determining candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the laser points.
For example, a corresponding pixel value can be rendered in the target map according to the reflectance value of each three-dimensional laser point: a laser point with reflectance value a may correspond to a pixel value of 10, and a point with reflectance value b to a pixel value of 45. The mapping can be set according to the actual situation and is not repeated here.
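The reflectance-to-pixel rendering just described can be sketched as follows. The linear scaling and the [0, 1] input range are our illustrative assumptions; the text only gives example value pairs (a corresponds to 10, b to 45) and leaves the exact mapping open:

```python
def reflectance_to_pixel(r, r_min=0.0, r_max=1.0):
    """Linearly scale a laser reflectance value into the 0..255 pixel range.

    The linear scaling and the default [0, 1] reflectance range are
    illustrative assumptions, not the patent's prescribed mapping.
    """
    r = min(max(r, r_min), r_max)  # clamp to the expected range
    return round(255 * (r - r_min) / (r_max - r_min))
```

In practice the mapping could equally be a lookup table per material class; the point is only that each reflectance value lands on a deterministic pixel value.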
The map coordinate system of the target map is two-dimensional, while the coordinates of the three-dimensional laser points also contain height information, so the point cloud coordinate system corresponding to the laser point cloud data must also be obtained before candidate obstacles can be determined in the target map. That is, optionally, in some embodiments, the step "determining candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the laser points" may specifically include:
(21) determining the point cloud coordinate system corresponding to the laser point cloud data;
(22) rendering the reflectance values of the three-dimensional laser points into the target map based on their three-dimensional coordinates and the transformation relationship between the map coordinate system and the point cloud coordinate system;
(23) determining candidate obstacles in the target map based on the pixel values in the rendered target map.
For example, the three-dimensional coordinates of the laser points can be converted into map coordinates in the target map according to the transformation relationship between the map coordinate system and the point cloud coordinate system; then, based on the converted map coordinates, the reflectance values of the laser points are rendered into the target map; finally, candidate obstacles are determined in the target map according to the pixel values in the rendered map.
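The last step of that pipeline, picking candidate-obstacle cells out of the rendered pixel values, can be sketched as a simple threshold scan. The threshold value is our illustrative assumption; the text only says candidates are determined from the rendered pixel values:

```python
def candidate_cells(rendered, threshold=30):
    """Return the (row, col) cells of a rendered map whose pixel value
    meets a threshold, treating them as candidate-obstacle cells.

    The threshold is an illustrative assumption; real use would tune it
    to the reflectance-to-pixel mapping actually chosen.
    """
    return [(i, j)
            for i, row in enumerate(rendered)
            for j, v in enumerate(row)
            if v >= threshold]
```

Neighboring cells above the threshold would then be grouped into one candidate obstacle, e.g. by connected-component labeling.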
Optionally, in some embodiments, the step "rendering the reflectance values of the three-dimensional laser points into the target map based on their three-dimensional coordinates and the transformation relationship between the map coordinate system and the point cloud coordinate system" may specifically include:
(41) transforming the three-dimensional coordinates of the laser points based on the transformation relationship between the map coordinate system and the point cloud coordinate system to obtain the map coordinates of the laser points in the target map;
(42) rendering the reflectance values of the laser points into the target map according to their map coordinates in the target map.
The three-dimensional coordinates of the laser points can be transformed using a preset formula that characterizes the transformation relationship between the three-dimensional and two-dimensional coordinate systems, i.e., between the point cloud coordinate system and the map coordinate system. It should be noted that, in the field of image processing, coordinate system transformation converts the three-dimensional world coordinate system of space into the two-dimensional pixel coordinate system of image processing. Commonly used coordinate systems include the world coordinate system, the camera coordinate system, and the image coordinate system. The world coordinate system (xw, yw, zw), also called the measurement coordinate system, is a three-dimensional rectangular coordinate system used as a reference to describe the spatial positions of the camera and the object to be measured. The camera coordinate system (xc, yc, zc) is also a three-dimensional rectangular coordinate system, with its origin at the optical center of the lens; the xc and yc axes are parallel to the two sides of the image plane, and the zc axis is the optical axis of the lens, perpendicular to the image plane. The image coordinate system (x, y) is a two-dimensional rectangular coordinate system on the image plane; its origin is the intersection of the lens optical axis and the image plane (also called the principal point), its x axis is parallel to the xc axis of the camera coordinate system, and its y axis is parallel to the yc axis of the camera coordinate system.
Specifically, this coordinate transformation relationship can be determined from the extrinsic parameters between the lidar device and the image collection device (i.e., the device that captures the target map), together with the intrinsic parameters of the image collection device. Extrinsic parameters are the parameters of the image collection device in the world coordinate system, such as its position and rotation; intrinsic parameters are parameters related to the device's own characteristics, such as its focal length and pixel size.
For example, a three-dimensional point cloud coordinate pij can be converted into a two-dimensional map coordinate p'i'j' as follows:
x' = (x / z) * fx + cx
y' = (y / z) * fy + cy
where fx and fy are the focal lengths of the image collection device and cx and cy are its principal point; for the point cloud coordinate pij(x, y, z), the corresponding two-dimensional map coordinate is p'i'j'(x', y').
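The projection above can be written directly in code. The assumption here is that the point has already been transformed into the camera frame via the extrinsic parameters, so only the intrinsic projection (fx, fy, cx, cy, as named in the text) remains:

```python
def project_point(x, y, z, fx, fy, cx, cy):
    """Project a 3D laser point (already expressed in the camera frame)
    to 2D map coordinates using the pinhole model from the text:

        x' = (x / z) * fx + cx
        y' = (y / z) * fy + cy
    """
    if z <= 0:
        raise ValueError("point must lie in front of the sensor (z > 0)")
    return (x / z) * fx + cx, (y / z) * fy + cy
```

For example, with fx = fy = 100 and principal point (50, 60), the point (1, 2, 2) projects to (100.0, 160.0).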
103. Obtain feature images of the candidate obstacles, and determine a target obstacle among the candidate obstacles based on the feature images.
It should be noted that because the candidate obstacles are determined from laser point cloud data, i.e., obstacle detection is based on the reflection of lidar signals, and a lawn mowing robot generally works outdoors, any partially translucent matter between the scanner and the measured object, such as the rain, snow, or dust common in outdoor environments, will reflect part of the laser energy back early; as long as the trigger threshold is reached, it will be taken as the measured object, causing measurement errors. Candidate obstacles determined from the laser point cloud data may therefore not be real obstacles. Hence, in this application, vision technology and point cloud technology are combined to locate and identify obstacles.
For example, an image of each candidate obstacle can be captured and feature extraction performed on it to obtain the candidate obstacle's feature image. Specifically, a convolutional neural network (CNN) can be used to extract features from the captured images; further, the CNN can also be used to determine the target obstacle among the candidates. That is, optionally, in some embodiments, the step "determining a target obstacle among the candidate obstacles based on the feature images" may specifically include:
(51) inputting the feature images into a preset image classification network to obtain the classification labels of the candidate obstacles;
(52) determining the candidate obstacles whose classification label is the target label as target obstacles.
The image classification network may be trained in advance, and may specifically include:
Convolutional layers: mainly used to extract features from input images (such as training samples or images to be recognized). The kernel sizes and the number of kernels can be set according to the actual application; for example, the kernel sizes of the first to fourth convolutional layers may be (7,7), (5,5), (3,3), and (3,3) in turn. Optionally, to reduce computational complexity and improve efficiency, in this embodiment the kernel sizes of all four convolutional layers may be set to (3,3), the activation functions all use ReLU (rectified linear unit), and the padding (the space between the element border and the element content) is set to "same", which can be simply understood as padding the edges with zeros, with the number of zeros padded on the left (top) equal to, or one fewer than, the number padded on the right (bottom). Optionally, the convolutional layers may be directly connected to one another to speed up network convergence; to further reduce the amount of computation, a downsampling (pooling) operation may be performed in all of the second to fourth convolutional layers or in any one or two of them. The downsampling operation is basically the same as convolution, except that its kernel takes only the maximum value (max pooling) or average value (average pooling) at the corresponding positions. For convenience of description, in the embodiments of the present invention, performing downsampling in the second and third convolutional layers, with the downsampling specifically being max pooling, is taken as an example.
It should be noted that, for convenience of description, in the embodiments of this application the layer containing the activation function and the downsampling layer (also called the pooling layer) are both counted as part of the convolutional layers. It should be understood that the structure may also be considered to include convolutional layers, activation-function layers, downsampling (i.e., pooling) layers, and fully connected layers; of course, it may also include an input layer for input data and an output layer for output data, which are not repeated here.
Fully connected layer: maps the learned features to the sample label space, acting mainly as a "classifier" in the whole convolutional neural network. Each node of the fully connected layer is connected to all nodes output by the previous layer (such as the downsampling layer among the convolutional layers); one node of a fully connected layer is called a neuron of that layer, and the number of neurons can be set according to the needs of the actual application. For example, in this text detection model, the number of neurons in the fully connected layers may all be set to 512, or all to 128, and so on. Similar to the convolutional layers, optionally, a non-linear factor can be introduced in the fully connected layer by adding an activation function, for example the sigmoid function.
Specifically, the image classification network can be used to recognize each feature image and obtain the probability that the candidate obstacle belongs to each obstacle class; the corresponding classification label is output based on this probability, and finally the candidate obstacles whose classification label is the target label are determined as target obstacles. For example, a candidate obstacle whose classification label is "flower bed" is determined to be a target obstacle.
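The final selection step, keeping only candidates whose predicted label is a target label, can be sketched as a filter over the classifier's output. The label names and the stub classifier below are placeholders; the patent does not fix a particular network or label set:

```python
TARGET_LABELS = {"flower_bed"}  # hypothetical set of target labels

def select_target_obstacles(candidates, classify):
    """Keep the candidate obstacles whose predicted classification label
    is a target label.

    `classify` stands in for the preset image classification network:
    any callable mapping a feature image to a label will do.
    """
    return [c for c in candidates
            if classify(c["feature_image"]) in TARGET_LABELS]

# Toy usage with a stub classifier (real use would call the trained CNN):
candidates = [{"id": 1, "feature_image": "img1"},
              {"id": 2, "feature_image": "img2"}]
stub_labels = {"img1": "flower_bed", "img2": "rain_noise"}
targets = select_target_obstacles(candidates, stub_labels.get)
```

Here the second candidate, classified as lidar noise from rain rather than a real obstacle, is dropped, which is exactly the false-positive filtering the vision step is meant to provide.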
104. Divide the target map into an operating area and a non-operating area according to the target obstacle.
Specifically, the outline information of the target obstacle can be obtained and the target map divided into an operating area and a non-operating area, for example by outputting a curve that encloses the target obstacle. After the non-operating area is determined, the operating area is divided in the target map based on the preset operation boundary and the non-operating area. That is, optionally, in some embodiments, the step "dividing the target map into an operating area and a non-operating area according to the target obstacle" may specifically include:
(61) obtaining at least the outline information of the target obstacle below a preset height;
(62) outputting an isolation curve surrounding the target obstacle according to the outline information and the position of the target obstacle in the target map;
(63) determining the area within the isolation curve as the non-operating area, and the area other than the non-operating area as the operating area.
Optionally, the preset height may be set slightly higher than the height of the lawn mowing robot; for example, if the robot is 30 cm tall, the preset height may be set to 35 cm, ensuring that the robot will not be blocked by obstacles and have its mowing operation interrupted while mowing.
It should be noted that when there are multiple target obstacles in the target map, the distance between adjacent target obstacles can be calculated, and target obstacles whose distance is less than a threshold can be grouped into the same non-operating area. The threshold can be set according to the size of the lawn mowing robot, which avoids delineating operating areas too small for the robot to work in, and thus prevents such areas from affecting the subsequent mowing workflow, thereby improving subsequent mowing efficiency.
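The merging rule just described, grouping target obstacles whose pairwise distance falls below a threshold into one non-operating area, can be sketched with a simple union-find over obstacle positions. The union-find approach and the concrete threshold are our assumptions; the text only states the distance criterion tied to the robot's size:

```python
import math

def merge_obstacles(positions, threshold):
    """Group obstacle positions (2D points) into clusters so that any two
    obstacles closer than `threshold` end up in the same non-operating
    area. Implemented as a union-find over all pairs."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(positions)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

With a threshold of roughly the robot's footprint, two obstacles 1 m apart merge into one non-operating area while a distant obstacle stays in its own.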
Further, in some embodiments, the operating and non-operating areas can be distinguished by different colors. That is, optionally, after the step "dividing the target map into an operating area and a non-operating area according to the target obstacle", the method may specifically further include:
highlighting the operating area in a first color, and highlighting the non-operating area in a second color.
For example, the first color may be yellow and the second color red; the specific choice can be made according to the actual situation and is not repeated here.
In the embodiments of this application, after the laser point cloud data in the environment corresponding to the target map is collected, candidate obstacles are determined in the target map based on the laser point cloud data; then feature images of the candidate obstacles are obtained and a target obstacle is determined among the candidates based on the feature images; finally, the target map is divided into an operating area and a non-operating area according to the target obstacle. In the mowing solution provided by this application, laser point cloud data is used to determine candidate obstacles in the target map, the target obstacle is determined from the feature images of the candidate obstacles, and the target map is then divided into operating and non-operating areas according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions that arise when operating and non-operating areas are divided manually can be avoided. In addition, because the lidar scans the working environment in a unified manner, the positions of all obstacles can be determined at once without delineating them one by one, avoiding the slow overall delineation and low mapping efficiency caused by large obstacles. This solution therefore improves the efficiency of operation map construction.
Referring to FIG. 2a, FIG. 2a is a schematic structural diagram of the operation map construction apparatus provided by an embodiment of this application. The apparatus may include a collection module 201, a first determination module 202, an acquisition module 203, a second determination module 204, and a dividing module 205, specifically as follows:
The collection module 201 is configured to collect laser point cloud data in the environment corresponding to the target map.
For example, the collection module 201 can use the spatial point cloud data obtained by three-dimensional laser scanning technology to quickly build a three-dimensional visualization model of structurally complex, irregular scenes.
The first determination module 202 is configured to determine candidate obstacles in the target map based on the laser point cloud data.
Optionally, in some embodiments, the first determination module 202 may specifically include:
an obtaining unit, configured to obtain the map coordinate system of the target map;
an extraction unit, configured to extract the reflectance value and three-dimensional coordinates corresponding to each three-dimensional laser point from the laser point cloud data;
a determination unit, configured to determine candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the laser points.
Optionally, in some embodiments, the determination unit may specifically include:
a first determination subunit, configured to determine the point cloud coordinate system corresponding to the laser point cloud data;
a rendering subunit, configured to render the reflectance values of the laser points into the target map based on their three-dimensional coordinates and the transformation relationship between the map coordinate system and the point cloud coordinate system;
a second determination subunit, configured to determine candidate obstacles in the target map based on the pixel values in the rendered target map.
Optionally, in some embodiments, the rendering subunit may specifically be configured to: transform the three-dimensional coordinates of the laser points based on the transformation relationship between the map coordinate system and the point cloud coordinate system to obtain the map coordinates of the laser points in the target map; and render the reflectance values of the laser points into the target map according to those map coordinates.
The acquisition module 203 is configured to obtain feature images of the candidate obstacles.
The second determination module 204 is configured to determine a target obstacle among the candidate obstacles based on the feature images.
The second determination module 204 may specifically use a convolutional neural network to determine the target obstacle among the candidates. Optionally, in some embodiments, the second determination module 204 may specifically be configured to: input the feature images into a preset image classification network to obtain the classification labels of the candidate obstacles; and determine the candidate obstacles whose classification label is the target label as target obstacles.
The dividing module 205 is configured to divide the target map into an operating area and a non-operating area according to the target obstacle.
Specifically, the dividing module 205 can obtain the outline information of the target obstacle and divide the target map into an operating area and a non-operating area. That is, optionally, in some embodiments, the dividing module 205 may specifically be configured to: obtain at least the outline information of the target obstacle in the target map; output an isolation curve surrounding the target obstacle according to the outline information and the position of the target obstacle in the target map; and determine the area within the isolation curve as the non-operating area and the area other than the non-operating area as the operating area.
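One plausible way to realize the isolation curve, a closed boundary enclosing the obstacle's outline, is the convex hull of the outline points. This is an illustrative assumption: the patent does not prescribe how the curve is constructed, only that it surrounds the target obstacle:

```python
def isolation_curve(outline):
    """Return a closed isolation boundary around an obstacle outline as
    the convex hull of its 2D points (Andrew's monotone chain).

    This is one illustrative construction; any closed curve enclosing
    the outline would satisfy the description in the text.
    """
    pts = sorted(set(outline))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise hull, no repeat
```

The area inside the returned polygon would then be marked as the non-operating area; a safety margin could be added by offsetting the hull outward by the robot's radius.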
Optionally, in some embodiments, referring to FIG. 2b, the operation map construction apparatus of this application may specifically further include a display module 206, which may specifically be configured to: highlight the operating area in a first color, and highlight the non-operating area in a second color.
In the embodiments of this application, after the collection module 201 collects the laser point cloud data in the environment corresponding to the target map, the first determination module 202 determines candidate obstacles in the target map based on the laser point cloud data; the acquisition module 203 then obtains feature images of the candidate obstacles, and the second determination module 204 determines a target obstacle among the candidates based on the feature images; finally, the dividing module 205 divides the target map into an operating area and a non-operating area according to the target obstacle. In the mowing solution provided by this application, laser point cloud data is used to determine candidate obstacles in the target map, the target obstacle is determined from the feature images of the candidate obstacles, and the target map is then divided into operating and non-operating areas according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions that arise when operating and non-operating areas are divided manually can be avoided, thereby improving the efficiency of operation map construction.
In addition, an embodiment of this application also provides a lawn mowing robot. FIG. 3 shows a schematic structural diagram of the lawn mowing robot involved in the embodiment of this application. Specifically:
The lawn mowing robot may include components such as a control module 301, a traveling mechanism 302, a cutting module 303, and a power supply 304. Those skilled in the art will understand that the electronic device structure shown in FIG. 3 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Specifically:
The control module 301 is the control center of the lawn mowing robot. It may specifically include components such as a central processing unit (CPU), memory, input/output ports, a system bus, timers/counters, a digital-to-analog converter, and an analog-to-digital converter. The CPU performs the various functions of the robot and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory. Preferably, the CPU may integrate an application processor and a modem processor, where the application processor mainly handles the operating system and application programs, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the CPU.
The memory may be used to store software programs and modules; the CPU executes various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a storage program area and a storage data area, where the storage program area may store the operating system and the application programs required for at least one function (such as a sound playback function or an image playback function), and the storage data area may store data created according to the use of the electronic device. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the CPU with access to the memory.
The traveling mechanism 302 is electrically connected to the control module 301 and is configured to respond to control signals transmitted by the control module 301 to adjust the traveling speed and direction of the lawn mowing robot, implementing its self-moving function.
The cutting module 303 is electrically connected to the control module 301 and is configured to respond to control signals transmitted by the control module to adjust the height and rotation speed of the cutting blade disc, implementing the mowing operation.
The power supply 304 may be logically connected to the control module 301 through a power management system, implementing functions such as charging, discharging, and power consumption management through the power management system. The power supply 304 may also include any components such as one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the lawn mowing robot may also include a communication module, a sensor module, a prompt module, and the like, which are not repeated here.
The communication module is used for receiving and sending signals in the process of receiving and sending information; by establishing communication connections with user equipment, base stations, or servers, it implements signal exchange with the user equipment, base stations, or servers.
The sensor module is used to collect internal or external environment information and feed the collected environment data back to the control module for decision-making, implementing the robot's precise positioning and intelligent obstacle avoidance. Optionally, the sensors may include, without limitation: ultrasonic sensors, infrared sensors, collision sensors, rain sensors, lidar sensors, inertial measurement units, wheel speedometers, image sensors, position sensors, and other sensors.
The prompt module is used to prompt the user about the current working status of the lawn mowing robot. In this solution, the prompt module includes but is not limited to indicator lights, buzzers, and the like. For example, the robot may use indicator lights to prompt the user about the current power supply status, motor working status, sensor working status, and so on. As another example, when the robot is detected to be malfunctioning or stolen, a buzzer may be used to raise an alarm.
Specifically, in this embodiment, the processor in the control module 301 loads the executable files corresponding to the processes of one or more application programs into the memory according to the following instructions, and runs the application programs stored in the memory, implementing various functions as follows:
collecting laser point cloud data in the environment corresponding to the target map; determining candidate obstacles in the target map based on the laser point cloud data; obtaining feature images of the candidate obstacles and determining the target obstacle among the candidates based on the feature images; and dividing the target map into an operating area and a non-operating area according to the target obstacle.
For the specific implementation of the above operations, refer to the foregoing embodiments, which are not repeated here.
In the embodiments of this application, after the laser point cloud data in the environment corresponding to the target map is collected, candidate obstacles are determined in the target map based on the laser point cloud data; then feature images of the candidate obstacles are obtained and a target obstacle is determined among the candidates based on the feature images; finally, the target map is divided into an operating area and a non-operating area according to the target obstacle. In the mowing solution provided by this application, laser point cloud data is used to determine candidate obstacles in the target map, the target obstacle is determined from the feature images of the candidate obstacles, and the target map is then divided into operating and non-operating areas according to the target obstacle. That is, by combining image vision technology with laser point cloud technology, the missed or erroneous divisions that arise when operating and non-operating areas are divided manually can be avoided, thereby improving the efficiency of operation map construction. In addition, because the lidar scans the working environment in a unified manner, the positions of all obstacles can be determined at once without delineating them one by one, avoiding the slow overall delineation and low mapping efficiency caused by large obstacles.
Those of ordinary skill in the art can understand that all or some of the steps of the methods in the above embodiments can be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of this application provides a storage medium storing multiple instructions that can be loaded by a processor to perform the steps in any of the operation map construction methods provided by the embodiments of this application. For example, the instructions can perform the following steps:
collecting laser point cloud data in the environment corresponding to the target map; determining candidate obstacles in the target map based on the laser point cloud data; obtaining feature images of the candidate obstacles and determining the target obstacle among the candidates based on the feature images; and dividing the target map into an operating area and a non-operating area according to the target obstacle.
For the specific implementation of the above operations, refer to the foregoing embodiments, which are not repeated here.
The storage medium may include: read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, or the like.
Because the instructions stored in the storage medium can perform the steps in any of the operation map construction methods provided by the embodiments of this application, they can achieve the beneficial effects achievable by any of those methods; see the foregoing embodiments for details, which are not repeated here.
The operation map construction method and apparatus, lawn mowing robot, and storage medium provided by the embodiments of this application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the descriptions of the above embodiments are only intended to help understand the method and core ideas of this application. Meanwhile, those skilled in the art will make changes to the specific implementations and the application scope according to the ideas of this application. In summary, the content of this specification should not be understood as limiting this application.

Claims (10)

  1. An operation map construction method, comprising:
    collecting laser point cloud data in an environment corresponding to a target map;
    determining candidate obstacles in the target map based on the laser point cloud data;
    obtaining feature images of the candidate obstacles, and determining a target obstacle among the candidate obstacles based on the feature images; and
    dividing the target map into an operating area and a non-operating area according to the target obstacle.
  2. The method according to claim 1, wherein determining candidate obstacles in the target map based on the laser point cloud data comprises:
    obtaining a map coordinate system of the target map;
    extracting a reflectance value and three-dimensional coordinates corresponding to each three-dimensional laser point from the laser point cloud data; and
    determining candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the three-dimensional laser points.
  3. The method according to claim 2, wherein determining candidate obstacles in the target map based on the map coordinate system and the reflectance values and three-dimensional coordinates of the three-dimensional laser points comprises:
    determining a point cloud coordinate system corresponding to the laser point cloud data;
    rendering the reflectance values of the three-dimensional laser points into the target map based on the three-dimensional coordinates of the laser points and a transformation relationship between the map coordinate system and the point cloud coordinate system; and
    determining candidate obstacles in the target map based on pixel values in the rendered target map.
  4. The method according to claim 3, wherein rendering the reflectance values of the three-dimensional laser points into the target map based on the three-dimensional coordinates of the laser points and the transformation relationship between the map coordinate system and the point cloud coordinate system comprises:
    transforming the three-dimensional coordinates of the laser points based on the transformation relationship between the map coordinate system and the point cloud coordinate system to obtain map coordinates of the laser points in the target map; and
    rendering the reflectance values of the laser points into the target map according to the map coordinates of the laser points in the target map.
  5. The method according to any one of claims 1 to 4, wherein determining a target obstacle among the candidate obstacles based on the feature images comprises:
    inputting the feature images into a preset image classification network to obtain classification labels of the candidate obstacles; and
    determining the candidate obstacles whose classification label is a target label as target obstacles.
  6. The method according to any one of claims 1 to 4, wherein dividing the target map into an operating area and a non-operating area according to the target obstacle comprises:
    obtaining at least outline information of the target obstacle below a preset height;
    outputting an isolation curve surrounding the target obstacle according to the outline information and a position of the target obstacle in the target map; and
    determining an area within the isolation curve as the non-operating area, and determining an area other than the non-operating area as the operating area.
  7. The method according to claim 1, further comprising, after dividing the target map into an operating area and a non-operating area according to the target obstacle:
    highlighting the operating area in a first color; and
    highlighting the non-operating area in a second color.
  8. An operation map construction apparatus, comprising:
    a collection module, configured to collect laser point cloud data in an environment corresponding to a target map;
    a first determination module, configured to determine candidate obstacles in the target map based on the laser point cloud data;
    an acquisition module, configured to obtain feature images of the candidate obstacles;
    a second determination module, configured to determine a target obstacle among the candidate obstacles based on the feature images; and
    a dividing module, configured to divide the target map into an operating area and a non-operating area according to the target obstacle.
  9. A lawn mowing robot, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the operation map construction method according to any one of claims 1 to 7.
  10. A storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the operation map construction method according to any one of claims 1 to 7.
PCT/CN2023/105187 2022-07-08 2023-06-30 作业地图构建方法、装置、割草机器人以及存储介质 WO2024008016A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210806394.4 2022-07-08
CN202210806394.4A CN115235485A (zh) 2022-07-08 2022-07-08 作业地图构建方法、装置、割草机器人以及存储介质

Publications (1)

Publication Number Publication Date
WO2024008016A1 true WO2024008016A1 (zh) 2024-01-11

Family

ID=83671263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105187 WO2024008016A1 (zh) 2022-07-08 2023-06-30 作业地图构建方法、装置、割草机器人以及存储介质

Country Status (2)

Country Link
CN (1) CN115235485A (zh)
WO (1) WO2024008016A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115235485A (zh) * 2022-07-08 2022-10-25 松灵机器人(深圳)有限公司 作业地图构建方法、装置、割草机器人以及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097444A (zh) * 2016-05-30 2016-11-09 百度在线网络技术(北京)有限公司 高精地图生成方法和装置
CN106447697A (zh) * 2016-10-09 2017-02-22 湖南穗富眼电子科技有限公司 一种基于动平台的特定动目标快速跟踪方法
CN113079801A (zh) * 2021-04-27 2021-07-09 河南科技大学 基于ros系统的智能割草机器人及激光扫描雷达地图构建方法
KR20210115493A (ko) * 2020-03-13 2021-09-27 한양대학교 산학협력단 운전자의 시선에 기반한 장애물 후보들에 대한 분석 우선순위를 이용하여 장애물을 분석하는 방법 및 시스템
CN115235485A (zh) * 2022-07-08 2022-10-25 松灵机器人(深圳)有限公司 作业地图构建方法、装置、割草机器人以及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658432A (zh) * 2018-12-27 2019-04-19 南京苏美达智能技术有限公司 一种移动机器人的边界生成方法及系统
CN113359692B (zh) * 2020-02-20 2022-11-25 杭州萤石软件有限公司 一种障碍物的避让方法、可移动机器人
CN112101092A (zh) * 2020-07-31 2020-12-18 北京智行者科技有限公司 自动驾驶环境感知方法及系统
CN112419494B (zh) * 2020-10-09 2022-02-22 腾讯科技(深圳)有限公司 用于自动驾驶的障碍物检测、标记方法、设备及存储介质
CN112486184B (zh) * 2020-12-10 2023-08-11 北京小狗吸尘器集团股份有限公司 一种扫地机器人及其避障路径确定方法
CN113115622B (zh) * 2021-03-08 2022-09-30 深圳拓邦股份有限公司 视觉机器人避障控制方法、装置及割草机器人

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097444A (zh) * 2016-05-30 2016-11-09 百度在线网络技术(北京)有限公司 高精地图生成方法和装置
CN106447697A (zh) * 2016-10-09 2017-02-22 湖南穗富眼电子科技有限公司 一种基于动平台的特定动目标快速跟踪方法
KR20210115493A (ko) * 2020-03-13 2021-09-27 한양대학교 산학협력단 운전자의 시선에 기반한 장애물 후보들에 대한 분석 우선순위를 이용하여 장애물을 분석하는 방법 및 시스템
CN113079801A (zh) * 2021-04-27 2021-07-09 河南科技大学 基于ros系统的智能割草机器人及激光扫描雷达地图构建方法
CN115235485A (zh) * 2022-07-08 2022-10-25 松灵机器人(深圳)有限公司 作业地图构建方法、装置、割草机器人以及存储介质

Also Published As

Publication number Publication date
CN115235485A (zh) 2022-10-25

Similar Documents

Publication Publication Date Title
CN109144067B (zh) 一种智能清洁机器人及其路径规划方法
WO2024022337A1 (zh) 障碍物检测方法、装置、割草机器人以及存储介质
CN107247460B (zh) 一种机器蜜蜂的集群控制方法与系统
WO2024008016A1 (zh) 作业地图构建方法、装置、割草机器人以及存储介质
WO2024012192A1 (zh) 智能避障方法、割草机器人以及存储介质
WO2024001880A1 (zh) 智能避障方法、装置、割草机器人以及存储介质
CN113741438A (zh) 路径规划方法、装置、存储介质、芯片及机器人
WO2021143543A1 (zh) 机器人及其控制方法
CN111290403B (zh) 搬运自动导引运输车的运输方法和搬运自动导引运输车
CN108958223A (zh) 一种智慧式共享电脑及办公设备及其共享系统和商业模式
CN110750097A (zh) 一种室内机器人导航系统及建图、定位和移动方法
CN113675923B (zh) 充电方法、充电装置及机器人
WO2024012286A1 (zh) 割草方法、装置、割草机器人以及存储介质
WO2023246802A1 (zh) 割草方法、装置、割草机器人以及存储介质
WO2024002061A1 (zh) 割草方法、装置、割草机器人以及存储介质
US20230297120A1 (en) Method, apparatus, and device for creating map for self-moving device with improved map generation efficiency
WO2024016958A1 (zh) 割草方法、装置、割草机器人以及存储介质
WO2024008018A1 (zh) 割草方法、装置、割草机器人以及存储介质
WO2024017032A1 (zh) 割草机器人回充方法、割草机器人以及存储介质
CN113536820B (zh) 位置识别方法、装置以及电子设备
CN113687648A (zh) 多功能校园防疫机器人
CN115268438A (zh) 智能避障方法、装置、割草机器人以及存储介质
US12033394B2 (en) Automatic robotic lawn mowing boundary detection using 3D semantic segmentation
US20230206647A1 (en) Automatic Robotic Lawn Mowing Boundary Detection Using 3D Semantic Segmentation
CN115617053B (zh) 障碍物遍历方法、装置、割草机器人以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23834783

Country of ref document: EP

Kind code of ref document: A1