WO2024077708A1 - Edge-following control method for self-moving device, medium, and self-moving device - Google Patents

Edge-following control method for self-moving device, medium, and self-moving device

Info

Publication number
WO2024077708A1
Authority
WO
WIPO (PCT)
Prior art keywords
self-moving device
image
working area
boundary
Prior art date
Application number
PCT/CN2022/132388
Other languages
English (en)
French (fr)
Inventor
张泫舜
刘元财
王雷
陈熙
Original Assignee
深圳市正浩创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市正浩创新科技股份有限公司 filed Critical 深圳市正浩创新科技股份有限公司
Publication of WO2024077708A1

Definitions

  • The present application belongs to the field of artificial intelligence technology, and specifically relates to an edge-following control method for a self-moving device, a medium, and a self-moving device.
  • In recent years, self-moving devices have been used increasingly widely in people's daily work and life.
  • For example, self-moving devices are used for lawn maintenance, environmental cleaning, cargo handling, and the like.
  • Self-moving devices usually move within a specified working area; when moving to the edge of the working area, a self-moving device needs to move along that edge.
  • In the related art, edge following is achieved by setting a working map for the self-moving device and then positioning the self-moving device within the working map.
  • However, in some cases the positioning accuracy of the self-moving device is low, so the device cannot accurately identify the edge of the working area, resulting in a poor edge-following effect.
  • According to various embodiments of the present application, an edge-following control method for a self-moving device, a medium, and a self-moving device are provided.
  • An edge-following control method for a self-moving device comprises: when it is detected that the self-moving device has moved to a designated area, acquiring an environment image; performing image segmentation processing on the environment image to obtain a segmented image; extracting a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and controlling the self-moving device to move along the edge according to the boundary image.
  • The image segmentation processing includes multiple target feature extraction operations in series, and each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations.
  • The segmented image is used to indicate the working area and the non-working area in the environment image.
  • An edge-following control apparatus for a self-moving device comprises:
  • an environment image acquisition module, configured to acquire an environment image when it is detected that the self-moving device has moved to a designated area;
  • an image segmentation module, configured to perform image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image;
  • a boundary image acquisition module, configured to extract a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and
  • an edge-following module, configured to control the self-moving device to move along the edge according to the boundary image.
  • A computer-readable medium has a computer program stored thereon; when the computer program is executed by a processor, the edge-following control method for a self-moving device in the above technical solutions is implemented.
  • An electronic device comprises: a processor; and a memory for storing executable instructions of the processor; the processor executes the executable instructions so that the electronic device performs the edge-following control method for a self-moving device in the above technical solutions.
  • A computer program product or computer program includes computer instructions stored in a computer-readable storage medium.
  • A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the edge-following control method for a self-moving device in the above technical solutions.
  • FIG. 1 schematically shows a structural block diagram of a self-moving device to which the technical solution of the present application is applied.
  • FIG. 2 schematically shows a flowchart of an edge-following control method for a self-moving device provided by an embodiment of the present application.
  • FIG. 3A schematically shows a flowchart of the image segmentation processing provided by an embodiment of the present application.
  • FIG. 3B schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of the present application.
  • FIG. 3C schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of the present application.
  • FIG. 4 schematically shows a schematic diagram of a boundary image provided by an embodiment of the present application.
  • FIG. 5 schematically shows a schematic diagram of a boundary image provided by an embodiment of the present application.
  • FIG. 6 schematically shows a structural block diagram of an edge-following control apparatus for a self-moving device provided by an embodiment of the present application.
  • FIG. 7 schematically shows a structural block diagram of a self-moving device suitable for implementing an embodiment of the present application.
  • FIG. 1 schematically shows a structural block diagram of a self-moving device to which the technical solution of the present application is applied.
  • As shown in FIG. 1, the self-moving device includes a vehicle body 110 and a control module 120; the vehicle body 110 includes a body 111 and wheels 112, and the control module 120 is arranged on the vehicle body 110.
  • Generally, the control module 120 is arranged on the body 111 and is used to receive control instructions for the self-moving device, or to generate various control instructions for the self-moving device.
  • The self-moving device in the embodiments of the present application may be a device with a self-moving assistance function.
  • The self-moving assistance function may be implemented by a vehicle-mounted terminal, and the corresponding self-moving device may be a vehicle equipped with the vehicle-mounted terminal.
  • The self-moving device may also be a semi-autonomous or fully autonomous mobile device, such as a sweeping robot, a mopping robot, a food delivery robot, a transport robot, or a lawn mowing robot.
  • The embodiments of the present application do not limit the specific type or function of the self-moving device; it can be understood that the self-moving device in this embodiment may also include other devices with a self-moving function.
  • The control module 120 is used to implement the edge-following control method for a self-moving device provided in any embodiment of the present application.
  • The self-moving device may be provided with a camera device 130, which is connected to the control module 120 inside the self-moving device.
  • First, when the control module 120 detects that the self-moving device has moved to the designated area, an environment image is acquired.
  • The specific process may be: when the control module 120 detects that the self-moving device has moved to the designated area, a photographing instruction is sent to the camera device 130, and the environment image is thereby acquired through the camera device 130.
  • The camera device 130 may be fixed, or may be non-fixed and rotatable, which is not limited in the embodiments of the present application.
  • The environment image captured by the camera device 130 may be a color image, a black-and-white image, an infrared image, or the like, which is not limited in the embodiments of the present application.
  • For example, the camera device 130 is an RGB camera, which photographs the environment in the forward direction of the self-moving device to obtain the environment image.
  • control module 120 performs image segmentation processing on the environmental image to obtain a segmented image; wherein the image segmentation processing includes multiple serial target feature extraction operations, each target feature extraction operation includes multiple parallel convolution operations and a fusion operation on the results of the multiple convolution operations; the segmented image is used to indicate the working area and the non-working area in the environmental image.
  • control module 120 extracts a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image.
  • control module 120 controls the self-moving device to move along the edge according to the boundary image.
  • the control module 120 is also connected to the driving components of the self-moving device, such as the steering shaft, steering wheel, motor, etc. of the self-moving device, to control the movement and steering of the self-moving device, thereby controlling the self-moving device to move along the edge.
  • FIG. 2 schematically shows a flowchart of an edge-following control method for a self-moving device provided by an embodiment of the present application. As shown in FIG. 2, the method includes steps 210 to 240, as follows.
  • Step 210: when it is detected that the self-moving device has moved to a designated area, acquire an environment image.
  • The designated area is a preset area close to the boundary of the working area of the self-moving device; for example, the designated area is an area whose distance from the boundary of the working area is less than a preset threshold.
  • The environment image is an image of the physical environment currently in the forward direction of the self-moving device, and can be obtained by photographing with a camera device installed on the self-moving device.
  • The environment image can be an RGB image, a depth image, or the like, which is not limited here.
  • The process of detecting whether the self-moving device has moved to the designated area includes: acquiring a working map of the self-moving device; acquiring positioning information of the self-moving device; determining the distance from the self-moving device to the boundary based on the positioning information; and when the distance is within a preset distance range, determining that the self-moving device has moved to the designated area in the working area.
  • The self-moving device usually moves within the working area planned by the working map, and the working map includes the boundaries of each working area of the self-moving device, which means the working map carries positioning information of those boundaries.
  • During movement, the positioning information of the self-moving device is acquired.
  • The positioning information of the self-moving device can be determined by means of a Global Navigation Satellite System (GNSS), which includes but is not limited to the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), the GLONASS satellite navigation system, the Galileo satellite positioning system, and the like.
  • Based on the positioning information of the self-moving device and the positioning information of the boundaries of each working area, the current distance of the self-moving device from the working area boundary is calculated.
  • When the distance is within the preset distance range, the self-moving device has moved to the vicinity of the working area boundary, and it can be determined that the self-moving device has moved to the designated area in the working area.
  • For example, when the distance between the self-moving device and the working area boundary is less than a set threshold of 2 meters, the self-moving device is considered to have moved to the designated area.
  • Although GPS positioning, BDS positioning, and similar methods can be used to judge whether the self-moving device has reached the vicinity of the working area boundary, the positioning accuracy of these methods is not high, and it is difficult to determine precisely whether the self-moving device has reached the boundary of the working area.
  • The embodiments of the present application therefore first determine through coarse positioning that the self-moving device has reached the vicinity of the working area boundary, and then precisely locate the boundary in combination with the image processing of the subsequent steps; this precise positioning of the working area boundary effectively improves the accuracy of the edge-following control.
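The following is a minimal sketch of this coarse-positioning check, assuming the working-area boundary is stored in the working map as a closed polygon of (x, y) vertices in a local metric frame and that the GNSS fix has already been projected into the same frame; the function names and the 2-meter threshold are illustrative, not from the application itself.

```python
import math

def distance_to_boundary(position, boundary):
    """Minimum distance from `position` to a closed polygonal boundary."""
    px, py = position
    best = float("inf")
    # Walk each edge of the polygon, closing it back to the first vertex.
    for (ax, ay), (bx, by) in zip(boundary, boundary[1:] + boundary[:1]):
        abx, aby = bx - ax, by - ay
        # Parameter of the projection of P onto segment AB, clamped to [0, 1].
        t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
        t = max(0.0, min(1.0, t))
        best = min(best, math.hypot(px - (ax + t * abx), py - (ay + t * aby)))
    return best

def in_designated_area(position, boundary, threshold_m=2.0):
    # Coarse positioning: the device is in the designated area when its
    # GNSS-derived distance to the working-area boundary drops below 2 m.
    return distance_to_boundary(position, boundary) < threshold_m
```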
  • Step 220: perform image segmentation processing on the environment image to obtain a segmented image.
  • The image segmentation processing includes multiple target feature extraction operations in series, and each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations.
  • The segmented image is used to indicate the working area and the non-working area in the environment image.
  • Image segmentation identifies the working area and the non-working area in the environment image and yields a segmented image that distinguishes the two, so as to facilitate the subsequent extraction of the boundary between the working area and the non-working area.
  • Multiple target feature extraction operations in series means that the operations are connected one after another in sequence, with the output data of one target feature extraction operation serving as the input data of the next.
  • Multiple convolution operations in parallel means that the convolution operations are performed side by side or synchronously; the data processed by one convolution operation has no direct dependence on the data processed by the other convolution operations.
  • The fusion operation fuses the results of the individual convolution operations into one feature, for example by adding them or computing a weighted sum.
  • FIG. 3A schematically shows a flowchart of the image segmentation processing provided by an embodiment of the present application.
  • Among the K target feature extraction operations, the output data of the (i-1)-th target feature extraction is the input data of the i-th target feature extraction, where 2 ≤ i ≤ K, thereby constituting K target feature extraction operations in series.
  • K is the preset number of feature extractions.
  • The input data of the first target feature extraction is a feature map obtained by performing a convolution operation on the environment image; that is, the environment image passes through one convolution operation before entering the target feature extraction operations.
  • After the K target feature extraction operations, the segmented image is output.
  • In the i-th target feature extraction operation, multiple convolution operations are first performed on the input data in parallel; each convolution operation yields one convolution result, so multiple convolution results are obtained.
  • FIG. 3B schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of the present application.
  • As shown in FIG. 3B, the i-th target feature extraction operation includes M parallel convolution operations, and the input data of each convolution operation is the input data of the i-th target feature extraction operation.
  • Here, the i-th target feature extraction operation can be understood as any one of the target feature extraction operations.
  • The multiple convolution results are then fused to obtain a fused feature, for example by adding (Add) the M convolution results corresponding to the M convolution operations; finally, the fused feature is activated to obtain the output data of the i-th target feature extraction.
  • The activation function can be ReLU (rectified linear unit), sigmoid (S-shaped function), or the like.
  • The convolution processing and the multiple convolution operations in this application all involve convolution calculations, but the calculation parameters involved may differ.
  • For example, the kernel size, stride, and number of channels involved in a convolution processing or convolution operation may differ, and the multiple parallel convolution operations may share the same calculation parameters or involve different ones.
  • FIG. 3C schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of the present application; in some target feature extraction processes, in addition to the multiple parallel convolution operations, the input data is also normalized, and the normalized input data is added together with the convolution results of the multiple convolution operations before activation processing yields the output data.
  • The normalization can be batch normalization (BN) or group normalization (GN).
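As a concrete reading of FIGS. 3A to 3C, the following is a minimal PyTorch sketch of one target feature extraction operation: M parallel convolutions over the same input, fusion by element-wise addition, an optional normalized copy of the input joining the fusion (the FIG. 3C variant), and a ReLU activation. The kernel sizes, channel counts, and K = 4 are illustrative assumptions, not values fixed by the application.

```python
import torch.nn as nn

class TargetFeatureExtraction(nn.Module):
    def __init__(self, channels, num_branches=3, normalize=True):
        super().__init__()
        # M parallel convolutions; they may share calculation parameters or
        # differ in kernel size, stride, and channel count.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_branches)
        )
        # FIG. 3C variant: a normalized copy of the input joins the fusion.
        self.norm = nn.BatchNorm2d(channels) if normalize else None
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        fused = sum(branch(x) for branch in self.branches)  # fusion by addition
        if self.norm is not None:
            fused = fused + self.norm(x)
        return self.act(fused)

# K operations in series: the output of extraction i-1 feeds extraction i,
# after an initial convolution on the environment image itself.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    *[TargetFeatureExtraction(32) for _ in range(4)],  # K = 4 for illustration
)
```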
  • The image segmentation processing in the related art adopts a residual network, which usually consists of ordinary serial convolution operations, and its image segmentation accuracy leaves room for improvement.
  • In contrast, the serial target feature extractions keep deepening the feature extraction, so deep features of the image can be extracted, while the parallel convolution operations within each target feature extraction retain the shallow features of that extraction.
  • The fusion operation merges the deep features and the shallow features, which is conducive to improving the accuracy of image segmentation and thereby the accuracy of boundary recognition and positioning.
  • Table 1 (reproduced in the description below) shows the accuracy comparison between image segmentation performed by an existing residual network and the image segmentation processing provided by the present application.
  • With the accuracy threshold set from 50% to 95% in steps of 5%, the corresponding recognition probabilities of the segmented images in the solution of the present application are 97%, 95%, 94%, 92%, 90%, 89%, 83%, 76%, 65%, and 38%; averaging these recognition probabilities gives an accuracy (i.e., mean value) of (97 + 95 + 94 + 92 + 90 + 89 + 83 + 76 + 65 + 38) / 10 = 81.9, or approximately 82%.
  • Calculated in the same way, the accuracy of the related technical solution is 80%. It can be seen that the present application improves the image segmentation accuracy, which is more conducive to accurately identifying the boundary.
  • Step 230: extract multiple boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image.
  • After the segmented image is obtained, the multiple boundary pixels in it can be extracted to obtain a boundary image that includes the boundary.
  • In the segmented image, the pixels of the working area and the pixels of the non-working area are two different types of pixels.
  • For example, the pixel values of the working-area pixels differ from those of the non-working-area pixels. Since boundary pixels lie at the junction of the working area and the non-working area, both types of pixels appear around a boundary pixel; a pixel can therefore be judged to be a boundary pixel by checking whether both types of pixels appear around it.
  • In one embodiment, the pixels of the segmented image carry gradient values, and a pixel is judged to be a boundary pixel when its gradient value lies within the range bounded by a first threshold and a second threshold.
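A minimal sketch of this gradient-based extraction follows, assuming the segmented image is a 2-D array in which working-area and non-working-area pixels carry two distinct values; the two threshold values are illustrative assumptions.

```python
import numpy as np

def extract_boundary(segmented, first_threshold=0.1, second_threshold=10.0):
    # The gradient is zero inside either region and non-zero only at the
    # junction, so thresholding the gradient magnitude keeps just the
    # boundary pixels.
    gy, gx = np.gradient(segmented.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    return (magnitude > first_threshold) & (magnitude < second_threshold)
```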
  • Step 240: control the self-moving device to move along the edge according to the boundary image.
  • The boundary in the boundary image represents the precise boundary of the working area of the self-moving device; an edge-following path is generated from this boundary, and the self-moving device is then controlled to move along that path.
  • In the technical solutions provided by the embodiments of the present application, when the self-moving device moves to the designated area in the working area, an environment image is acquired and subjected to image segmentation processing to obtain a segmented image that distinguishes the working area from the non-working area, and a boundary image is then obtained from the segmented image.
  • The image segmentation processing includes multiple target feature extractions in series, and each target feature extraction includes multiple convolution operations in parallel and a fusion operation on the results of the multiple parallel convolution operations.
  • After the boundary image is obtained, the self-moving device is controlled to move along the edge according to it.
  • On the one hand, moving the self-moving device to the designated area amounts to coarse positioning of the device, and processing the environment image within that area to obtain the boundary image amounts to precise positioning of the boundary where the self-moving device is located.
  • Thus, a method combining coarse positioning and precise positioning is implemented to control the edge-following movement of the self-moving device and improve the accuracy of boundary positioning.
  • On the other hand, the serial feature extractions keep deepening the feature extraction, thereby yielding the deep features of the environment image, while the parallel convolution operations in each feature extraction retain the shallow features of the environment image.
  • The final fusion of results merges the deep and shallow features of the environment image, which is conducive to improving the accuracy of image segmentation and hence the recognition accuracy of boundary pixels in the boundary image, so that the self-moving device can locate the working area boundary more precisely during edge following, effectively improving the edge-following effect of the self-moving device.
  • The process of controlling the edge-following movement of the self-moving device further includes: determining the perpendicular bisector of the boundary image and the image edges of the boundary image; taking the intersection of the perpendicular bisector with an image edge as the projection pixel point of the self-moving device; determining, from the multiple boundary pixel points in the boundary image, the target boundary pixel point closest to the projection pixel point; and determining the moving direction of the self-moving device according to the position of the projection pixel point and the position of the target boundary pixel point.
  • The perpendicular bisector of the boundary image is the line that passes through the center point of the boundary image and is perpendicular to the image edges.
  • The perpendicular bisector has two intersection points with the image edges.
  • Generally, the intersection point close to the working area, that is, the intersection of the perpendicular bisector with the lower edge of the image, is taken as the projection pixel point of the self-moving device in the boundary image.
  • FIG. 4 schematically shows a schematic diagram of a boundary image provided by an embodiment of the present application. As shown in FIG. 4, the intersections of the perpendicular bisector with the image edges include point A and point A'.
  • Point A is inside the working area and is the intersection of the perpendicular bisector with the lower edge of the image.
  • Point A' is in the non-working area and is the intersection of the perpendicular bisector with the upper edge of the image; therefore, point A is the projection pixel point of the self-moving device.
  • The determination of the projection pixel point is analogous to the position of a camera in the image it captures, which is usually the intersection of the perpendicular bisector with the lower edge of the image.
  • After the target boundary pixel point closest to the projection pixel point is found among the boundary pixel points, the moving direction of the self-moving device can be determined from their relative positions, and the edge-following movement can then be controlled based on that direction. Specifically, if the target boundary pixel point is to the left of the projection pixel point, the self-moving device is controlled to move left to the target boundary pixel point; if the target boundary pixel point is to the right of the projection pixel point, the self-moving device is controlled to move right to the target boundary pixel point.
  • Suppose the upper-left corner of the boundary image is taken as the origin O, and the two image edges meeting at the origin are taken as the x-axis and the y-axis, respectively, to construct the coordinate system of the boundary image.
  • The position of the projection pixel point is the pixel coordinate (x0, y0) in this coordinate system,
  • and the position of a boundary pixel point is the pixel coordinate (x, y) in this coordinate system.
  • The distance D between the projection pixel point and the boundary pixel point is then D = √((x0 − x)² + (y0 − y)²).
  • Among the distances D for all boundary pixel points, the boundary pixel point corresponding to the minimum distance is the target boundary pixel point; the relative position of the target boundary pixel point and the projection pixel point is then computed as d = x0 − x.
  • When d > 0, the target boundary pixel point is on the left side of the projection point, indicating that the moving direction of the self-moving device is to the left; when d < 0, the target boundary pixel point is on the right side of the projection point, indicating that the moving direction of the self-moving device is to the right.
  • The distance that the self-moving device moves from the projection pixel point to the target boundary pixel point along the moving direction is determined by PID control.
  • After the self-moving device reaches the target boundary pixel point, it moves according to the boundary in the boundary image. For example, as shown in FIG. 4, the target boundary pixel point B is located to the right of the projection pixel point A, so the self-moving device is controlled to move right to the target boundary pixel point B and then to move along the identified boundary.
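A minimal sketch of this direction decision follows, assuming image coordinates with the origin O at the top-left corner (x to the right, y downward) and the projection pixel point taken at the middle of the lower image edge; the function name is illustrative.

```python
import numpy as np

def edge_following_direction(boundary_mask):
    h, w = boundary_mask.shape
    x0, y0 = w // 2, h - 1                  # projection pixel point A
    ys, xs = np.nonzero(boundary_mask)      # all boundary pixel points
    dists = np.hypot(xs - x0, ys - y0)      # D = sqrt((x0-x)^2 + (y0-y)^2)
    i = dists.argmin()
    x, y = int(xs[i]), int(ys[i])           # target boundary pixel point B
    d = x0 - x                              # relative position d = x0 - x
    return (x, y), ("left" if d > 0 else "right")
```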
  • The edge-following control method of the present application further includes: during the edge-following movement of the self-moving device, detecting whether the self-moving device has reached a turning point; and when the self-moving device reaches a turning point, adjusting its moving direction according to the currently detected working area.
  • A turning point is a position where the moving direction of the self-moving device needs to change. Since the self-moving device moves along the edge, reaching a turning point means that one segment of the edge-following path has been completed, and the working area in the boundary image must then shrink; the area of the working area can therefore be used to judge whether the self-moving device has reached a turning point.
  • The process of detecting whether the self-moving device has reached a turning point includes: calculating the area of the currently detected working area; and when that area is less than a preset area threshold, determining that the self-moving device has reached a turning point.
  • For example, suppose the self-moving device starts edge following from the target boundary pixel point B in the boundary image shown in FIG. 4; in that image the working area is large, so point B is clearly not a turning point.
  • The boundary shown in FIG. 4 is divided into boundary 1, boundary 2, and boundary 3.
  • The self-moving device starts from the target boundary pixel point B and moves along boundary 1.
  • During the edge-following movement, boundary images are continuously acquired and the area of the working area is calculated.
  • When the self-moving device moves to point C in the boundary image of FIG. 4, the boundary image acquired at that moment is as shown in FIG. 5.
  • In the boundary image of FIG. 5, the area of the working area is less than the preset area threshold, so it is determined that the self-moving device is currently at a turning point.
  • Whether the turning point has been reached can also be detected from the ratio of the working-area area to the non-working-area area; that is, when the ratio of the working-area area to the non-working-area area is less than a preset threshold, it is determined that the self-moving device has reached the turning point.
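A minimal sketch of the turning-point test, assuming the working area is given as a boolean mask derived from the segmented image; both threshold values are illustrative assumptions.

```python
import numpy as np

def at_turning_point(working_mask, area_threshold=5000, ratio_threshold=0.25):
    work = int(working_mask.sum())           # working-area pixels
    non_work = working_mask.size - work      # non-working-area pixels
    if work < area_threshold:                # absolute-area criterion
        return True
    # Alternative criterion: ratio of working to non-working area.
    return non_work > 0 and work / non_work < ratio_threshold
```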
  • After the turning point is reached, the moving direction of the self-moving device is adjusted according to the currently detected working area; specifically, according to the areas of the working area on the two sides of the turning point, the moving direction of the self-moving device is adjusted toward whichever side has the larger working-area area.
  • The process of adjusting the moving direction at a turning point includes: dividing the currently detected working area into a first working area and a second working area according to a preset dividing line; when the area of the first working area is larger than the area of the second working area, controlling the self-moving device to adjust by a first preset angle toward a first direction, where the first direction is the direction facing the first working area; and when the area of the second working area is larger than the area of the first working area, controlling the self-moving device to adjust by a second preset angle toward a second direction, where the second direction is the direction facing the second working area.
  • The preset dividing line is a line passing through the projection pixel point of the self-moving device in the boundary image, for example, the perpendicular bisector of the boundary image.
  • A larger first working area indicates that the next segment of the edge-following path is more likely to lie in the direction of the first working area, so the self-moving device is adjusted by the first preset angle toward the first direction, after which its moving direction faces the first working area; likewise, when the area of the second working area is larger, the self-moving device is adjusted by the second preset angle toward the second direction, after which its moving direction faces the second working area.
  • For example, in the boundary image shown in FIG. 5, the working area is divided by the perpendicular bisector of the image into a first working area and a second working area.
  • The area of the first working area is larger than that of the second working area, so the self-moving device is controlled to adjust by the first preset angle toward the first direction, for example, by rotating 90° to the left.
  • The first preset angle and the second preset angle can be set according to the angle between the current heading of the self-moving device and the next boundary segment.
  • For example, in the boundary image of FIG. 5, the current heading of the self-moving device is the direction of the perpendicular bisector of the image,
  • the next boundary segment is boundary 2,
  • and the perpendicular bisector of the image and boundary 2 form an angle θ;
  • the self-moving device can then be controlled to rotate counterclockwise by the angle θ, so that its heading is consistent with the extension direction of boundary 2.
  • The first preset angle and the second preset angle may also be preset values, such as 30°, 40°, or 50°.
  • The embodiments of the present application do not limit the specific way the first preset angle and the second preset angle are set.
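A minimal sketch of the adjustment at a turning point, assuming the dividing line is the perpendicular bisector through the projection pixel point; the fixed 90° fallback angle is an illustrative assumption, and in practice the angle could instead be derived from the heading of the next boundary segment as described above.

```python
import numpy as np

def turn_at_turning_point(working_mask, preset_angle_deg=90.0):
    h, w = working_mask.shape
    first_area = int(working_mask[:, : w // 2].sum())   # left of dividing line
    second_area = int(working_mask[:, w // 2 :].sum())  # right of dividing line
    if first_area > second_area:
        return "left", preset_angle_deg   # first direction, first preset angle
    return "right", preset_angle_deg      # second direction, second preset angle
```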
  • FIG. 6 schematically shows a structural block diagram of the edge-following control apparatus for a self-moving device provided by an embodiment of the present application. As shown in FIG. 6, the apparatus includes:
  • an environment image acquisition module 610, configured to acquire an environment image when it is detected that the self-moving device has moved to a designated area;
  • an image segmentation module 620, configured to perform image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image;
  • a boundary image acquisition module 630, configured to extract a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and
  • an edge-following module 640, configured to control the self-moving device to move along the edge according to the boundary image.
  • The image segmentation module 620 is specifically configured to:
  • take the output data of the (i-1)-th target feature extraction as the input data of the i-th target feature extraction, where 2 ≤ i ≤ K, K is the preset number of feature extractions, and the input data of the first target feature extraction is a feature map obtained by performing a convolution operation on the environment image;
  • perform multiple parallel convolution operations on the input data of the i-th target feature extraction to obtain multiple convolution results, where each convolution operation yields one convolution result; and
  • fuse the multiple convolution results to obtain a fused feature, and activate the fused feature to obtain the output data of the i-th target feature extraction.
  • The apparatus further comprises:
  • a moving direction determination module, configured to determine the perpendicular bisector of the boundary image and the image edges of the boundary image; take the intersection of the perpendicular bisector with an image edge as the projection pixel point of the self-moving device; determine, from the multiple boundary pixel points in the boundary image, the target boundary pixel point closest to the projection pixel point; and determine the moving direction of the self-moving device according to the position of the projection pixel point and the position of the target boundary pixel point.
  • The apparatus further comprises:
  • a turning point detection module, configured to detect, during the edge-following movement of the self-moving device, whether the self-moving device has reached a turning point; and
  • a moving direction adjustment module, configured to adjust the moving direction of the self-moving device according to the currently detected working area when the self-moving device reaches the turning point.
  • The turning point detection module is specifically configured to: calculate the area of the currently detected working area; and when the area of the currently detected working area is less than a preset area threshold, determine that the self-moving device has reached the turning point.
  • The moving direction adjustment module is specifically configured to:
  • divide the currently detected working area into a first working area and a second working area according to a preset dividing line;
  • when the area of the first working area is larger than the area of the second working area, control the self-moving device to adjust by a first preset angle toward a first direction, where the first direction is the direction facing the first working area; and
  • when the area of the second working area is larger than the area of the first working area, control the self-moving device to adjust by a second preset angle toward a second direction, where the second direction is the direction facing the second working area.
  • The apparatus further comprises:
  • a detection module, configured to acquire a working map of the self-moving device, where the working map includes the boundaries of each working area; acquire positioning information of the self-moving device; determine the distance from the self-moving device to the boundary based on the positioning information; and when the distance is within a preset distance range, determine that the self-moving device has moved to the designated area in the working area.
  • FIG. 7 schematically shows a block diagram of the computer system structure of a self-moving device for implementing an embodiment of the present application.
  • The self-moving device 700 shown in FIG. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
  • The self-moving device 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. Various programs and data required for system operation are also stored in the RAM 703.
  • The CPU 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem.
  • The communication section 709 performs communication processing via a network such as the Internet.
  • A drive 710 is also connected to the I/O interface 705 as needed.
  • A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read from it can be installed into the storage section 708 as needed.
  • According to the embodiments of the present application, the process described in each method flowchart can be implemented as a computer software program.
  • For example, an embodiment of the present application includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • The computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
  • When the computer program is executed by the central processing unit 701, the various functions defined in the system of the present application are executed.
  • The computer-readable medium shown in the embodiments of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • A computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wired, or any suitable combination of the above.
  • The technical solutions according to the implementations of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to perform the method according to the implementations of the present application.

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An edge-following control method for a self-moving device, comprising: when it is detected that the self-moving device has moved to a designated area, acquiring an environment image; performing image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image; extracting a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and controlling the self-moving device to move along the edge according to the boundary image.

Description

Edge-following control method for self-moving device, medium, and self-moving device
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on October 14, 2022, with application number 202211259807.8 and invention title "Edge-following control method and apparatus for self-moving device, medium, and self-moving device", the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of artificial intelligence technology, and specifically relates to an edge-following control method for a self-moving device, a medium, and a self-moving device.
Background
The statements here merely provide background information related to this application and do not necessarily constitute exemplary art.
In recent years, self-moving devices have been used ever more widely in people's daily work and life, for example for lawn maintenance, environmental cleaning, and cargo handling. A self-moving device usually moves within a specified working area; when it moves to the edge of the working area, it needs to move along that edge. In the related art, edge following is achieved by setting a working map for the self-moving device and then positioning the device within the working map. However, in some cases the positioning accuracy of the self-moving device is low, so the device cannot accurately identify the edge of the working area, resulting in a poor edge-following effect.
It should be noted that the information disclosed in the background section above is only intended to enhance the understanding of the background of this application, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary
According to various embodiments of this application, an edge-following control method for a self-moving device, a medium, and a self-moving device are provided.
Other features and advantages of this application will become apparent from the following detailed description, or will be learned in part through practice of this application.
According to one aspect of the embodiments of this application, an edge-following control method for a self-moving device is provided, including:
when it is detected that the self-moving device has moved to a designated area, acquiring an environment image;
performing image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image;
extracting a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and
controlling the self-moving device to move along the edge according to the boundary image.
According to one aspect of the embodiments of this application, an edge-following control apparatus for a self-moving device is provided, including:
an environment image acquisition module, configured to acquire an environment image when it is detected that the self-moving device has moved to a designated area;
an image segmentation module, configured to perform image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image;
a boundary image acquisition module, configured to extract a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and
an edge-following module, configured to control the self-moving device to move along the edge according to the boundary image.
According to one aspect of the embodiments of this application, a computer-readable medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the edge-following control method for a self-moving device in the above technical solutions is implemented.
According to one aspect of the embodiments of this application, an electronic device is provided, including: a processor; and a memory for storing executable instructions of the processor; where the processor executes the executable instructions so that the electronic device performs the edge-following control method for a self-moving device in the above technical solutions.
According to one aspect of the embodiments of this application, a computer program product or computer program is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the edge-following control method for a self-moving device in the above technical solutions.
The details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or the exemplary art more clearly, the following briefly introduces the drawings required in the description of the embodiments or the exemplary art. Obviously, the drawings in the following description are only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 schematically shows a structural block diagram of a self-moving device to which the technical solution of this application is applied.
FIG. 2 schematically shows a flowchart of an edge-following control method for a self-moving device provided by an embodiment of this application.
FIG. 3A schematically shows a flowchart of the image segmentation processing provided by an embodiment of this application.
FIG. 3B schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of this application.
FIG. 3C schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of this application.
FIG. 4 schematically shows a schematic diagram of a boundary image provided by an embodiment of this application.
FIG. 5 schematically shows a schematic diagram of a boundary image provided by an embodiment of this application.
FIG. 6 schematically shows a structural block diagram of an edge-following control apparatus for a self-moving device provided by an embodiment of this application.
FIG. 7 schematically shows a structural block diagram of a self-moving device suitable for implementing an embodiment of this application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth here; rather, these embodiments are provided so that this application will be more thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art.
Referring to FIG. 1, FIG. 1 schematically shows a structural block diagram of a self-moving device to which the technical solution of this application is applied.
As shown in FIG. 1, the self-moving device includes a vehicle body 110 and a control module 120. The vehicle body 110 includes a body 111 and wheels 112, and the control module 120 is arranged on the vehicle body 110, generally on the body 111; the control module 120 is used to receive control instructions for the self-moving device, or to generate various control instructions for the self-moving device. The self-moving device in the embodiments of this application may be a device with a self-moving assistance function, where the self-moving assistance function may be implemented by a vehicle-mounted terminal and the corresponding self-moving device may be a vehicle equipped with that terminal. The self-moving device may also be a semi-autonomous or fully autonomous mobile device, such as a sweeping robot, a mopping robot, a food delivery robot, a transport robot, or a lawn mowing robot; the embodiments of this application do not limit the specific type or function of the self-moving device. It can be understood that the self-moving device in this embodiment may also include other devices with a self-moving function.
In the embodiments of this application, the control module 120 is used to implement the edge-following control method for a self-moving device provided in any embodiment of this application. The self-moving device may be provided with a camera device 130, and the camera device 130 is connected to the control module 120 inside the self-moving device.
First, when the control module 120 detects that the self-moving device has moved to a designated area, an environment image is acquired. The specific process may be: when the control module 120 detects that the self-moving device has moved to the designated area, a photographing instruction is sent to the camera device 130, and the environment image is acquired through the camera device 130. The camera device 130 may be fixed, or may be non-fixed and rotatable, which is not limited in the embodiments of this application. The environment image captured by the camera device 130 may be a color image, a black-and-white image, an infrared image, or the like, which is not limited in the embodiments of this application. For example, the camera device 130 is an RGB camera, which photographs the environment in the forward direction of the self-moving device to obtain the environment image.
Next, the control module 120 performs image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image.
Then, the control module 120 extracts a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image.
Finally, the control module 120 controls the self-moving device to move along the edge according to the boundary image. The control module 120 is also connected to the drive components of the self-moving device, such as its steering shaft, steering wheels, and motor, to control the movement and steering of the self-moving device, thereby controlling it to move along the edge.
The edge-following control method for a self-moving device provided by this application is described in detail below with reference to specific implementations.
FIG. 2 schematically shows a flowchart of an edge-following control method for a self-moving device provided by an embodiment of this application. As shown in FIG. 2, the method includes steps 210 to 240, as follows.
Step 210: when it is detected that the self-moving device has moved to a designated area, acquire an environment image.
Specifically, the designated area is a preset area close to the boundary of the working area of the self-moving device; for example, the designated area is an area whose distance from the boundary of the working area is less than a preset threshold. The environment image is an image of the physical environment currently in the forward direction of the self-moving device, and can be obtained by photographing with a camera device installed on the self-moving device. The environment image may be an RGB image, a depth image, or the like, which is not limited here.
In an embodiment of this application, the process of detecting whether the self-moving device has moved to the designated area includes: acquiring a working map of the self-moving device; acquiring positioning information of the self-moving device; determining the distance from the self-moving device to the boundary according to the positioning information; and when the distance is within a preset distance range, determining that the self-moving device has moved to the designated area in the working area.
Specifically, the self-moving device usually moves within the working area planned by the working map, and the working map includes the boundaries of each working area of the self-moving device, which means the working map carries positioning information of those boundaries. During movement, the positioning information of the self-moving device is acquired, for example by means of a Global Navigation Satellite System (GNSS), which includes but is not limited to the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), the GLONASS satellite navigation system, the Galileo satellite positioning system, and the like. The current distance of the self-moving device from the working area boundary is then calculated from the positioning information of the device and of the boundaries of each working area. When the distance is within the preset distance range, the self-moving device has moved near the boundary of the working area, and it can be determined that it has moved to the designated area in the working area; for example, when the distance from the boundary is less than a set threshold of 2 meters, the self-moving device is considered to have moved to the designated area.
Although GPS positioning, BDS positioning, and similar methods can be used to judge whether the self-moving device has reached the vicinity of the working area boundary, the positioning accuracy of these methods is not high, and it is difficult to judge precisely whether the device has reached the boundary of the working area. The embodiments of this application therefore first determine through coarse positioning that the self-moving device has reached the vicinity of the boundary, and then precisely locate the working area boundary in combination with the image processing of the subsequent steps, thereby achieving precise positioning of the boundary and effectively improving the accuracy of the edge-following control.
Step 220: perform image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image.
Specifically, image segmentation identifies the working area and the non-working area in the environment image and yields a segmented image that distinguishes the two, so as to facilitate the subsequent extraction of the boundary between them.
In this embodiment, the image segmentation processing includes multiple target feature extraction operations in series, and each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on their results. Multiple target feature extraction operations in series means that the operations are connected one after another in sequence, with the output data of one operation serving as the input data of the next. Multiple convolution operations in parallel means that the convolution operations are performed side by side or synchronously, and the data processed by one convolution operation has no direct dependence on the data processed by the others. The fusion operation fuses the results of the individual convolution operations into one feature, for example by adding them or computing a weighted sum.
For example, FIG. 3A schematically shows a flowchart of the image segmentation processing provided by an embodiment of this application. As shown in FIG. 3A, among the K target feature extraction operations of the image segmentation processing, the output data of the (i-1)-th target feature extraction is the input data of the i-th target feature extraction, thereby constituting K target feature extraction operations in series, where 2 ≤ i ≤ K and K is the preset number of feature extractions. It should be noted that the input data of the first target feature extraction is a feature map obtained by performing a convolution operation on the environment image; that is, the environment image passes through one convolution operation before entering the target feature extractions. After the K target feature extraction operations, the segmented image is output.
In the i-th target feature extraction operation, multiple convolution operations are first performed on the input data in parallel; each convolution operation yields one convolution result, so multiple convolution results are obtained.
For example, FIG. 3B schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of this application. As shown in FIG. 3B, the i-th target feature extraction operation includes M parallel convolution operations, and the input data of each convolution operation is the input data of the i-th target feature extraction operation. Here, the i-th target feature extraction operation can be understood as any one of the target feature extraction operations.
Then, the multiple convolution results are fused to obtain a fused feature; for example, as shown in FIG. 3B, the M convolution results corresponding to the M convolution operations are added (Add) to obtain the fused feature.
Finally, the fused feature is activated to obtain the output data of the i-th target feature extraction. The activation function may be ReLU (rectified linear unit), sigmoid (S-shaped function), or the like.
It should be noted that the convolution processing and the multiple convolution operations in this application all involve convolution calculations, but the calculation parameters involved may differ; for example, the kernel size, stride, and number of channels may differ, and the multiple parallel convolution operations may share the same calculation parameters or involve different ones.
In an embodiment of this application, some target feature extraction processes also include, in addition to the multiple parallel convolution operations, a normalization of the input data. For example, FIG. 3C schematically shows a schematic diagram of the target feature extraction process provided by an embodiment of this application; as shown in FIG. 3C, the input data of the target feature extraction is normalized and then added together with the convolution results of the multiple convolution operations, and the output data is obtained after activation. The normalization may be batch normalization (BN) or group normalization (GN).
The image segmentation processing in the related art adopts a residual network, which usually consists of ordinary serial convolution operations, and its segmentation accuracy leaves room for improvement. As can be seen from the image segmentation processing provided by the embodiments of this application, the serial target feature extractions keep deepening the feature extraction, so deep features of the image can be extracted, while the parallel convolution operations within each target feature extraction retain the shallow features of that extraction. Through this combination of serial feature extraction and parallel convolution operations, the deep and shallow features of the image are both preserved during segmentation, and the fusion of results merges them, which helps improve the accuracy of image segmentation and thus the accuracy of boundary recognition and positioning.
For example, Table 1 shows the accuracy comparison between image segmentation by an existing residual network and the image segmentation processing provided by this application. As shown in Table 1, in the solution of this application, when the accuracy threshold is set from 50% to 95%, the corresponding recognition probabilities of the segmented images are 97%, 95%, 94%, 92%, 90%, 89%, 83%, 76%, 65%, and 38%; averaging these recognition probabilities gives an accuracy (i.e., mean value) of approximately 82%. Calculated in the same way, the accuracy of the related technical solution is 80%. It can be seen that this application improves the image segmentation accuracy, which is more conducive to accurately identifying the boundary.
Table 1
Mean (%) \ accuracy threshold (%)    50  55  60  65  70  75  80  85  90  95
80 (related technical solution)      95  94  93  92  89  87  82  75  62  37
82 (solution of this application)    97  95  94  92  90  89  83  76  65  38
Step 230: extract a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image.
Specifically, after the segmented image is obtained, the multiple boundary pixel points in it can be extracted to obtain a boundary image that includes the boundary. In the segmented image, the pixels of the working area and the pixels of the non-working area are two different types of pixels; for example, their pixel values differ. Since boundary pixels lie at the junction of the working area and the non-working area, both types of pixels appear around a boundary pixel, so when extracting boundary pixels, a pixel can be judged to be a boundary pixel by checking whether both types of pixels appear around it.
In an embodiment of this application, the pixels of the segmented image carry gradient values, and when extracting boundary pixels, whether a pixel of the segmented image is a boundary pixel can be judged from the range of its gradient value; for example, a pixel is considered a boundary pixel when its gradient value lies within the range bounded by a first threshold and a second threshold.
Step 240: control the self-moving device to move along the edge according to the boundary image.
Specifically, the boundary in the boundary image represents the precise boundary of the working area of the self-moving device; an edge-following path is generated from this boundary, and the self-moving device is then controlled to move along that path.
In the technical solution provided by the embodiments of this application, when the self-moving device moves to the designated area in the working area, an environment image is acquired and subjected to image segmentation processing to obtain a segmented image distinguishing the working area from the non-working area, from which a boundary image is then obtained; the image segmentation processing includes multiple target feature extractions in series, each including multiple parallel convolution operations and a fusion of their results; after the boundary image is obtained, the self-moving device is controlled to move along the edge according to it. On the one hand, moving the device to the designated area amounts to coarse positioning, and processing the environment image within that area to obtain the boundary image amounts to precise positioning of the boundary where the device is located, so a method combining coarse and precise positioning is realized to control the edge-following movement and improve the accuracy of boundary positioning. On the other hand, in the segmentation of the environment image, the serial feature extractions keep deepening the feature extraction and thus yield the deep features of the environment image, while the parallel convolution operations in each extraction retain its shallow features; the final fusion of results merges the deep and shallow features, which helps improve segmentation accuracy and hence the recognition accuracy of boundary pixels in the boundary image, so that the self-moving device can locate the working area boundary more precisely during edge following, effectively improving its edge-following effect.
In an embodiment of this application, the process of controlling the self-moving device to move along the edge further includes: determining the perpendicular bisector of the boundary image and the image edges of the boundary image; taking the intersection of the perpendicular bisector with an image edge as the projection pixel point of the self-moving device; determining, from the multiple boundary pixel points in the boundary image, the target boundary pixel point closest to the projection pixel point; and determining the moving direction of the self-moving device from the position of the projection pixel point and the position of the target boundary pixel point.
Specifically, the perpendicular bisector of the boundary image is the line that passes through the center point of the boundary image and is perpendicular to the image edges. The perpendicular bisector has two intersection points with the image edges; generally, the intersection point close to the working area, that is, the intersection of the perpendicular bisector with the lower edge of the image, is taken as the projection pixel point of the self-moving device in the boundary image. For example, FIG. 4 schematically shows a boundary image provided by an embodiment of this application. As shown in FIG. 4, the intersections of the perpendicular bisector with the image edges include point A and point A'; point A is inside the working area, at the lower edge of the image, while point A' is in the non-working area, at the upper edge of the image, so point A is the projection pixel point of the self-moving device. The determination of the projection pixel point is analogous to the position of a camera in the image it captures, which is usually the intersection of the perpendicular bisector with the lower edge of the image.
After the projection pixel point of the self-moving device is determined, the target boundary pixel point closest to the projection pixel point is found among the boundary pixel points on the boundary, and the moving direction of the self-moving device can be determined from the relative position of the target boundary pixel point and the projection pixel point, so that the edge-following movement can be controlled on the basis of that direction. Specifically, if the target boundary pixel point is to the left of the projection pixel point, the self-moving device is controlled to move left to the target boundary pixel point; if it is to the right, the device is controlled to move right to it.
In an embodiment of this application, suppose the upper-left corner of the boundary image is taken as the origin O and the two image edges meeting at the origin are taken as the x-axis and y-axis, respectively, to construct the coordinate system of the boundary image. When the position of the projection pixel point is the pixel coordinate (x0, y0) in this coordinate system and the position of a boundary pixel point is the pixel coordinate (x, y), the distance D between the projection pixel point and the boundary pixel point is:
D = √((x0 − x)² + (y0 − y)²)
Among the distances D corresponding to the individual boundary pixel points, the boundary pixel point with the minimum distance is the target boundary pixel point.
The relative position d between the target boundary pixel point and the projection pixel point is then calculated:
d = x0 − x
When d > 0, the target boundary pixel point is to the left of the projection point, indicating that the moving direction of the self-moving device is to the left; when d < 0, the target boundary pixel point is to the right of the projection point, indicating that the moving direction is to the right.
The distance that the self-moving device moves from the projection pixel point to the target boundary pixel point along the moving direction is determined by PID control. After reaching the target boundary pixel point, the device moves according to the boundary in the boundary image. For example, as shown in FIG. 4, the target boundary pixel point B is to the right of the projection pixel point A, so the self-moving device is controlled to move right to B and then to follow the identified boundary.
In an embodiment of this application, the edge-following control method further includes: during the edge-following movement of the self-moving device, detecting whether the self-moving device has reached a turning point; and when it has, adjusting its moving direction according to the currently detected working area.
Specifically, a turning point is a position where the moving direction of the self-moving device needs to change. Since the device moves along the edge, reaching a turning point means that one segment of the edge-following path has been completed, and the working area in the boundary image must then shrink; the area of the working area can therefore be used to judge whether the device has reached a turning point.
In an embodiment of this application, the process of detecting whether the self-moving device has reached a turning point includes: calculating the area of the currently detected working area; and when that area is less than a preset area threshold, determining that the self-moving device has reached a turning point.
For example, suppose the self-moving device starts edge following from the target boundary pixel point B in the boundary image shown in FIG. 4. In that image, the working area is large, so point B is clearly not a turning point. Dividing the boundary shown in FIG. 4 into boundary 1, boundary 2, and boundary 3, the device starts from point B and moves along boundary 1. During the edge-following movement, boundary images are continuously acquired and the area of the working area is calculated; when the device moves to point C in the boundary image of FIG. 4, the boundary image acquired at that moment is as shown in FIG. 5, in which the area of the working area is less than the preset area threshold, so it is determined that the device is currently at a turning point.
In an embodiment of this application, whether the turning point has been reached can also be detected from the ratio of the working-area area to the non-working-area area; that is, when the ratio of the working-area area to the non-working-area area is less than a preset threshold, it is determined that the self-moving device has reached the turning point.
After it is determined that the self-moving device has reached a turning point, its moving direction is adjusted according to the currently detected working area; specifically, according to the areas of the working area on the two sides of the turning point, the moving direction is adjusted toward whichever side has the larger area.
In an embodiment of this application, the process of adjusting the moving direction at a turning point includes: dividing the currently detected working area into a first working area and a second working area according to a preset dividing line; when the area of the first working area is larger than that of the second, controlling the self-moving device to adjust by a first preset angle toward a first direction, where the first direction faces the first working area; and when the area of the second working area is larger than that of the first, controlling the device to adjust by a second preset angle toward a second direction, where the second direction faces the second working area.
Specifically, the preset dividing line is a line passing through the projection pixel point of the self-moving device in the boundary image, for example the perpendicular bisector of the boundary image. After the working area is divided into the first and second working areas by the preset dividing line, a larger first working area indicates that the next segment of the edge-following path is more likely to lie in the direction of the first working area, so the device is controlled to adjust by the first preset angle toward the first direction, after which its moving direction faces the first working area; a larger second working area indicates that the next segment more likely lies toward the second working area, so the device is adjusted by the second preset angle toward the second direction, after which its moving direction faces the second working area.
For example, in the boundary image shown in FIG. 5, the working area is divided by the perpendicular bisector of the image into a first working area and a second working area; the area of the first working area is larger, so the self-moving device is controlled to adjust by the first preset angle toward the first direction, for example by rotating 90° to the left.
In an embodiment of this application, the first and second preset angles can be set according to the angle between the current heading of the self-moving device and the next boundary segment. For example, in the boundary image of FIG. 5, the current heading of the device is the direction of the perpendicular bisector of the image and the next boundary segment is boundary 2; the perpendicular bisector and boundary 2 form an angle θ, so the device can be controlled to rotate counterclockwise by θ to align its heading with the extension direction of boundary 2.
In other embodiments, the first and second preset angles can also be preset values, such as 30°, 40°, or 50°. The embodiments of this application do not limit the specific way the first and second preset angles are set.
It should be noted that although the steps of the method in this application are described in a particular order in the drawings, this does not require or imply that the steps must be performed in that order, or that all of the illustrated steps must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
The following describes apparatus embodiments of this application, which can be used to perform the edge-following control method for a self-moving device in the above embodiments. FIG. 6 schematically shows a structural block diagram of an edge-following control apparatus for a self-moving device provided by an embodiment of this application. As shown in FIG. 6, the apparatus includes:
an environment image acquisition module 610, configured to acquire an environment image when it is detected that the self-moving device has moved to a designated area;
an image segmentation module 620, configured to perform image segmentation processing on the environment image to obtain a segmented image, where the image segmentation processing includes multiple target feature extraction operations in series, each target feature extraction operation includes multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image;
a boundary image acquisition module 630, configured to extract a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and
an edge-following module 640, configured to control the self-moving device to move along the edge according to the boundary image.
In an embodiment of this application, the image segmentation module 620 is specifically configured to:
take the output data of the (i-1)-th target feature extraction as the input data of the i-th target feature extraction, where 2 ≤ i ≤ K, K is the preset number of feature extractions, and the input data of the first target feature extraction is a feature map obtained by performing a convolution operation on the environment image;
perform multiple parallel convolution operations on the input data of the i-th target feature extraction to obtain multiple convolution results, where each convolution operation yields one convolution result; and
fuse the multiple convolution results to obtain a fused feature, and activate the fused feature to obtain the output data of the i-th target feature extraction.
In an embodiment of this application, the apparatus further includes:
a moving direction determination module, configured to determine the perpendicular bisector of the boundary image and the image edges of the boundary image; take the intersection of the perpendicular bisector with an image edge as the projection pixel point of the self-moving device; determine, from the multiple boundary pixel points in the boundary image, the target boundary pixel point closest to the projection pixel point; and determine the moving direction of the self-moving device from the position of the projection pixel point and the position of the target boundary pixel point.
In an embodiment of this application, the apparatus further includes:
a turning point detection module, configured to detect, during the edge-following movement of the self-moving device, whether the self-moving device has reached a turning point; and
a moving direction adjustment module, configured to adjust the moving direction of the self-moving device according to the currently detected working area when the device reaches the turning point.
In an embodiment of this application, the turning point detection module is specifically configured to:
calculate the area of the currently detected working area; and
when the area of the currently detected working area is less than a preset area threshold, determine that the self-moving device has reached the turning point.
In an embodiment of this application, the moving direction adjustment module is specifically configured to:
divide the currently detected working area into a first working area and a second working area according to a preset dividing line;
when the area of the first working area is larger than the area of the second working area, control the self-moving device to adjust by a first preset angle toward a first direction, where the first direction faces the first working area; and
when the area of the second working area is larger than the area of the first working area, control the self-moving device to adjust by a second preset angle toward a second direction, where the second direction faces the second working area.
In an embodiment of this application, the apparatus further includes:
a detection module, configured to acquire a working map of the self-moving device, where the working map includes the boundaries of each working area; acquire positioning information of the self-moving device; determine the distance from the self-moving device to the boundary according to the positioning information; and when the distance is within a preset distance range, determine that the self-moving device has moved to the designated area in the working area.
The specific details of the edge-following control apparatus provided in the embodiments of this application have been described in detail in the corresponding method embodiments and are not repeated here.
FIG. 7 schematically shows a block diagram of the computer system structure of a self-moving device for implementing an embodiment of this application.
It should be noted that the self-moving device 700 shown in FIG. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of this application.
As shown in FIG. 7, the self-moving device 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. Various programs and data required for system operation are also stored in the RAM 703. The CPU 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read from it can be installed into the storage section 708 as needed.
In particular, according to the embodiments of this application, the process described in each method flowchart can be implemented as a computer software program. For example, the embodiments of this application include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. When the computer program is executed by the central processing unit 701, the various functions defined in the system of this application are executed.
It should be noted that the computer-readable medium shown in the embodiments of this application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code; such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wired, or any suitable combination of the above.
Through the description of the above implementations, those skilled in the art will readily understand that the example implementations described here may be implemented in software, or in software combined with necessary hardware. Therefore, the technical solutions according to the implementations of this application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to perform the method according to the implementations of this application.
Other embodiments of this application will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of this application that follow its general principles and include common knowledge or customary technical means in the art not disclosed in this application.
It should be understood that this application is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of this application is limited only by the appended claims.

Claims (10)

  1. An edge-following control method for a self-moving device, comprising:
    when it is detected that the self-moving device has moved to a designated area, acquiring an environment image;
    performing image segmentation processing on the environment image to obtain a segmented image, wherein the image segmentation processing comprises multiple target feature extraction operations in series, each target feature extraction operation comprises multiple convolution operations in parallel and a fusion operation on the results of the multiple convolution operations, and the segmented image is used to indicate the working area and the non-working area in the environment image;
    extracting a plurality of boundary pixel points between the working area and the non-working area in the segmented image to obtain a boundary image; and
    controlling the self-moving device to move along the edge according to the boundary image.
  2. The edge-following control method for a self-moving device according to claim 1, wherein the i-th target feature extraction operation comprises:
    taking the output data of the (i-1)-th target feature extraction as the input data of the i-th target feature extraction, wherein 2 ≤ i ≤ K, K is a preset number of feature extractions, and the input data of the first target feature extraction is a feature map obtained by performing a convolution operation on the environment image;
    performing multiple parallel convolution operations on the input data of the i-th target feature extraction to obtain multiple convolution results, wherein each convolution operation yields one convolution result; and
    fusing the multiple convolution results to obtain a fused feature, and activating the fused feature to obtain the output data of the i-th target feature extraction.
  3. The edge-following control method for a self-moving device according to claim 1, wherein before the controlling the self-moving device to move along the edge according to the boundary image, the method further comprises:
    determining the perpendicular bisector of the boundary image and the image edges of the boundary image;
    taking the intersection of the perpendicular bisector with an image edge as the projection pixel point of the self-moving device;
    determining, from the plurality of boundary pixel points in the boundary image, the target boundary pixel point closest to the projection pixel point; and
    determining the moving direction of the self-moving device according to the position of the projection pixel point and the position of the target boundary pixel point.
  4. The edge-following control method for a self-moving device according to claim 1 or 3, wherein the method further comprises:
    during the edge-following movement of the self-moving device, detecting whether the self-moving device has reached a turning point; and
    when the self-moving device reaches the turning point, adjusting the moving direction of the self-moving device according to the currently detected working area.
  5. The edge-following control method for a self-moving device according to claim 4, wherein the detecting whether the self-moving device has reached a turning point comprises:
    calculating the area of the currently detected working area; and
    when the area of the currently detected working area is less than a preset area threshold, determining that the self-moving device has reached the turning point.
  6. The edge-following control method for a self-moving device according to claim 4, wherein the adjusting the moving direction of the self-moving device according to the currently detected working area comprises:
    dividing the currently detected working area into a first working area and a second working area according to a preset dividing line;
    when the area of the first working area is larger than the area of the second working area, controlling the self-moving device to adjust by a first preset angle toward a first direction, wherein the first direction is a direction facing the first working area; and
    when the area of the second working area is larger than the area of the first working area, controlling the self-moving device to adjust by a second preset angle toward a second direction, wherein the second direction is a direction facing the second working area.
  7. The edge-following control method for a self-moving device according to claim 1, wherein before the acquiring an environment image when it is detected that the self-moving device has moved to a designated area, the method further comprises:
    acquiring a working map of the self-moving device, wherein the working map comprises the boundaries of each working area;
    acquiring positioning information of the self-moving device;
    determining the distance from the self-moving device to the boundary according to the positioning information; and
    when the distance is within a preset distance range, determining that the self-moving device has moved to the designated area in the working area.
  8. The edge-following control method for a self-moving device according to claim 6, wherein the first preset angle and the second preset angle are both determined according to the angle between the current heading of the self-moving device and the next boundary segment.
  9. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the edge-following control method for a self-moving device according to any one of claims 1 to 8.
  10. A self-moving device, comprising:
    a vehicle body, the vehicle body comprising a body and wheels; and
    a control module, configured to perform the edge-following control method for a self-moving device according to any one of claims 1 to 8.
PCT/CN2022/132388 2022-10-14 2022-11-16 Edge-following control method for self-moving device, medium, and self-moving device WO2024077708A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211259807.8A 2022-10-14 2022-10-14 Edge-following control method and apparatus for self-moving device, medium, and self-moving device
CN202211259807.8 2022-10-14

Publications (1)

Publication Number Publication Date
WO2024077708A1 (zh)

Family

ID=85065677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132388 WO2024077708A1 (zh) 2022-10-14 2022-11-16 Edge-following control method for self-moving device, medium, and self-moving device

Country Status (2)

Country Link
CN (1) CN115685997A (zh)
WO (1) WO2024077708A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200233413A1 (en) * 2019-01-22 2020-07-23 Honda Research Institute Europe Gmbh Method for generating a representation and system for teaching an autonomous device operating based on such representation
CN111868743A (zh) * 2018-04-19 2020-10-30 苏州宝时得电动工具有限公司 Self-moving device, server, and automatic working system thereof
CN113156924A (zh) * 2020-01-07 2021-07-23 苏州宝时得电动工具有限公司 Control method for self-moving device
CN113296495A (zh) * 2020-02-19 2021-08-24 苏州宝时得电动工具有限公司 Path forming method and apparatus for self-moving device, and automatic working system

Also Published As

Publication number Publication date
CN115685997A (zh) 2023-02-03

Similar Documents

Publication Publication Date Title
CN109242903B (zh) Method, apparatus, device, and storage medium for generating three-dimensional data
US11210534B2 (en) Method for position detection, device, and storage medium
WO2020253010A1 (zh) Parking lot entrance positioning method and apparatus for parking positioning, and vehicle-mounted terminal
US11195064B2 (en) Cross-modal sensor data alignment
WO2014032496A1 (zh) Facial feature point positioning method, apparatus, and storage medium
WO2021196698A1 (zh) Method, apparatus, device, and medium for determining the reserves of an object to be detected
CN112336342B (zh) Hand keypoint detection method, apparatus, and terminal device
WO2019205633A1 (zh) Eye state detection method, electronic device, detection apparatus, and computer-readable storage medium
CN113570631B (zh) Image-based intelligent recognition method and device for pointer-type instruments
US20190066311A1 (en) Object tracking
CN110910445B (zh) Object size detection method, apparatus, detection device, and storage medium
WO2019041569A1 (zh) Moving target marking method and apparatus, and unmanned aerial vehicle
CN113420648B (zh) Rotation-adaptive target detection method and system
CN111881748B (zh) Lane line visual recognition method and system based on VBAI platform modeling
WO2024077708A1 (zh) Edge-following control method for self-moving device, medium, and self-moving device
WO2022205841A1 (zh) Robot navigation method and apparatus, terminal device, and computer-readable storage medium
CN108564626B (zh) Method and apparatus for determining relative attitude angles between cameras mounted on an acquisition entity
JPH07146121A (ja) Method and apparatus for vision-based recognition of three-dimensional position and attitude
CN115063760A (zh) Vehicle drivable area detection method, apparatus, device, and storage medium
CN114740867A (zh) Binocular-vision-based intelligent obstacle avoidance method and apparatus, robot, and medium
CN114463534A (zh) Target keypoint detection method, apparatus, device, and storage medium
CN112435274B (zh) Method for extracting planar ground features from remote sensing images based on object-oriented segmentation
CN111765892A (zh) Positioning method and apparatus, electronic device, and computer-readable storage medium
CN114581890B (zh) Method and apparatus for determining lane lines, electronic device, and storage medium
WO2022183484A1 (zh) Method and apparatus for determining a target detection model