WO2023207610A1 - Mapping method and apparatus, and storage medium and electronic apparatus - Google Patents

Mapping method and apparatus, and storage medium and electronic apparatus

Info

Publication number
WO2023207610A1
WO2023207610A1 (PCT/CN2023/088027)
Authority
WO
WIPO (PCT)
Prior art keywords
target
information
area
sub
obstacle
Prior art date
Application number
PCT/CN2023/088027
Other languages
English (en)
Chinese (zh)
Inventor
韩松杉
曹蒙
Original Assignee
追觅创新科技(苏州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司
Publication of WO2023207610A1

Links

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/28: Floor-scrubbing machines, motor-driven
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A47L11/4008: Arrangements of switches, indicators or the like
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04: Automatic control of the travelling movement; Automatic obstacle detection

Definitions

  • The present invention relates to the field of artificial intelligence, and specifically to a mapping method, device, storage medium and electronic device.
  • Cleaning equipment usually relies only on its own installed sensors to achieve obstacle sensing and simultaneous localization and mapping (SLAM).
  • However, the sensors that come with the cleaning equipment have blind spots, so the information acquired about the work area is incomplete and the obstacle information in the work area cannot be accurately obtained; the constructed map is therefore not accurate enough, and the cleaning operation cannot be performed accurately on the work area. That is, the related art has the problem that cleaning equipment cannot accurately obtain information about the work area, resulting in low cleaning efficiency.
  • Embodiments of the present invention provide a mapping method, device, storage medium and electronic device to at least solve the problem in the related art that cleaning equipment cannot accurately obtain information about the work area, resulting in low cleaning efficiency.
  • A mapping method is provided, applied to a target cleaning device and a target camera device, where the target camera device is set in a cleaning environment and is communicatively connected with the target cleaning device. The method includes: obtaining a target image obtained by the target camera device photographing a target area, where the target image contains the target cleaning device; determining first area information of the target area based on the target image, where the first area information includes first position information of the target cleaning device and second position information of a first obstacle contained in the target area; fusing the first area information with second area information to obtain target area information, where the second area information is area information of a first area, included in the target area, detected by the target cleaning device; and constructing a target map of the target area based on the target area information.
  • Obtaining the target image obtained by the target camera device photographing the target area includes: sending a first collection instruction to the target camera device to instruct the target camera device to collect a first image of the entire target area; and obtaining the first image sent by the target camera device and determining the first image as the target image.
  • Obtaining the target image obtained by the target camera device photographing the target area includes: sending a second collection instruction to the target camera device to instruct the target camera device to collect a second image of a designated area of the target area, where the designated area contains the target cleaning device; and obtaining the second image sent by the target camera device and determining the second image as the target image.
  • Determining the first area information of the target area based on the target image includes: identifying the target image to determine first target information of the target cleaning device and second target information of the first obstacle in the target area, where the first target information at least includes the first position information, and the second target information at least includes the second position information, the type information and the size information of the first obstacle; and determining the first area information based on the first target information and the second target information.
  • Fusing the first area information with the second area information to obtain the target area information includes: obtaining first sub-area information corresponding to the first area in the first area information; comparing the first sub-area information with the second area information to obtain a comparison result, where the comparison result is used to indicate whether the first sub-obstacle corresponding to the first sub-obstacle information in the first sub-area information matches the second sub-obstacle corresponding to the second sub-obstacle information in the second area information; and updating the first area information based on the comparison result to obtain the target area information.
  • Updating the first area information based on the comparison result to obtain the target area information includes: when the comparison result indicates that the first sub-obstacle does not match the second sub-obstacle, updating the first sub-obstacle information in the first area information to the second sub-obstacle information to obtain updated first area information; and determining the updated first area information as the target area information.
  • Comparing the first sub-area information with the second area information to obtain the comparison result includes: comparing the first sub-obstacle information with the second sub-obstacle information; when it is determined that the first sub-obstacle information is inconsistent with the second sub-obstacle information, obtaining a first comparison result; and when it is determined that the first sub-obstacle information is consistent with the second sub-obstacle information, obtaining a second comparison result.
  • Comparing the first sub-area information with the second area information to obtain the comparison result includes: comparing a first feature of the first sub-obstacle with a second feature of the second sub-obstacle, where the first feature is feature data of the first sub-obstacle extracted using a target convolutional neural network, and the second feature is feature data of the second sub-obstacle extracted using the target convolutional neural network; when it is determined that a first similarity between the first feature and the second feature is less than a first similarity threshold, obtaining a third comparison result; and when it is determined that the first similarity is greater than or equal to the first similarity threshold, obtaining a fourth comparison result.
  • Comparing the first sub-area information with the second area information to obtain the comparison result includes: segmenting the area corresponding to the first sub-area information to obtain a first block containing the first sub-obstacle, and segmenting the area corresponding to the second area information to obtain a second block containing the second sub-obstacle; calculating a second similarity between the first sub-obstacle and the second sub-obstacle based on the center coordinates of the first block and the center coordinates of the second block; when it is determined that the second similarity is less than a second similarity threshold, obtaining a fifth comparison result; and when it is determined that the second similarity is greater than or equal to the second similarity threshold, obtaining a sixth comparison result.
  • Optionally, after constructing the target map of the target area based on the target area information, the method further includes: planning a target path based on the target map, and performing a cleaning operation according to the target path.
  • Before obtaining the target image obtained by the target camera device photographing the target area, the method further includes: obtaining information on the local area network where the target cleaning device is located; determining, based on the local area network information, all smart terminals on the same network; obtaining the network identifier of each smart terminal; and determining the target camera device based on the network identifiers.
  • Fusing the first area information with the second area information to obtain the target area information includes: updating the second area information according to predetermined rules to obtain updated second area information; and fusing the first area information with the updated second area information to obtain the target area information.
  • A mapping device is also provided, applied to a target cleaning device and a target camera device, where the target camera device is arranged in a cleaning environment and is communicatively connected with the target cleaning device. The device includes:
  • a first acquisition module, used to acquire the target image obtained by the target camera device photographing the target area, where the target image contains the target cleaning device; a first determination module, used to determine the first area information of the target area based on the target image, where the first area information includes the first position information of the target cleaning device and the second position information of the first obstacle contained in the target area; a fusion module, configured to fuse the first area information with the second area information to obtain the target area information, where the second area information is area information of the first area, included in the target area, detected by the target cleaning device; and a construction module, configured to construct the target map of the target area based on the target area information.
  • A computer-readable storage medium is also provided, which includes a stored program, where, when the program runs, it executes the method described in any of the above embodiments.
  • An electronic device is also provided, including a memory and a processor; a computer program is stored in the memory, and the processor is configured to execute, through the computer program, the method described in any of the above embodiments.
  • Through the above embodiments, the target image obtained by the target camera device photographing the target area is acquired, first area information of the target area is determined based on the target image, and the first area information is then fused with the second area information obtained by the target cleaning device detecting the first area, so that a target map of the target area can be constructed based on the resulting target area information. Since the first area information determined from the target image is fused in, the obtained target area information is more accurate, avoiding the problem in the related art that the work area information obtained by the cleaning device detecting the work area on its own is inaccurate or incomplete.
  • In this way, the accuracy of determining the target area information is improved, and thereby the accuracy of building the target map is improved, effectively solving the problem in the related art that cleaning equipment cannot accurately obtain information about the work area, resulting in low cleaning efficiency, and achieving the effect of improving the cleaning efficiency of cleaning equipment.
  • Figure 1 is a hardware structural block diagram of a mapping method according to an embodiment of the present invention
  • Figure 2 is a flow chart of a mapping method according to an embodiment of the present invention.
  • Figure 3 is an example flow chart according to an embodiment of the present invention.
  • Figure 4 is a structural block diagram of a mapping device according to an embodiment of the present invention.
  • FIG. 1 is a hardware structure block diagram of a mapping method according to an embodiment of the present invention.
  • The mobile device may include one or more processors 102 (only one is shown in Figure 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data.
  • the above-mentioned mobile device may also include a transmission device 106 for communication functions and an input and output device 108.
  • Persons of ordinary skill in the art can understand that the structure shown in FIG. 1 is only illustrative; a mobile device may include more or fewer components than shown in FIG. 1, or have a different configuration with equivalent or additional functionality.
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the mapping method in the embodiment of the present invention.
  • The processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the above method.
  • Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • memory 104 may further include memory located remotely from processor 102, and these remote memories may be connected to the mobile device through a network. Examples of the above-mentioned networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
  • the transmission device 106 is used to receive or send data via a network.
  • Specific examples of the above-mentioned network may include a wireless network provided by a communication provider of the mobile device.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC for short), which can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet wirelessly.
  • a mapping method is provided, which is applied to a target cleaning device and a target camera device.
  • the target camera device is set in a cleaning environment and is communicatively connected with the target cleaning device, as shown in Figure 2.
  • the method includes the following steps:
  • Step S202: Obtain a target image obtained by the target camera device photographing the target area, where the target image contains the target cleaning device;
  • Step S204: Determine first area information of the target area based on the target image, where the first area information includes first position information of the target cleaning device and second position information of the first obstacle contained in the target area;
  • Step S206: Fuse the first area information with second area information to obtain target area information, where the second area information is area information of the first area, included in the target area, detected by the target cleaning device;
  • Step S208: Construct a target map of the target area based on the target area information (a sketch of these four steps follows).
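  • The following Python sketch strings steps S202 to S208 together end to end. It is an illustration only: the function names, data formats, and grid parameters are assumptions of this description, not interfaces defined by the embodiment.

```python
# A minimal end-to-end sketch of steps S202-S208. Every name and interface
# here is an illustrative assumption; the patent does not prescribe them.
import numpy as np

def detect_robot_and_obstacles(image):
    # Stand-in for the S204 image recognition: a real system would run a
    # detector over `image` and return poses in map coordinates (metres).
    return {"robot_xy": (2.0, 3.0),
            "obstacles": [{"xy": (4.0, 5.0), "size": (1.0, 0.5), "type": "chair"}]}

def fuse_area_info(first, second):
    # Stand-in for S206: merge camera-derived obstacles with the ones the
    # robot sensed itself (the matching rules are sketched further below).
    return {"robot_xy": first["robot_xy"],
            "obstacles": first["obstacles"] + second["obstacles"]}

def build_target_map(image, robot_area_info, cells_per_m=10, size_m=20):
    info = fuse_area_info(detect_robot_and_obstacles(image), robot_area_info)  # S202-S206
    grid = np.zeros((size_m * cells_per_m, size_m * cells_per_m), dtype=np.uint8)
    for ob in info["obstacles"]:                                               # S208
        x, y = ob["xy"]; w, h = ob["size"]
        r, c = int(y * cells_per_m), int(x * cells_per_m)
        grid[r:r + max(1, int(h * cells_per_m)), c:c + max(1, int(w * cells_per_m))] = 1
    return grid

# Example: target_map = build_target_map(None, {"obstacles": []})
```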
  • The entity that performs the above operations may be a device, such as the above target cleaning device, or a processor or controller included in the device, or another device with control capabilities, or another processing device or processing unit with similar processing capabilities; the above controller or other execution subject may exist alone, or may be integrated into the target cleaning device.
  • In the following, the controller included in the target cleaning device (hereinafter "the controller") is taken as the example performing the above operations (this is only illustrative; in actual operation, other devices or modules can also perform them):
  • the controller obtains a target image obtained by photographing a target area by a target camera device.
  • the target area may be an area in a home or office waiting to be cleaned.
  • For example, the target area may be a living room area; the target image includes an image of the target cleaning device.
  • The target cleaning device and the target camera device can be connected to the same local area network: the target camera device can access it through methods including but not limited to a wired connection and Wi-Fi, and the target cleaning device through methods including but not limited to Wi-Fi and Bluetooth, so that when the target cleaning device appears in the shooting frame of the target camera device, an image of the target area can be captured. The target camera device can capture an image that includes the entire target area, which can be understood as a panoramic image of the target area.
  • The target camera device can be installed on the ceiling of the target area (such as the living room), or it can be a camera device that can move in all directions. The target camera device can capture the target image of the target area and then send it to the target cleaning device through the network; in practical applications, the target cleaning device can also send an instruction to the target camera device to instruct it to collect the target image of the target area. The controller then determines the first area information of the target area based on the target image.
  • The first area information includes the first position information of the target cleaning device and the second position information of the first obstacle contained in the target area; the first obstacle may include multiple obstacles.
  • The controller can obtain the first area information by identifying the target image; the first area information may also include type information and/or size information of the first obstacle. The first area information is then fused with the second area information to obtain the target area information, where the second area information is area information of the first area, included in the target area, detected by the target cleaning device.
  • The target cleaning device can obtain the area information of the first area through its own sensors. Since the sensors carried by the cleaning device are generally located directly in front of it, there are blind areas on both sides and at the rear; the target area information obtained by fusing the first area information with the second area information is therefore more accurate, and the target map of the target area constructed from the target area information is more accurate as well. Furthermore, the cleaning path of the target cleaning device can be planned based on the target map, which effectively solves the problem in the related art that cleaning equipment cannot accurately obtain information about the work area, resulting in low cleaning efficiency, and achieves the effect of improving the cleaning efficiency of the cleaning device.
  • In an optional embodiment, obtaining the target image obtained by the target camera device photographing the target area includes: sending a first collection instruction to the target camera device to instruct it to collect a first image of the entire target area; and obtaining the first image sent by the target camera device and determining the first image as the target image. It can be understood that the first image is a panoramic image of the entire target area.
  • In an optional embodiment, obtaining the target image obtained by the target camera device photographing the target area includes: sending a second collection instruction to the target camera device to instruct it to collect a second image of a designated area of the target area, where the designated area is a preset range centered on the target cleaning device. For example, the designated area can be a circular or square area centered on the target cleaning device, or an area of another shape; this is not limited here. In practical applications it may be necessary to clean only the designated area; the second image sent by the target camera device is then obtained and determined as the target image, which improves the efficiency and accuracy of cleaning and improves the user experience.
  • In this way, the target image of the target area can be obtained through different methods; a sketch of extracting such a designated area from a camera frame is given below.
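  • As an illustration of the designated-area variant, the sketch below crops a square region centered on the cleaning device's pixel position out of a larger frame; the radius parameter and pixel-coordinate input are assumptions.

```python
# Hedged sketch: cropping a designated area centered on the cleaning device
# out of a larger camera frame. The pixel radius is an assumed parameter.
import numpy as np

def crop_designated_area(frame: np.ndarray, robot_px: tuple,
                         radius_px: int = 120) -> np.ndarray:
    """Return a square sub-image of `frame` centered on the robot's pixel
    position, clamped to the frame borders."""
    h, w = frame.shape[:2]
    cx, cy = robot_px
    x0, x1 = max(0, cx - radius_px), min(w, cx + radius_px)
    y0, y1 = max(0, cy - radius_px), min(h, cy + radius_px)
    return frame[y0:y1, x0:x1]
```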
  • In an optional embodiment, determining the first area information of the target area based on the target image includes: identifying the target image to determine the first target information of the target cleaning device and the second target information of the first obstacle contained in the target area, where the first target information at least includes the first position information, and the second target information at least includes the second position information, the type information and the size information of the first obstacle; and determining the first area information based on the first target information and the second target information.
  • In this embodiment, the first target information of the target cleaning device and the second target information of the first obstacle included in the target area are determined by identifying the target image. For example, image recognition (such as OCR technology) and/or AI image processing can be used to identify the target cleaning device itself, the ground area, and the various obstacles in the target area included in the target image, and to calculate the positional relationship between the target cleaning device and all the obstacles; the first area information is then determined based on the first target information and the second target information.
  • In this way, the first area information is determined by identifying the target image; a sketch of assembling it from detection results follows.
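  • A minimal sketch of assembling the first area information from detection results follows. The detection record format and the metres-per-pixel scale are assumptions; a real system would calibrate the scale, for example from the camera's mounting height.

```python
# Hedged sketch of building the first area information from detections.
import math

def first_area_info(detections, robot_det, m_per_px=0.01):
    robot_xy = (robot_det["cx"] * m_per_px, robot_det["cy"] * m_per_px)
    obstacles = []
    for det in detections:
        x, y = det["cx"] * m_per_px, det["cy"] * m_per_px
        obstacles.append({
            "position": (x, y),                                  # second position info
            "type": det["label"],                                # type info
            "size": (det["w"] * m_per_px, det["h"] * m_per_px),  # size info
            "distance": math.hypot(x - robot_xy[0], y - robot_xy[1]),
            "angle": math.degrees(math.atan2(y - robot_xy[1], x - robot_xy[0])),
        })
    return {"robot_position": robot_xy, "obstacles": obstacles}
```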
  • In an optional embodiment, fusing the first area information with the second area information to obtain the target area information includes: obtaining the first sub-area information corresponding to the first area in the first area information; comparing the first sub-area information with the second area information to obtain a comparison result, where the comparison result indicates whether the first sub-obstacle corresponding to the first sub-obstacle information in the first sub-area information matches the second sub-obstacle corresponding to the second sub-obstacle information in the second area information; and updating the first area information based on the comparison result to obtain the target area information.
  • In this embodiment, the comparison result is obtained by comparing the first sub-area information, corresponding to the first area within the first area information, with the second area information (the first area information covers the entire target area, while the second area information covers only the first area). The matching process may include matching the location, size, and other attributes of obstacles. The first area information can then be updated based on the comparison result to obtain the final target area information: for example, when the first sub-area information is inconsistent with the second area information, that is, when the first sub-obstacle included in the first sub-area information does not match the second sub-obstacle included in the second area information, the first sub-area information can be updated based on the second area information, that is, the first area information is updated, thereby obtaining the target area information and further improving the accuracy of determining it.
  • In this way, fusing the first area information with the second area information further improves the accuracy of determining the target area information.
  • In an optional embodiment, updating the first area information based on the comparison result to obtain the target area information includes: when the comparison result indicates that the first sub-obstacle does not match the second sub-obstacle, updating the first sub-obstacle information in the first area information to the second sub-obstacle information to obtain updated first area information; and determining the updated first area information as the target area information. That is, the first sub-obstacle information included in the first area information may be overwritten with the second sub-obstacle information to obtain the updated first area information.
  • In an optional embodiment, comparing the first sub-area information with the second area information to obtain the comparison result includes: comparing the first sub-obstacle information with the second sub-obstacle information; when it is determined that the first sub-obstacle information is inconsistent with the second sub-obstacle information, obtaining a first comparison result; and when it is determined that the first sub-obstacle information is consistent with the second sub-obstacle information, obtaining a second comparison result.
  • In this embodiment, the comparison result is obtained by comparing the first sub-obstacle information with the second sub-obstacle information: when the first sub-obstacle information is inconsistent with the second sub-obstacle information, the first sub-obstacle and the second sub-obstacle are considered not to match, and when the information is consistent, they are considered to match. A sketch of such a consistency test follows.
  • In an optional embodiment, comparing the first sub-area information with the second area information to obtain the comparison result includes: comparing the first feature of the first sub-obstacle with the second feature of the second sub-obstacle, where the first feature is the feature data of the first sub-obstacle extracted using a target convolutional neural network, and the second feature is the feature data of the second sub-obstacle extracted using the target convolutional neural network; when it is determined that the first similarity between the first feature and the second feature is less than the first similarity threshold, obtaining a third comparison result; and when it is determined that the first similarity is greater than or equal to the first similarity threshold, obtaining a fourth comparison result.
  • In this embodiment, the comparison result is obtained by comparing the first feature of the first sub-obstacle with the second feature of the second sub-obstacle, where the two features are feature data obtained by extracting features of the first sub-obstacle and the second sub-obstacle, respectively, with a convolutional neural network. For example, a first similarity value between the first feature and the second feature is calculated: when the similarity value is less than the first similarity threshold (such as 90%, 85%, or another value), the first sub-obstacle and the second sub-obstacle are considered not to match, and when it is greater than or equal to the first similarity threshold, they are considered to match. A sketch of this test follows.
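  • A minimal sketch of the feature test, assuming the feature vectors have already been extracted by a convolutional network; cosine similarity is used here as one common choice, not as the embodiment's prescribed measure.

```python
# Hedged sketch of the feature-similarity test: compare CNN feature vectors
# of the two candidate obstacles against the first similarity threshold.
import numpy as np

def features_match(feat_a: np.ndarray, feat_b: np.ndarray,
                   threshold: float = 0.90) -> bool:
    """True if the first similarity is >= the first similarity threshold."""
    sim = float(np.dot(feat_a, feat_b) /
                (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-12))
    return sim >= threshold  # below threshold -> obstacles do not match
```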
  • In an optional embodiment, comparing the first sub-area information with the second area information to obtain the comparison result includes: segmenting the area corresponding to the first sub-area information to obtain the first block containing the first sub-obstacle, and segmenting the area corresponding to the second area information to obtain the second block containing the second sub-obstacle; calculating a second similarity between the first sub-obstacle and the second sub-obstacle based on the center coordinates of the first block and the center coordinates of the second block; when it is determined that the second similarity is less than the second similarity threshold, obtaining a fifth comparison result; and when it is determined that the second similarity is greater than or equal to the second similarity threshold, obtaining a sixth comparison result.
  • In this embodiment, the first block and the second block are obtained by segmenting the areas corresponding to the first sub-area information and the second area information respectively, and a second similarity value between the first sub-obstacle and the second sub-obstacle is calculated based on the center coordinates of the first block and the center coordinates of the second block: when the similarity value is less than the second similarity threshold (such as 95%, 90%, or another value), the two are considered not to match, and when it is greater than or equal to the second similarity threshold, they are considered to match.
  • The above first sub-obstacle may include one or more obstacles, and similarly the second sub-obstacle may include one or more obstacles. Through the above embodiments, whether the first sub-obstacle corresponding to the first sub-obstacle information in the first area information matches the second sub-obstacle corresponding to the second sub-obstacle information in the second area information can be determined in a variety of ways, so that the first area information can be updated and the target area information determined; a sketch of the block-center variant follows.
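  • A minimal sketch of the block-center variant follows. Mapping distance to a similarity with exp(-d) is an assumption of this sketch; the embodiment only requires a similarity that falls as the Euclidean or Manhattan distance between the block centers grows.

```python
# Hedged sketch of the block-center comparison: the distance between the
# two blocks' center coordinates is turned into a similarity in [0, 1].
import math

def center_similarity(center_a, center_b, manhattan=False):
    dx, dy = center_a[0] - center_b[0], center_a[1] - center_b[1]
    d = abs(dx) + abs(dy) if manhattan else math.hypot(dx, dy)
    return math.exp(-d)  # identical centers -> 1.0, far apart -> ~0.0

def blocks_match(center_a, center_b, threshold=0.90):
    # Compare against the second similarity threshold.
    return center_similarity(center_a, center_b) >= threshold
```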
  • In an optional embodiment, after constructing the target map of the target area based on the target area information, the method further includes: planning a target path based on the target map, and performing a cleaning operation according to the target path.
  • In this embodiment, the target cleaning device or the controller can plan the target path based on the target map and control the target cleaning device to perform cleaning operations according to the target path; a sketch of one simple coverage planner follows.
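  • The embodiment does not prescribe a planner; as one common choice for coverage cleaning, the sketch below walks the free cells of the target map in a boustrophedon (back-and-forth) order.

```python
# Hedged sketch of planning a cleaning path over the target map.
import numpy as np

def coverage_path(grid: np.ndarray):
    """Yield free (row, col) cells in a serpentine order over the map."""
    rows, cols = grid.shape
    for r in range(rows):
        rng = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in rng:
            if grid[r, c] == 0:   # 0 = free space, 1 = obstacle
                yield (r, c)

# Usage: path = list(coverage_path(target_map))
```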
  • In an optional embodiment, before obtaining the target image obtained by the target camera device photographing the target area, the method further includes: obtaining information on the local area network where the target cleaning device is located; determining, based on the local area network information, all smart terminals on the same network; obtaining the network identifier of each smart terminal; and determining the target camera device based on the network identifiers.
  • In this embodiment, all the smart terminals in the same local area network as the target cleaning device can be determined by obtaining the information of the local area network where the target cleaning device is located. The network identifier of each smart terminal, such as a unique terminal ID, can also be obtained and used to identify the type of smart terminal device, after which the target camera device is determined based on the network identifiers; a sketch follows.
  • In an optional embodiment, fusing the first area information with the second area information to obtain the target area information includes: updating the second area information according to predetermined rules to obtain updated second area information; and fusing the first area information with the updated second area information to obtain the target area information.
  • In this embodiment, the second area information can be updated according to predetermined rules. For example, in practical applications, as the target cleaning device rotates or moves, the scanning angle of its own sensors also changes; the second area information obtained by the target cleaning device can therefore be updated regularly or in real time and then fused with the first area information, so that the target area information is updated regularly or in real time. This further improves the accuracy of determining the target area information and improves the cleaning efficiency of the cleaning device.
  • FIG 3 is an example flow chart according to an embodiment of the present invention. As shown in Figure 3, the process includes the following steps:
  • S302: The camera (corresponding to the aforementioned target camera device) and the sweeper (corresponding to the aforementioned target cleaning device) are connected to the same local area network: the surveillance camera accesses it through methods including but not limited to a wired connection and Wi-Fi, and the sweeping robot through methods including but not limited to Wi-Fi and Bluetooth.
  • The sweeper can obtain information on the devices connected to the same local area network; the device information may include device type, identifier, identification code, etc. The image collection device (such as a home camera) is then determined based on the device information, thereby excluding devices such as mobile phones, washing machines, and refrigerators.
  • The router can serve as the intermediary for signal and data transmission, or the sweeper can establish a direct connection with the image collection device over a second local area network (such as Bluetooth or Wi-Fi), thereby realizing direct transmission of signals and data.
  • S304: The camera collects images and transmits the image (corresponding to the aforementioned target image) to the sweeper. When the sweeping robot appears in the surveillance camera's frame, the pictures taken by the camera are transmitted to the sweeping robot through the network.
  • The sweeper sends a first image collection instruction (corresponding to the aforementioned first collection instruction; it can be understood as requesting a panoramic image) to the image collection devices (the camera can move in all directions). This is done to determine within which image collection devices' fields of view the sweeper lies; for example, if the sweeper is in the living room, the panoramic image from the bedroom camera will not contain the sweeper. The sweeper can thus determine the target image collection device based on the obtained first images and analyze that device's panoramic image.
  • The sweeper sends a second image collection instruction (corresponding to the aforementioned second collection instruction) to the target image collection device, collects a second image centered on the sweeper and covering a preset range, and analyzes the second image. The effect is that the large performance cost of analyzing panoramic images is avoided; analyzing partial images reduces software and hardware requirements.
  • S306: Recognize the image to determine the positional relationship between the sweeper and obstacles. Through image recognition or AI image processing, the sweeper itself, the ground and other passable areas in the camera picture, and the various obstacles within the camera's field of view are identified, and the relative positions of the sweeper and all obstacles are calculated.
  • In step S306, the picture collected by the camera (corresponding to the aforementioned target image) is identified, and the type, size (such as outline, length, width, and height) and relative positional relationship (distance and angle with respect to the sweeper) of the obstacles in the picture are determined. For example, obstacle types are identified through an image recognition model, the size and relative position of each obstacle are determined from the position of the camera in the room (such as the relationship between the camera's height and the image scale), and a first obstacle map is generated based on the recognition results; a sketch of this back-projection follows.
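  • A minimal sketch of that back-projection for a ceiling camera looking straight down, using a pinhole model; all calibration numbers are placeholders, not values from the embodiment.

```python
# Hedged sketch: estimate an obstacle's floor position from a ceiling camera
# using its mounting height and pinhole intrinsics (fx, fy, cx, cy).
def pixel_to_floor(u, v, cam_height_m=2.5, fx=800.0, fy=800.0,
                   cx=640.0, cy=360.0):
    """Back-project pixel (u, v) to floor-plane coordinates (x, y) in metres,
    for a camera looking straight down from cam_height_m."""
    x = (u - cx) / fx * cam_height_m
    y = (v - cy) / fy * cam_height_m
    return x, y
```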
  • S308: Fuse the above positional relationships with the positioning information calculated or stored by the sweeper through its local sensors to obtain an updated obstacle map. The relative positional relationship between the sweeper and reference obstacles (such as walls and furniture) in the picture is fused with the positioning information calculated or stored by the sweeper through its local sensors to improve positioning accuracy.
  • The sweeper obtains a second obstacle map through its own sensors, and fuses the first obstacle map with the second obstacle map (equivalent to the aforementioned fusion of the first area information with the second area information) to obtain the final obstacle map (corresponding to the aforementioned target map).
  • The fusion process can be as follows: determine the scope of the second obstacle map (which may be a sector-shaped area centered on the sweeper; determine its angle and boundary and the information on the obstacles it contains); match it against the corresponding part of the first obstacle map, where the matching process may include matching in terms of obstacle location, size, etc.; for the areas (i.e. obstacles) that match consistently, the first obstacle map is kept as-is, while for the inconsistent parts, the obstacles in the first obstacle map are replaced with the obstacles in the second obstacle map, thereby obtaining the updated first obstacle map, which is the final obstacle information. A sketch of this rule follows.
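  • A minimal sketch of this replacement rule; `matches` can be any of the comparison tests above, and `pair` stands in for finding the sensor-derived obstacle corresponding to a camera-derived one (or None when the sensor's sector does not cover it). Both callables are assumptions of this sketch.

```python
# Hedged sketch of the S308 fusion rule: inconsistent entries in the
# camera-derived map are replaced by the sweeper's own sensor readings.
def fuse_obstacle_maps(first_map, second_map, matches, pair):
    fused = []
    for ob1 in first_map:
        ob2 = pair(ob1, second_map)   # e.g. nearest center within the sector
        if ob2 is not None and not matches(ob1, ob2):
            fused.append(ob2)         # inconsistent: trust the local sensor
        else:
            fused.append(ob1)         # consistent, or outside the sensor range
    return fused
```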
  • The above matching process can also be: extract the feature data of the first obstacle and the second obstacle (for example, through a convolutional neural network); calculate the similarity of the two sets of feature data; if the similarity is greater than the threshold (corresponding to the aforementioned first similarity threshold), the match is consistent, otherwise it is inconsistent.
  • Alternatively, each group of obstacles is segmented according to the same rules to obtain different blocks; different obstacle types can correspond to different segmentation rules, which improves segmentation accuracy. The similarity between each group of obstacles is then calculated from the centers of the blocks (for example, using the Euclidean or Manhattan distance); if the similarity is greater than the threshold (corresponding to the aforementioned second similarity threshold), the match is consistent, otherwise it is inconsistent.
  • S310 Perform path planning based on the updated obstacle map.
  • The obstacle-sensing sensors of the sweeper (ToF, line laser, camera) are usually located directly in front of the machine, with blind areas on both sides and at the rear. By integrating the positional relationship between the sweeper and the obstacles from step S306, the AI classification of obstacles, and the passable areas on the ground, the obstacle information in the sweeper's own blind spots can be compensated for, the perception range and accuracy for surrounding obstacles improved, and a more accurate and complete obstacle map established.
  • This avoids the problem in the related art that the sweeper relies only on its own sensors for obstacle sensing and SLAM positioning, where the sensors' blind areas lead to incomplete obstacle information and positioning errors.
  • In this way, the sweeper obtains the panoramic image collected by the camera through the network and analyzes it to generate a first obstacle map, then fuses the first obstacle map with the second obstacle map obtained through its own sensors to obtain the final target obstacle map, improving the completeness and reliability of the sweeper's mapping.
  • The method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product: the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which can be a mobile phone, computer, server, network device, etc.) to execute the methods described in the embodiments of the present invention.
  • In this embodiment, a mapping device is also provided, applied to the target cleaning device and the target camera device, where the target camera device is set in a cleaning environment and is communicatively connected with the target cleaning device. The device is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated.
  • the term "module” may be a combination of software and/or hardware that implements a predetermined function.
  • the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
  • Figure 4 is a structural block diagram of a mapping device according to an embodiment of the present invention. As shown in Figure 4, the device includes:
  • the first acquisition module 402 is used to acquire a target image obtained by photographing a target area by the target imaging device, wherein the target image includes the target cleaning device;
  • the first determination module 404, configured to determine the first area information of the target area based on the target image, where the first area information includes the first position information of the target cleaning device and the second position information of the first obstacle contained in the target area;
  • the fusion module 406, configured to fuse the first area information with the second area information to obtain the target area information, where the second area information is area information of the first area, included in the target area, detected by the target cleaning device;
  • a construction module 408 is used to construct a target map of the target area based on the target area information.
  • the above-mentioned acquisition module 402 includes:
  • a first collection unit, used to send a first collection instruction to the target camera device to instruct the target camera device to collect the first image of the entire target area; and a first acquisition unit, used to acquire the first image sent by the target camera device and determine the first image as the target image.
  • the above-mentioned acquisition module 402 includes:
  • a second collection unit, configured to send a second collection instruction to the target camera device to instruct the target camera device to collect a second image of a designated area of the target area, where the designated area contains the target cleaning device; and a second acquisition unit, configured to acquire the second image sent by the target camera device and determine the second image as the target image.
  • the above-mentioned determining module 404 includes:
  • an identification unit, configured to identify the target image to determine the first target information of the target cleaning device and the second target information of the first obstacle in the target area, where the first target information at least includes the first position information, and the second target information at least includes the second position information, the type information and the size information of the first obstacle;
  • a first determining unit configured to determine the first area information based on the first target information and the second target information.
  • the above-mentioned fusion module 406 includes:
  • a third acquisition unit configured to acquire the first sub-region information corresponding to the first region in the first region information
  • a comparison unit, configured to compare the first sub-area information with the second area information to obtain a comparison result, where the comparison result is used to indicate whether the first sub-obstacle corresponding to the first sub-obstacle information in the first sub-area information matches the second sub-obstacle corresponding to the second sub-obstacle information in the second area information;
  • a first update unit is configured to update the first area information based on the comparison result to obtain the target area information.
  • the above-mentioned first update unit includes:
  • an update subunit, configured to, when the comparison result indicates that the first sub-obstacle does not match the second sub-obstacle, update the first sub-obstacle information in the first area information to the second sub-obstacle information to obtain updated first area information;
  • a determining subunit configured to determine the updated first area information as the target area information.
  • the above comparison unit includes:
  • a first comparison subunit, used to compare the first sub-obstacle information with the second sub-obstacle information; a first obtaining subunit, used to obtain the first comparison result when it is determined that the first sub-obstacle information is inconsistent with the second sub-obstacle information; and a second obtaining subunit, used to obtain the second comparison result when it is determined that the first sub-obstacle information is consistent with the second sub-obstacle information.
  • the above comparison unit includes:
  • a second comparison subunit, used to compare the first feature of the first sub-obstacle with the second feature of the second sub-obstacle, where the first feature is the feature data of the first sub-obstacle extracted using the target convolutional neural network, and the second feature is the feature data of the second sub-obstacle extracted using the target convolutional neural network; a third obtaining subunit, used to obtain a third comparison result when it is determined that the first similarity between the first feature and the second feature is less than the first similarity threshold; and a fourth obtaining subunit, used to obtain a fourth comparison result when it is determined that the first similarity between the first feature and the second feature is greater than or equal to the first similarity threshold.
  • the above comparison unit includes:
  • a segmentation subunit, used to segment the area corresponding to the first sub-area information to obtain the first block containing the first sub-obstacle, and to segment the area corresponding to the second area information to obtain the second block containing the second sub-obstacle; and a calculation subunit, used to calculate the second similarity between the first sub-obstacle and the second sub-obstacle based on the center coordinates of the first block and the center coordinates of the second block.
  • the above device further includes:
  • a planning module configured to plan a target path based on the target map after constructing a target map of the target area based on the target area information, and perform cleaning operations according to the target path.
  • the above device further includes:
  • a second acquisition module, used to acquire the information of the local area network where the target cleaning device is located, before the target image obtained by the target camera device photographing the target area is acquired;
  • a second determination module, used to determine, based on the local area network information, all smart terminals on the same network;
  • the third acquisition module is used to obtain the network identifier of each smart terminal
  • a third determination module configured to determine the target camera device based on the network identifier.
  • the above-mentioned fusion module 406 also includes:
  • a second update unit configured to update the second area information according to predetermined rules to obtain updated second area information
  • a fusion unit configured to fuse the first area information with the updated second area information to obtain the target area information.
  • Each of the above modules can be implemented through software or hardware. For the latter, this can be done in, but is not limited to, the following ways: the above modules are all located in the same processor; or the above modules, in any combination, are located in different processors.
  • Embodiments of the present invention also provide a computer-readable storage medium that stores a computer program, wherein the computer program is configured to execute the steps in any of the above method embodiments when running.
  • Optionally, the above computer-readable storage medium may be configured to store a computer program for performing the following steps: S1, obtaining a target image obtained by the target camera device photographing the target area, where the target image contains the target cleaning device; S2, determining first area information of the target area based on the target image, where the first area information includes the first position information of the target cleaning device and the second position information of the first obstacle contained in the target area; S3, fusing the first area information with second area information to obtain target area information, where the second area information is area information of the first area, included in the target area, detected by the target cleaning device; and S4, constructing a target map of the target area based on the target area information.
  • In this embodiment, the computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, an optical disc, or other media that can store computer programs.
  • An embodiment of the present invention also provides an electronic device, including a memory and a processor.
  • a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • Optionally, the above processor may be configured to perform the following steps through a computer program: S1, obtaining a target image obtained by the target camera device photographing the target area, where the target image contains the target cleaning device; S2, determining first area information of the target area based on the target image, where the first area information includes the first position information of the target cleaning device and the second position information of the first obstacle contained in the target area; S3, fusing the first area information with second area information to obtain target area information, where the second area information is area information of the first area, included in the target area, detected by the target cleaning device; and S4, constructing a target map of the target area based on the target area information.
  • Through the above embodiments, the obstacle information in the device's own blind spots can be compensated for, and the perception range and accuracy for surrounding obstacles improved, so as to establish a more accurate and complete obstacle map and solve the problem in the related art that, due to the limited sensors of the cleaning equipment and the existence of blind spots, obstacle sensing and SLAM positioning are subject to errors.
  • The modules or steps of the present invention described above can be implemented with general-purpose computing devices: they can be concentrated on a single computing device or distributed across a network composed of multiple computing devices; they may be implemented in program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps may be executed in a sequence different from that shown or described herein; alternatively, they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. As such, the invention is not limited to any specific combination of hardware and software.

Abstract

Mapping method and apparatus, and storage medium and electronic apparatus. The mapping method includes: acquiring a target image obtained by a target photographing device photographing a target area, the target image containing a target cleaning device (S202); determining first area information of the target area based on the target image, the first area information comprising first position information of the target cleaning device and second position information of a first obstacle (S204); fusing the first area information with second area information to obtain target area information, the second area information being area information of a first area, included in the target area, detected by the target cleaning device (S206); and constructing a target map of the target area based on the target area information (S208). The problem of low cleaning efficiency due to a cleaning device being unable to accurately acquire information about a working area is effectively solved, thereby achieving the effect of improving the cleaning efficiency of the cleaning device.
PCT/CN2023/088027 2022-04-25 2023-04-13 Mapping method and apparatus, and storage medium and electronic apparatus WO2023207610A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210441631.1A CN116982884A (zh) 2022-04-25 2022-04-25 Mapping method and apparatus, storage medium and electronic apparatus
CN202210441631.1 2022-04-25

Publications (1)

Publication Number Publication Date
WO2023207610A1 true WO2023207610A1 (fr) 2023-11-02

Family

ID=88517465

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/088027 WO2023207610A1 (fr) 2022-04-25 2023-04-13 Procédé et appareil de cartographie, et support de stockage et appareil électronique

Country Status (2)

Country Link
CN (1) CN116982884A (fr)
WO (1) WO2023207610A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006113858A (ja) * 2004-10-15 2006-04-27 Mitsubishi Heavy Ind Ltd Remote operation support method and system for a mobile body
CN111723619A (zh) * 2019-03-21 2020-09-29 安克创新科技股份有限公司 Method and apparatus for determining movement information, storage medium and electronic apparatus
WO2021146862A1 (fr) * 2020-01-20 2021-07-29 珊口(深圳)智能科技有限公司 Indoor positioning method for a mobile device, mobile device and control system
CN113662476A (zh) * 2020-05-14 2021-11-19 杭州萤石软件有限公司 Method and system for improving the cleaning coverage of a mobile cleaning robot
CN113670292A (zh) * 2021-08-10 2021-11-19 追觅创新科技(苏州)有限公司 Map drawing method and apparatus, sweeper, storage medium and electronic apparatus

Also Published As

Publication number Publication date
CN116982884A (zh) 2023-11-03

Similar Documents

Publication Publication Date Title
WO2023016188A1 (fr) Procédé et appareil de tracé de cartes, balayeuse de sol, support de stockage, et appareil électronique
CN110268225B (zh) 一种多设备之间协同操作的方法、服务端及电子设备
CN111328017B (zh) 一种地图传输方法和装置
WO2023066078A1 (fr) Procédé et dispositif de correction de carte de grille, support de stockage et dispositif électronique
KR101753361B1 (ko) 청소 로봇을 이용한 스마트 청소 시스템
CN110134117B (zh) 一种移动机器人重定位方法、移动机器人及电子设备
US10437251B2 (en) Method for specifying position, terminal device, autonomous device, and program
WO2019232804A1 (fr) Procédé et système de mise à jour de logiciel, et robot mobile et serveur
CN111679661A (zh) 基于深度相机的语义地图构建方法及扫地机器人
CN112075879A (zh) 一种信息处理方法、装置及存储介质
WO2023005377A1 (fr) Procédé de construction de carte pour robot, et robot
WO2020010841A1 (fr) Procédé et dispositif de positionnement d'aspirateur autonome utilisant un étalonnage de gyroscope basé sur une détection de fermeture de boucle visuelle
WO2021208015A1 (fr) Procédé de construction et de positionnement de carte, client, robot mobile et support de stockage
CN111679664A (zh) 基于深度相机的三维地图构建方法及扫地机器人
CN113475977A (zh) 机器人路径规划方法、装置及机器人
CN113520246B (zh) 移动机器人补偿清洁方法及系统
CN112748721A (zh) 视觉机器人及其清洁控制方法、系统和芯片
CN110597081A (zh) 基于智能家居操作系统的控制指令的发送方法及装置
WO2023207610A1 (fr) Procédé et appareil de cartographie, et support de stockage et appareil électronique
CN113536820B (zh) 位置识别方法、装置以及电子设备
CN112286185A (zh) 扫地机器人及其三维建图方法、系统及计算机可读存储介质
CN114935341A (zh) 一种新型slam导航计算视频识别方法及装置
CN110177256B (zh) 一种追踪视频数据获取方法和装置
CN113516715A (zh) 目标区域录入方法、装置、存储介质、芯片及机器人
JP2021119802A (ja) 清掃制御方法

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23795043

Country of ref document: EP

Kind code of ref document: A1