WO2023050637A1 - Garbage detection - Google Patents

Garbage detection

Info

Publication number
WO2023050637A1
WO2023050637A1 · PCT/CN2022/070517 · CN2022070517W
Authority
WO
WIPO (PCT)
Prior art keywords
garbage
area
detection
preset
determined
Prior art date
Application number
PCT/CN2022/070517
Other languages
English (en)
Chinese (zh)
Inventor
黄超
郑伟伟
姚为龙
Original Assignee
上海仙途智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海仙途智能科技有限公司
Publication of WO2023050637A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods

Definitions

  • the embodiments of this specification relate to the field of object detection, and in particular to methods and devices for garbage detection.
  • For garbage detection, an image of the area to be cleaned is usually captured first, and garbage detection technology is then used to identify the garbage in the captured image, which can be marked with a garbage detection frame.
  • Although current garbage detection technology can detect the garbage in an image, it is difficult to further determine the specific location of the detected garbage, and the accuracy is low.
  • this specification provides a method and device for garbage detection.
  • the technical scheme is as follows.
  • A garbage detection method, in which an area to be detected is divided in advance into at least two preset areas. The method includes: acquiring a target image taken of the area to be detected; determining the positions of the divided preset areas in the target image; determining a garbage detection result for the target image using a pre-trained garbage detection model; and, according to the determined garbage detection result, determining a preset area where garbage exists as a garbage area.
  • A garbage detection device, for which an area to be detected is divided in advance into at least two preset areas. The device includes: an acquisition unit, used to acquire a target image taken of the area to be detected; a mapping unit, used to determine the positions of the divided preset areas in the target image; a detection unit, used to determine a garbage detection result for the target image using a pre-trained garbage detection model; and a positioning unit, used to determine, according to the determined garbage detection result, a preset area where garbage exists as a garbage area.
  • The above technical solution can determine the positions of the preset areas in the target image, making it easy to determine, from the garbage detection result of the target image, the preset areas where garbage exists, so that the position of the garbage is determined efficiently, quickly and accurately while reducing the consumption of computing resources.
  • Fig. 1 is a schematic flow chart of a garbage detection method provided by the embodiment of this specification
  • Fig. 2 is a schematic diagram of the principle of a preset area mapping method provided by the embodiment of this specification;
  • Fig. 3 is a schematic diagram of the principle of a garbage area determination method provided by the embodiment of this specification.
  • Fig. 4 is a schematic diagram of the principle of cleaning route planning provided by the embodiment of this specification.
  • Fig. 5 is a schematic diagram of the principle of a model structure provided by the embodiment of this specification.
  • Fig. 6 is a schematic diagram of the principle of another cleaning route planning provided by the embodiment of this specification.
  • Fig. 7 is a schematic structural diagram of a garbage detection device provided by an embodiment of this specification.
  • Fig. 8 is a schematic structural diagram of a device for implementing the method of the embodiments of this specification.
  • Object detection is an important perception technique.
  • Obstacles on the driving route can be detected for unmanned vehicles, specifically including other cars, pedestrians, bicycles, motorcycles, etc., and correct driving decisions, such as stopping, avoiding or detouring, can be made based on the detection results.
  • The target of object detection may be an obstacle or another kind of object, for example, garbage.
  • An unmanned sweeper can detect the garbage in the area to be cleaned, so that its driving route can be planned to clean up the detected garbage.
  • For garbage detection, an image of the area to be cleaned is usually captured first, and garbage detection technology is then used to identify the garbage in the captured image, which can be marked with a garbage detection frame.
  • Although current garbage detection technology can detect the garbage in an image, it is difficult to further determine the specific location of the detected garbage, and the accuracy is low.
  • For example, a large scrap of paper in the distance and a small scrap nearby may appear similar in size in the image, so it is difficult to determine the position of the paper from the image alone.
  • the method for locating obstacles is not suitable for locating garbage.
  • Obstacles such as pedestrians and motorcycles are relatively conspicuous; they can be scanned by lidar, and their positions determined from the abundant reflection information.
  • Much garbage, however, is relatively small, such as paper scraps and fallen leaves. Even if its position in the image is detected, a lidar scan yields little reflection information for such items, making them difficult to locate.
  • the embodiment of this specification provides a garbage detection method.
  • The area to be detected can be divided in advance into multiple preset areas. After an image taken of the area to be detected is acquired, the positions of the preset areas in the captured image can be determined, so that after garbage is detected, the preset areas with garbage can be determined as garbage areas.
  • The garbage area is the specific location determined for the garbage. Of course, the same garbage area may contain multiple pieces of garbage. This method does not need to determine the location of each piece individually; it can directly determine a preset area where garbage exists as a garbage area, and the determined garbage area then serves as the location of every piece of garbage contained in it.
  • In this way, the cleaning route can be planned quickly and efficiently, so that the unmanned sweeper can clean along it. It should be noted that the unmanned sweeper needs to clean the garbage area as a whole.
  • The garbage detection method provided by the embodiments of this specification can divide the area to be detected into a plurality of preset areas and determine their positions in the image, so that after garbage is detected, the preset areas in the image where garbage exists can be further determined and the garbage thereby located, with high positioning efficiency, high speed and high accuracy.
  • FIG. 1 is a schematic flowchart of a garbage detection method provided in the embodiments of this specification.
  • the region to be detected can be divided into at least two preset regions in advance.
  • the area to be detected may be an area where garbage detection is required, specifically, it may be a pre-set area. For example, for areas such as parking lots, airport halls, shopping malls, etc., since it is necessary to clean up the garbage on the ground, it is necessary to perform garbage detection first to facilitate cleaning later.
  • In an optional embodiment, the preset areas can be mapped onto the image in which garbage detection is to be performed, so as to determine the position of the detected garbage, that is, the preset area where the garbage is located, and to facilitate subsequent determination of the cleaning route.
  • the method flow does not limit the method of dividing the preset area.
  • the preset area may be divided manually or automatically by the device.
  • the division may be performed according to a certain size standard.
  • The area to be detected can be divided into multiple grids of identical size and shape; the size of each grid can be determined comprehensively according to factors such as image accuracy, garbage detection accuracy and the expected size of the garbage. Of course, it can also simply be set to 1 square meter.
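The grid division described above can be sketched as follows; the function name, the rectangle dimensions, and the edge-clipping behaviour are illustrative assumptions rather than details from the patent, with the 1 m default cell size taken from the 1-square-metre example in the text.

```python
def divide_into_grid(width_m, height_m, cell_m=1.0):
    """Divide a rectangular area to be detected into square preset areas.

    cell_m defaults to 1 m, matching the 1-square-metre example in the
    text; edge cells are clipped so the grid exactly covers the area.
    Returns a list of (x_min, y_min, x_max, y_max) tuples in metres.
    """
    cells = []
    y = 0.0
    while y < height_m:
        x = 0.0
        while x < width_m:
            cells.append((x, y,
                          min(x + cell_m, width_m),
                          min(y + cell_m, height_m)))
            x += cell_m
        y += cell_m
    return cells
```

Because the cells are generated row by row with no overlap, each ground point belongs to exactly one preset area, which is what later lets a detection be assigned to a single garbage area.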
  • Alternatively, the range the unmanned sweeper can clean without moving can be determined first, and the grid size chosen accordingly: making each grid equal to or smaller than that range allows the unmanned sweeper to clean a preset area without moving.
  • the sizes of different preset areas may be the same or different; the shapes of different preset areas may be the same or different.
  • In the case of manually dividing the preset areas, the division can optionally follow special areas within the area to be detected. For example, when the area to be detected is a parking lot, each parking space it contains can be manually designated a preset area, which makes it convenient to later clean an entire parking space directly.
  • An aisle can be divided into grids of the same size and shape; when the area to be detected has height differences, such as a staircase, each step can be manually designated a preset area.
  • the division may also be performed in a manner of combining manual division and automatic division.
  • For example, the floor of each shop can be manually designated a preset area, and the hall and aisles of the shopping mall can then be divided automatically by the device, specifically into multiple grids of a preset size, such as 1 square meter, each grid being a preset area.
  • Since the preset areas are used to determine the location of the garbage, there may be no overlap between different preset areas.
  • the method may include the following steps.
  • S101 Acquire a target image taken for a region to be detected.
  • S102 Determine the position of the divided preset area in the target image.
  • S103 For the target image, determine a garbage detection result by using a pre-trained garbage detection model.
  • S104 Determine a preset area where garbage exists as a garbage area according to the determined garbage detection result.
  • S102 and S103 may be executed in parallel or successively, and this embodiment does not limit the execution sequence between S102 and S103.
  • The above method flow can determine the positions of the preset areas in the target image, making it easy to determine, from the garbage detection result of the target image, the preset areas where garbage exists, so that the position of the garbage is determined efficiently, quickly and accurately while reducing the consumption of computing resources.
  • After the garbage area is determined, staff can be arranged to clean it, or unmanned cleaning equipment can clean it after a cleaning route is determined.
  • the target image in S101 may specifically be any image captured for the region to be detected.
  • The method flow can be applied to any electronic device. Specifically, the electronic device can capture the target image of the area to be detected with its own camera, or another device can capture the target image and transmit it to the electronic device, which then executes the above method flow for garbage detection.
  • the method flow can be applied to unmanned cleaning equipment, specifically, a mobile unmanned cleaning vehicle.
  • the unmanned cleaning equipment can be equipped with a camera device to facilitate garbage detection. Therefore, the unmanned cleaning equipment itself can take pictures of the area to be detected to obtain target images.
  • a high-altitude camera or a drone can take target images of the area to be detected, and then transmit the target images to unmanned cleaning equipment for garbage detection.
  • the UAV can shoot from a top-down perspective.
  • acquiring a target image taken for the area to be detected may specifically include: acquiring a target image taken by other devices for the area to be detected; or acquiring a target image taken by itself for the area to be detected.
  • the captured target image may include all regions to be detected, or some regions to be detected.
  • the position of the garbage can only be determined based on the preset area contained in the target image.
  • an image captured in advance for the region to be detected may be determined as the target image, or an image captured in real time for the region to be detected may be determined as the target image.
  • acquiring a target image taken for the area to be detected may specifically include: acquiring a target image taken in real time for the area to be detected; or acquiring a target image taken in advance for the area to be detected.
  • Optionally, there may be one or more target images.
  • the garbage itself is not necessarily fixed at the same position, and the location of the garbage may change at any time. For example, discarded cans were kicked away, paper scraps and fallen leaves were blown away by the wind, etc.
  • multiple target images may be obtained, so as to perform garbage detection and garbage location based on the multiple target images.
  • Different target images can contain different parts of the area to be detected, so that multiple parts of the area to be detected can be covered, and garbage detection and garbage location can be performed from different angles to improve the accuracy of garbage detection and garbage location.
  • multiple images taken continuously for the region to be detected may be acquired, or multiple images taken for the region to be detected within a preset time period may be acquired.
  • Optionally, acquiring a target image taken of the area to be detected may include: acquiring a plurality of target images taken of the area to be detected; acquiring a plurality of target images taken of the area to be detected within a preset time period; or acquiring a plurality of target images taken continuously of the area to be detected.
  • the process of the method can be executed by acquiring multiple target images, so as to improve the accuracy of garbage detection and garbage location.
  • the flow of the method does not limit the method for specifically determining the position of the preset area in the target image.
  • The position onto which the area to be detected maps in the target image can be determined from the pose of the camera when shooting, and the position onto which each preset area maps in the target image can further be determined from the actual position of that preset area within the area to be detected.
  • determining the position of the divided preset area in the target image may include: determining the position, height and shooting angle of the camera when shooting the target image; determining the position of each preset area in the area to be detected; According to the position, height and shooting angle of the camera device, and the position of each preset area, the position of each preset area in the target image is determined.
  • The actual positions corresponding to the boundary of the captured target image can be determined, so that the positional relationship between those boundary positions and the actual position of the preset area is known; from this positional relationship, the position of the preset area in the target image is determined.
  • the position of the preset area may also be characterized by the positions of multiple points.
  • the position of the preset area may be represented by the positions of four vertices.
  • The position of the camera device in three-dimensional space can be determined from its position and height; the points representing the position of the preset area can then be related to the camera device, and, according to the shooting angle, the positions of those points in the captured target image can be determined, giving the position of the preset area in the target image.
  • FIG. 2 is a schematic diagram of the principle of a preset area mapping method provided by an embodiment of this specification; two preset area mapping methods are shown.
  • the area to be detected may be pre-divided into 4 square preset areas.
  • FIG. 2 only shows the situation of mapping for preset area 1 .
  • The position of preset area 1 is represented by coordinates, specifically (0,0), (0,1), (1,0) and (1,1).
  • a preset area mapping method may be to photograph the area to be detected through the top view angle of the camera device.
  • The position of the camera is (0,0), its height is 2, and the shooting angle is 90 degrees. Therefore, for the captured square target image 1, the actual positions corresponding to its boundary can be determined to be (2,2), (2,-2), (-2,2) and (-2,-2).
  • the corresponding position of the preset area 1 in the target image 1 can be determined according to the positional relationship between the position of the preset area 1 and the actual position of the boundary of the target image 1 .
  • Another preset area mapping method may be to determine the position of the camera in three-dimensional space according to the position and height of the camera. Specifically, the position of the camera in the three-dimensional space may be determined according to the position (0, -2) and the height 1 of the camera. After that, the range mapped by the target image 2 can be determined according to the shooting angle of 90 degrees. Wherein, it can be determined that the actual position (0, -1) falls on one of the boundaries of the target image 2 .
  • determining the position of the preset area in the target image may also be done in other ways.
  • the position of the preset area can be identified by means of features in the area to be detected. Specifically, for a rectangular parking space drawn with a white line, the position of the preset area in the target image can be determined by identifying the position of the white line.
  • the positional relationship between the camera device and the area to be detected can be determined through the relevant information of the camera device, and then the position of the preset area in the target image can be accurately determined, which facilitates subsequent garbage positioning.
  • For example, the position of the camera device in the UTM coordinate system and the position of the preset area in the UTM coordinate system can be determined; from the relationship between these positions, the position of the preset area in the target image can be determined.
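As a minimal sketch of the first (top-down) mapping scheme of FIG. 2, the following maps a ground point into image coordinates from the camera's position, height and shooting angle; the function name, the square-image simplification, and the pixel size are assumptions made for illustration, not details from the patent.

```python
import math

def ground_to_image(pt, cam_xy, cam_h, fov_deg, img_px):
    """Map a ground point (x, y) to pixel coordinates for a camera
    looking straight down from (cam_x, cam_y) at height cam_h, with a
    square field of view of fov_deg degrees imaged at img_px pixels.
    """
    # Half-width of the square ground footprint covered by the image.
    half = cam_h * math.tan(math.radians(fov_deg) / 2)
    u = (pt[0] - (cam_xy[0] - half)) / (2 * half) * img_px
    v = (pt[1] - (cam_xy[1] - half)) / (2 * half) * img_px
    return u, v
```

With the camera at (0,0), height 2 and a 90-degree shooting angle as in the example, the ground footprint corners come out at (±2, ±2), matching the boundary positions given for target image 1; mapping each vertex of a preset area this way yields its position in the target image.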
  • In S103, a garbage detection result is determined for the target image.
  • the pre-trained garbage detection model can be used for detection.
  • Optionally, before the target image is input into the model, preprocessing operations may be performed on it.
  • the preprocessing operations may include operations such as scaling, normalization, and cropping.
  • preprocessing may be performed on the target image; the preprocessing result is input into the pre-trained garbage detection model, and the garbage detection result is determined according to the output of the garbage detection model.
  • the target image can be cropped to retain the preset area in the target image.
  • the amount of data input into the garbage detection model can be reduced, thereby reducing the loss of computing resources and improving the efficiency of garbage detection.
  • the preprocessing may include: cropping the target image, and retaining a preset area in the target image.
  • the preprocessing may also include: scaling the target image.
  • By scaling the target image, its resolution can be reduced and the amount of data input to the garbage detection model decreased, thereby reducing the consumption of computing resources and improving the efficiency of garbage detection.
  • the preprocessing may also include: performing normalization processing on the RGB channel data in the target image, so that it is convenient to input the preprocessing result into the garbage detection model for subsequent detection by the garbage detection model.
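The crop, scale and normalise steps above can be sketched with NumPy as follows; the function name, the nearest-neighbour resize, and the [0, 1] normalisation are illustrative assumptions (a production pipeline would typically use OpenCV or torchvision, possibly with per-channel mean/std normalisation).

```python
import numpy as np

def preprocess(image, crop_box, out_hw=(640, 640)):
    """Crop the target image to the region covering the preset areas,
    resize it by nearest-neighbour sampling, and normalise the RGB
    channel data to [0, 1] for input to the garbage detection model.

    image: H x W x 3 uint8 array; crop_box: (x0, y0, x1, y1) pixels.
    """
    x0, y0, x1, y1 = crop_box
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbour source indices for each output row/column.
    ys = np.linspace(0, crop.shape[0] - 1, out_hw[0]).round().astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_hw[1]).round().astype(int)
    resized = crop[ys][:, xs]
    return resized.astype(np.float32) / 255.0
```

Cropping first keeps only the preset areas, so the data volume fed to the model shrinks exactly as the text describes.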
  • the garbage detection model may specifically be constructed using a deep convolutional neural network.
  • It can include a basic feature network for extracting image features and a detection head for garbage detection.
  • The framework of the garbage detection model may adopt frameworks such as SSD, RetinaNet, CenterNet or YOLO.
  • Specifically, the YOLOv5 framework can be used for construction.
  • Garbage detection involves several specific problems, notably that garbage samples are few and the garbage itself is small. Therefore, based on the characteristics of the garbage detection scene, the garbage detection model can be adapted to the needs of that scene.
  • The garbage detection model itself can perform feature extraction by scaling the input image data, for example with a feature pyramid network.
  • The degree of scaling can be limited so that, when the image data is scaled, loss of the image features corresponding to the garbage is avoided as far as possible, thereby improving the effect of the garbage detection model.
  • Specifically, the scaling ratio of the scaled image data may be limited, and the resolution of the scaled image data may also be limited.
  • The garbage detection model is used to perform garbage detection on at least one scaled copy of the target image, where any scaled copy may be required to have an image resolution greater than a preset resolution, or a scaling ratio greater than a preset scaling ratio.
  • the garbage detection model may include a feature pyramid network.
  • A typical feature pyramid network can include levels 2, 3, 4 and 5 for extracting features through downscaling. Specifically, level 2 can downscale the input image data by a factor of 4, level 3 by a factor of 8, level 4 by a factor of 16, and level 5 by a factor of 32.
  • The feature pyramid network in the garbage detection model may include only levels 2, 3 and 4, so as to limit the downscaling and avoid loss of the image features of garbage.
  • Loss of the image features of the garbage in the target image can be avoided by limiting the scaling ratio or the scaled image resolution, thereby improving the detection effect of the garbage detection model and the accuracy of garbage detection.
  • The number of detection frames containing garbage is usually far smaller than the number of frames without garbage. Therefore, when training the garbage detection model, the number of positive samples (garbage detection frames) in the training sample set is usually much smaller than the number of negative samples (non-garbage detection frames).
  • Focal loss can weight positive and negative samples separately, increasing the weight of the few positive samples and reducing the weight of the many negative samples, so as to improve the training efficiency, training effect and detection accuracy of the garbage detection model.
  • the loss function in the garbage detection model can include focal loss.
  • The focal loss can be used to increase the weight of image samples labeled with garbage detection frames.
  • the weight of positive samples can be increased by introducing focal loss, so as to improve the training efficiency, training effect and garbage detection accuracy of the garbage detection model.
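A per-sample sketch of the focal loss referred to above (the standard binary formulation of Lin et al.); the `alpha` and `gamma` values are the commonly used defaults, assumed here rather than taken from the patent.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one sample.

    p: predicted probability that a detection frame contains garbage.
    y: label, 1 for a positive (garbage) sample, 0 for a negative one.
    The (1 - p_t) ** gamma factor down-weights easy examples (the many
    confidently rejected negatives), while alpha re-balances the scarce
    positives against the abundant negatives.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

An easy, confidently classified negative thus contributes almost nothing to the loss, while a hard positive dominates the gradient, which is how the weight of the scarce garbage samples is effectively increased.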
  • the image samples marked with garbage detection frames are difficult to label, and the acquired number is small. Therefore, the number of training samples in the garbage detection scene is small.
  • A basic feature network, used to extract image features, can be included in the garbage detection model.
  • other labels similar to garbage can be introduced to train the basic feature network. For example, obstacles.
  • The training method of the garbage detection model may include: treating image samples marked with obstacle detection frame labels as image samples marked with garbage detection frame labels, and using them to train the basic feature network included in the garbage detection model.
  • Obstacle detection frames are easy to label, appear in more application scenarios, and are available in larger numbers; they can be used to increase the number of samples for training the basic feature network and improve its representation ability.
  • the obstacle detection frame label and the garbage detection frame label can be used together to train the basic feature network to improve the representation ability, and then the same basic feature network can be used for garbage detection and obstacle detection to reduce the loss of computing resources.
  • The output of the garbage detection model usually includes garbage detection frames, which mark the detected garbage in the target image and can each carry a confidence level, that is, the credibility that the marked image part contains garbage.
  • the garbage detection model can also be used to identify the type of the detected garbage, for example, can identify the type of garbage contained in the garbage detection frame, which can specifically include: paper scraps, leaves, plastic bags, boxes, etc. Specifically, the type of garbage may be further identified for the detected garbage detection frame.
  • the garbage detection frame output by the garbage detection model may also have the recognized garbage type.
  • the output garbage detection frame may be directly determined as the garbage detection result.
  • garbage detection frames with a confidence level lower than a preset confidence level may be deleted, or similar garbage detection frames may be considered as the same garbage detection frame.
  • the garbage detection model may output multiple similar garbage detection frames for the same garbage. In order to filter these repeated detection results, filtering may be performed according to the intersection ratio between different garbage detection frames.
  • The intersection ratio between different garbage detection frames may specifically be the ratio of the overlap area of two garbage detection frames to the area of their union (intersection over union, IoU).
  • Using the pre-trained garbage detection model to determine the garbage detection result may include: obtaining garbage detection frames output by the garbage detection model, each with a confidence level; traversing the garbage detection frames output by the model; if the intersection ratio between the currently traversed garbage detection frame and any other garbage detection frame is greater than a preset intersection ratio, deleting the other garbage detection frame; and, after the traversal, determining the remaining garbage detection frames as the garbage detection result.
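The traverse-and-delete filtering described above is essentially greedy non-maximum suppression. A minimal sketch follows; the (x0, y0, x1, y1) box format, the traversal in descending confidence order, and the 0.5 threshold are assumptions for illustration.

```python
def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def filter_boxes(boxes, scores, iou_thresh=0.5):
    """Traverse boxes from highest confidence; delete any other box
    whose IoU with the current box exceeds the preset threshold, then
    return the surviving boxes as the garbage detection result."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    removed, keep = set(), []
    for i in order:
        if i in removed:
            continue
        keep.append(i)
        for j in order:
            if j != i and j not in removed and iou(boxes[i], boxes[j]) > iou_thresh:
                removed.add(j)
    return [boxes[i] for i in keep]
```

Near-duplicate frames around the same piece of garbage are thereby collapsed into the single highest-confidence frame.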
  • Multiple target images taken at the same position of the area to be detected can be obtained, so that garbage detection can be performed several times and the garbage detection result determined comprehensively.
  • it may be a plurality of target images taken continuously at the same position for the region to be detected, or a plurality of target images taken periodically or irregularly at the same position for the region to be detected.
  • the garbage detection frames detected in different target images can be compared.
  • If a garbage detection frame is detected in multiple target images, it can be determined as a garbage detection result; if it is detected in only one target image and cannot be detected in the others, it may not be determined as a garbage detection result.
  • Specifically, if a garbage detection frame is detected in no fewer than a first preset number of target images, it can be determined as a garbage detection result; if it is detected in no more than a second preset number of target images, it may not be determined as a garbage detection result and may be deleted. The second preset number may be smaller than the first preset number.
  • unreliable or repeated garbage detection frames can be filtered, thereby improving the garbage detection accuracy of the garbage detection model.
  • The preset areas containing garbage can be determined directly from the target image.
  • If any detected garbage is within the position range of any preset area in the target image, it may be determined that the preset area contains garbage.
  • Since the actual garbage detection result may be a garbage detection frame, it is necessary to determine the garbage area based on the garbage detection frame.
  • the garbage detection frame may be entirely contained within the position range of any preset area, or may be located within the position range of different preset areas. Specifically, the garbage area may be determined according to the center point of the garbage detection frame or the degree of overlap with the preset area.
  • In the case where the garbage detection result is a garbage detection frame, determining a preset area where garbage exists as a garbage area may include: determining the preset area in which the center point of any garbage detection frame is located as a garbage area; or determining a preset area whose degree of coincidence with any garbage detection frame satisfies a preset coincidence condition as a garbage area.
  • the preset coincidence condition may include: the coincidence degree is greater than the preset coincidence degree, or the coincidence degree is the highest.
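The first rule above, assigning a detection frame to the preset area that contains its centre point, can be sketched as follows; the function name, the dict-based area representation, and the half-open containment test are illustrative assumptions.

```python
def box_to_area(box, areas):
    """Return the id of the preset area whose image-coordinate extent
    (x0, y0, x1, y1) contains the centre point of the garbage
    detection frame `box`, or None if the centre falls outside every
    preset area (e.g. outside the mapped region)."""
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    for area_id, (x0, y0, x1, y1) in areas.items():
        if x0 <= cx < x1 and y0 <= cy < y1:
            return area_id
    return None
```

Because non-overlapping preset areas partition the image, the half-open test assigns each centre point to exactly one area, reproducing the FIG. 3 behaviour where frame 1 lands in preset area 2 and frame 2 in preset area 3.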
  • FIG. 3 is a schematic diagram of the principle of a method for determining a garbage area provided by an embodiment of this specification.
  • It includes 4 preset regions mapped to the position range in the target image and 2 garbage detection frames, which are preset regions 1-4 and garbage detection frames 1-2 respectively.
  • the preset areas are marked with numbers 1-4.
  • the center point of the garbage detection frame 1 is located in the preset area 2, it can be determined that there is garbage in the preset area 2, and the preset area 2 is a garbage area.
  • The center point of garbage detection frame 2 is located in preset area 3, so it can be determined that there is garbage in preset area 3, and preset area 3 is a garbage area.
  • In this way, the preset areas where garbage exists can be determined and the detected garbage located efficiently, quickly and accurately, which facilitates follow-up cleaning route planning, saves computing resources and improves computing efficiency.
  • In order to improve the accuracy of determining the garbage area, the garbage area may be determined comprehensively according to the garbage detection results of multiple target images.
  • Specifically, acquiring a target image taken for the area to be detected may include: acquiring multiple target images taken for the area to be detected; or acquiring multiple target images taken for the area to be detected within a preset time period; or acquiring multiple target images taken continuously for the area to be detected.
  • the acquired multiple target images are not necessarily taken at the same position, and may respectively contain different parts of the region to be detected.
  • If any preset area is detected to contain garbage in only one target image but not in the others, the detection result may be wrong, or the position of the garbage may have changed; therefore, that preset area may not be determined as a garbage area.
  • Conversely, if garbage is detected in a preset area in multiple target images, which may have been taken of the area to be detected from different positions, garbage is very likely to exist in that preset area, and it can be determined as a garbage area.
  • Specifically, determining the preset area where garbage exists as the garbage area may include: if garbage is detected in any preset area in the garbage detection results determined for at least a preset number of target images, determining that preset area as a garbage area.
  • the accuracy of the garbage area can be improved by using the garbage detection results of multiple target images.
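A minimal sketch of this multi-image voting rule, assuming each target image's detection result has already been reduced to the set of preset areas where garbage was found; the function name and the default threshold of two images are illustrative:

```python
from collections import Counter

def vote_garbage_areas(per_image_results, min_images=2):
    """per_image_results: one set per target image, each holding the
    preset areas where garbage was detected in that image.
    A preset area is confirmed as a garbage area only if garbage was
    detected in it in at least min_images of the target images."""
    counts = Counter()
    for areas in per_image_results:
        counts.update(set(areas))
    return {area for area, n in counts.items() if n >= min_images}
```

An area seen in only a single image is discarded as a possible false detection or moved garbage, matching the reasoning above.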
  • The above-mentioned embodiments can be used to determine the positions of the preset areas in the target image, which facilitates locating the detected garbage and also makes it convenient for subsequent unmanned cleaning equipment to plan cleaning routes so that the garbage can be cleaned up.
  • the cleaning route can be further determined in the process of the above method.
  • cleaning may be performed on the detected garbage.
  • unmanned cleaning equipment can detect and locate garbage based on local computing resources, and move and clean after determining the cleaning route.
  • the process of the method may further include S105: determining a garbage area to be cleaned, and determining a cleaning route according to the garbage area to be cleaned.
  • the determined cleaning route may include a garbage area to be cleaned.
  • S105 may be performed after determining the garbage area, specifically, it may be performed after S104.
  • After the garbage area is determined, it usually needs to be cleaned by cleaning equipment, which requires planning a cleaning route; to plan the cleaning route, the garbage areas to be cleaned must first be determined.
  • the garbage area to be cleaned may specifically be a garbage area that needs to be cleaned later.
  • each determined garbage area may be determined as a garbage area to be cleaned.
  • the garbage area that meets the requirements may also be determined as the garbage area to be cleaned.
  • the garbage area containing a large amount of garbage may be determined as the garbage area to be cleaned, or the garbage area containing a specific type of garbage may be determined as the garbage area to be cleaned.
  • the determined garbage area needs to be cleaned as a whole.
  • After the garbage areas to be cleaned are determined, the cleaning route can be determined; specifically, a route passing through all the garbage areas to be cleaned may be determined.
  • For example, starting from any garbage area to be cleaned, an unvisited garbage area may be randomly selected from among the nearest remaining garbage areas to be cleaned; the equipment then moves to the selected garbage area and cleans it as a whole, repeating this until all garbage areas have been visited.
  • FIG. 4 is a schematic diagram of the principle of cleaning route planning provided by an embodiment of this specification. It includes preset areas 1-9, each marked by the number it contains, among which preset areas 1, 3, 4 and 9 are determined as garbage areas.
  • Starting from preset area 1, the nearest garbage area, preset area 4, can be determined and moved to; since preset area 4 is equidistant from preset areas 3 and 9, preset area 3 can be selected at random, and the equipment moves to preset area 3 and then to preset area 9.
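The greedy nearest-first strategy described above can be sketched as follows, assuming each preset area is identified by an id with a known center coordinate; straight-line distance is used, and ties between equidistant areas are broken at random as in the example. All names and the seed are illustrative:

```python
import math
import random

def plan_route(start, garbage_areas, centers, seed=0):
    """Greedy route: from the current area, repeatedly move to the nearest
    unvisited garbage area; ties are broken at random.
    centers maps an area id to its (x, y) center coordinates."""
    rng = random.Random(seed)
    route = [start]
    unvisited = set(garbage_areas) - {start}
    current = start
    while unvisited:
        dist = {a: math.dist(centers[current], centers[a]) for a in unvisited}
        best = min(dist.values())
        nearest = [a for a in unvisited if math.isclose(dist[a], best)]
        current = rng.choice(nearest)          # random tie-break, as in FIG. 4
        route.append(current)
        unvisited.remove(current)
    return route
```

On a 3x3 grid matching FIG. 4 (garbage areas 1, 3, 4, 9), the route starts 1 → 4 and then visits 3 and 9 in a tie-broken order.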
  • In this way, the cleaning route can be determined quickly and efficiently based on the garbage areas where the located garbage is situated, avoiding wasted computing resources; in particular, for unmanned cleaning equipment, this saves the equipment's local computing resources.
  • different cleaning methods may also be determined for different types of garbage.
  • the cleaning method can be determined according to the type of garbage identified by the garbage detection model.
  • Specifically, the above-mentioned method flow may also include: obtaining the correspondence between garbage types and cleaning methods; determining the types of garbage contained in each garbage area according to the determined garbage detection results, where the garbage detection model can also be used to detect garbage types; and determining the corresponding cleaning method according to the types of garbage contained in any garbage area.
  • one or more corresponding cleaning methods may be determined.
  • different cleaning methods may have priorities, so that the cleaning method with the highest priority may be selected; optionally, each determined cleaning method may be used to clean once to improve the cleaning effect.
  • By determining a corresponding cleaning method, subsequent cleaning is facilitated and the garbage can be cleaned more conveniently and thoroughly.
  • The cleaning method can be determined quickly and efficiently according to the types of garbage contained in each garbage area, so that when the unmanned cleaning equipment cleans any garbage area, it uses the corresponding cleaning method, improving cleaning efficiency.
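As a sketch of the lookup just described, a correspondence table can map garbage types to cleaning methods, and a priority order can pick one method per garbage area. The garbage types and method names below are purely hypothetical examples:

```python
def choose_cleaning_method(area_types, method_table, priority):
    """area_types: garbage types detected in one garbage area.
    method_table maps a garbage type to its applicable cleaning method(s);
    priority ranks methods, earlier entries having higher priority.
    Returns the highest-priority applicable method, or None."""
    candidates = set()
    for t in area_types:
        candidates.update(method_table.get(t, ()))
    for method in priority:          # highest-priority method wins
        if method in candidates:
            return method
    return None
```

Alternatively, as the text notes, each applicable method could be applied once in turn instead of choosing only the top-priority one.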
  • obstacles may be further determined.
  • For example, an unmanned sweeper needs to avoid obstacles while cleaning up garbage; therefore, for an area that needs to be cleaned, the unmanned sweeper must perform both garbage detection and obstacle detection.
  • Target detection may be used for obstacle detection, with an obstacle detection model determining obstacles in the target image; scanning with a lidar or similar sensor may also be used.
  • Multiple obstacle detection methods can be used, and their detection results combined to determine the preset areas where obstacles exist, that is, the obstacle areas, so that the unmanned cleaning equipment can avoid them.
  • the detection results of multiple obstacle detection methods may be used as the obstacle detection results to determine the obstacle area.
  • When a lidar is used to scan for obstacles, the preset areas where obstacles exist may be determined based on the point cloud data acquired by the lidar.
  • Specifically, ground plane points are obtained by a ground plane fitting method, and the remaining points are treated as obstacle points. The three-dimensional obstacle point cloud is then projected into the target image through a projection formula, and the number of obstacle points falling in each preset area is counted; if the count exceeds a threshold, the preset area is considered an obstacle area containing obstacles.
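The counting step can be sketched as below. This is a simplified stand-in, not the patent's exact procedure: a height threshold replaces real ground plane fitting, a basic pinhole model with assumed intrinsics (`fx`, `fy`, `cx`, `cy`) replaces the calibrated projection formula, and the coordinate convention (y forward, z up, camera near ground level) is an assumption:

```python
def count_obstacle_points(points, preset_areas, fx=500.0, fy=500.0,
                          cx=320.0, cy=240.0, ground_z=0.05):
    """points are (x, y, z) in a sensor frame with y forward and z up.
    Points at or below ground_z are treated as ground plane points (a
    stand-in for ground plane fitting); the rest are obstacle points,
    projected into the image with a pinhole model and counted per
    preset area (each given as an image rectangle x1, y1, x2, y2)."""
    counts = {name: 0 for name in preset_areas}
    for x, y, z in points:
        if z <= ground_z or y <= 0:      # ground point, or behind the sensor
            continue
        u = fx * x / y + cx              # pinhole projection, column
        v = fy * -z / y + cy             # row (image y grows downward)
        for name, (x1, y1, x2, y2) in preset_areas.items():
            if x1 <= u < x2 and y1 <= v < y2:
                counts[name] += 1
    return counts

def obstacle_areas(counts, threshold=3):
    """A preset area whose obstacle-point count exceeds the threshold is
    considered an obstacle area."""
    return {name for name, n in counts.items() if n > threshold}
```

In a real system the ground segmentation would come from plane fitting (e.g. RANSAC) and the projection from the calibrated lidar-to-camera extrinsics and camera intrinsics.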
  • obstacles can be determined by using a target detection method.
  • the obstacle detection results based on images and point cloud data can be fused to determine the combined obstacle detection results, so as to facilitate the determination of obstacle areas where obstacles exist.
  • Obstacle detection and garbage detection may be executed in parallel or sequentially; this embodiment does not limit their order.
  • the detected obstacle may be positioned according to the obstacle detection result, specifically, a preset area where the obstacle exists may be determined as the obstacle area.
  • the obstacle area may be a preset area where obstacles exist.
  • the determination of the obstacle area and the garbage area may be performed in parallel or sequentially, which is not limited in this embodiment.
  • Determining the garbage area locates the detected garbage, which is usually used to plan the cleaning route, while the obstacle areas usually need to be avoided; therefore, the cleaning route can be determined according to the determined garbage areas and obstacle areas.
  • The determined cleaning route can pass through the garbage areas that need to be cleaned without passing through any obstacle area.
  • For example, when the preset areas are parking spaces, the cleaning route can be planned from parking space 2 to parking space 4 and then to parking space 5, with each parking space cleaned as a whole.
  • There may be a preset area that is determined as both a garbage area and an obstacle area.
  • When planning the cleaning route, a garbage area that is also determined as an obstacle area may be disregarded.
  • determining the garbage area that needs to be cleaned may specifically include: determining the garbage area in the non-obstacle area as the garbage area that needs to be cleaned, so that the cleaning route can be planned only for the garbage area in the non-obstacle area.
  • a preset area that contains garbage and belongs to the non-obstacle area may also be determined as the garbage area according to the pre-determined obstacle area.
  • Specifically, the above method flow may further include S106: detecting obstacles in the area to be detected, and determining the preset areas where obstacles exist as obstacle areas.
  • In this case, determining the preset area where garbage is detected as the garbage area according to the determined garbage detection result may include: determining a non-obstacle area where garbage is detected as a garbage area according to the determined garbage detection result.
  • determining the garbage area to be cleaned may include: determining the garbage area in the non-obstacle area as the garbage area to be cleaned.
  • When determining the cleaning route, the garbage areas belonging to obstacle areas and the garbage areas in non-obstacle areas may both be identified, so that the cleaning route is determined according to the garbage areas in non-obstacle areas: the planned cleaning route includes the garbage areas in non-obstacle areas, excludes garbage areas belonging to obstacle areas, and does not pass through any obstacle area.
  • obstacle detection can be further combined to help plan cleaning routes and improve garbage cleaning efficiency.
  • Optionally, the image content contained in the obstacle areas of the target image can be deleted directly, and garbage detection performed on the trimmed target image, which saves computing resources.
  • obstacle detection may be performed by a target detection method. Therefore, detecting an obstacle in the region to be detected may include: using a pre-trained obstacle detection model to determine an obstacle detection result for the target image.
  • The image samples annotated with obstacle detection frame labels and the image samples annotated with garbage detection frame labels can jointly train the basic feature network shared by the obstacle detection model and the garbage detection model, yielding the same basic feature network; this improves the representation ability of the basic feature network, improves the training effect of both models, and saves computing resources.
  • To train the detection head of the obstacle detection model, only the image samples annotated with obstacle detection frame labels need to be used, with the parameters of the basic feature network fixed.
  • the basic feature networks included in the obstacle detection model and the garbage detection model are trained using the same training sample set, and the trained basic feature networks are the same.
  • the training sample set may include: image samples marked with obstacle detection frame labels, and image samples marked with garbage detection frame labels.
  • The garbage detection model in the embodiments of this specification can be trained on data of different garbage collected in different scenarios, and outputs the garbage detection frame and garbage category from its detection head.
  • The method of sharing the basic feature network with the obstacle detection model while independently training the detection head can be used.
  • FIG. 5 is a schematic diagram of a model structure provided by an embodiment of this specification.
  • the obstacle detection model and the garbage detection model share the basic feature network.
  • The feature extraction of the basic feature network then needs to be performed only once rather than separately for each model, thereby saving computing resources and improving computing speed and efficiency.
  • the basic feature network occupies the most computing resources in the deep learning model, and sharing this part of the calculation can greatly reduce the consumption of computing resources.
  • the shared basic features can also greatly reduce the need for separate training data for garbage detection, so as to achieve the target accuracy faster.
  • The specific way to train detection models sharing the basic feature network is to first train a single detection model on both the obstacle and garbage detection data to obtain the common basic feature network parameters.
  • both the obstacle detection frame label and the garbage detection frame label can be regarded as the target detection frame label, which is used to train a detection model and obtain the basic feature network parameters.
  • a garbage detection model and an obstacle detection model can be constructed respectively.
  • During training, the basic feature network parameters can be fixed; the detection head of the obstacle detection model is trained using the obstacle data, and the detection head of the garbage detection model is trained using the garbage detection data.
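The two-stage schedule (joint backbone training, then per-task head training with the backbone frozen) can be illustrated with a toy parameter store; in practice this would be done with a deep learning framework, and the toy gradient values below are arbitrary:

```python
class Backbone:
    """Toy stand-in for the shared basic feature network; in practice this
    would be a convolutional network trained with a deep learning framework."""
    def __init__(self):
        self.params = {"w": 0.0}
        self.frozen = False

    def step(self, grad):
        if not self.frozen:                # fixed parameters ignore updates
            self.params["w"] -= 0.1 * grad

class DetectionHead:
    def __init__(self, backbone):
        self.backbone = backbone           # shared by reference, not copied
        self.params = {"w": 0.0}

    def step(self, grad):
        self.params["w"] -= 0.1 * grad
        self.backbone.step(grad)           # a frozen backbone stays unchanged

# Stage 1: train one model on obstacle + garbage data, treating both label
# types as generic target detection frames, to fit the shared backbone.
backbone = Backbone()
for grad in [1.0, 0.5]:                    # gradients from the mixed dataset
    backbone.step(grad)

# Stage 2: freeze the backbone, then train each head on its own data.
backbone.frozen = True
garbage_head = DetectionHead(backbone)
obstacle_head = DetectionHead(backbone)
garbage_head.step(1.0)                     # garbage detection data only
obstacle_head.step(1.0)                    # obstacle detection data only
```

The key points the toy captures are that both heads reference the same backbone object and that stage-2 updates leave the backbone parameters untouched.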
  • a more accurate and suitable cleaning route can be further determined.
  • the process of the above method may further include: determining a garbage area that needs to be cleaned and an obstacle area that needs to be avoided, and determining a cleaning route based on the determined garbage area and obstacle area.
  • FIG. 6 is a schematic diagram of another cleaning route plan provided by an embodiment of this specification.
  • It includes preset areas 1-9, each marked by the number it contains.
  • Preset areas 1, 3, 4, and 9 are determined as garbage areas, and preset areas 5 and 6 are determined as obstacle areas.
  • Starting from preset area 1, the nearest garbage area, preset area 4, can be determined and moved to; since preset area 4 is equidistant from preset areas 3 and 9, preset area 3 can be selected at random. Because moving in a straight line from preset area 4 to preset area 3 would pass through preset area 5 (an obstacle area), the equipment can detour through preset areas 1 and 2.
  • When moving from preset area 3 to preset area 9, since preset areas 5 and 6 are obstacle areas, the equipment can detour outside preset area 6, thereby obtaining the cleaning route.
  • the unmanned cleaning equipment can take pictures of the area to be detected through its own camera device to obtain the target image.
  • the area to be detected may be a hall of a shopping mall.
  • the area to be detected is pre-divided into multiple preset areas, and each preset area is a square floor tile in the hall of the shopping mall.
  • Unmanned cleaning equipment can detect garbage and obstacles at the same time.
  • Specifically, the target image can be input into the basic feature network, and the image features output by the basic feature network can then be input into the detection head of the garbage detection model and the detection head of the obstacle detection model, which determine the garbage detection result and the obstacle detection result respectively.
  • obstacles may be pedestrians, goods, and the like.
  • the unmanned cleaning device can also determine the position of the preset area in the area to be detected in the target image, specifically, it can determine the position of the square floor tiles in the hall of the shopping mall in the target image.
  • In this way, the unmanned cleaning equipment can determine the square floor tiles containing garbage and the square floor tiles containing obstacles, and then plan a cleaning route that avoids the obstacles and cleans the floor tiles containing garbage.
  • In this example, the preset areas can be mapped into the target image, which facilitates garbage positioning and thus serves unmanned cleaning; the obtained garbage positioning information can be used by unmanned sweepers for efficient garbage cleaning path planning.
  • the garbage detection model can also share the basic feature network with the obstacle detection model, thereby reducing computing resource consumption, reducing training data requirements, and improving representation capabilities.
  • the embodiment of this specification also provides an apparatus embodiment.
  • FIG. 7 is a schematic structural diagram of a garbage detection device provided by an embodiment of this specification.
  • The garbage detection device may include the following units.
  • the obtaining unit 201 is configured to obtain a target image taken for the region to be detected.
  • the mapping unit 202 is configured to determine the position of the divided preset area in the target image.
  • the detection unit 203 is configured to determine a garbage detection result by using a pre-trained garbage detection model for the target image.
  • the positioning unit 204 is configured to determine a preset area where garbage exists as a garbage area according to the determined garbage detection result.
  • Optionally, the mapping unit 202 can be used to: determine the position, height and shooting angle of the camera when the target image was taken; determine the position of each preset area in the area to be detected; and determine the position of each preset area in the target image according to the camera's position, height and shooting angle and the position of each preset area.
  • the detection unit 203 may include: a preprocessing subunit 203a, configured to perform preprocessing on the target image.
  • the detection subunit 203b is configured to input the preprocessing result into the pre-trained garbage detection model, and determine the garbage detection result according to the output of the garbage detection model.
  • the preprocessing subunit 203a may be configured to: crop the target image, and retain a preset area in the target image.
  • the garbage detection model is at least used to perform garbage detection on one or more scaled copies of the target image; wherein, any scaled copy has an image resolution greater than a preset resolution, or a scale ratio greater than a preset scale ratio.
  • the loss function in the garbage detection model includes a focus loss; the focus loss is used to increase the weight of image samples marked with garbage detection frame labels.
  • the garbage detection model may include a basic feature network; the training method of the garbage detection model may include: determining an image sample marked with an obstacle detection frame label as an image sample marked with a garbage detection frame label, and training the garbage detection model Included base feature network.
  • Optionally, the output of the garbage detection model is garbage detection frames, and each output garbage detection frame has a confidence level; the detection unit 203 may include: a traversal subunit 203c, configured to traverse the garbage detection frames output by the garbage detection model.
  • The deletion subunit 203d is configured to delete another garbage detection frame when the intersection-over-union between the currently traversed garbage detection frame and that other frame is greater than a preset threshold; after the traversal, the remaining garbage detection frames are determined as the garbage detection result.
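The traversal-and-delete procedure of subunits 203c and 203d corresponds to standard non-maximum suppression. A minimal sketch follows, assuming frames are `(x1, y1, x2, y2)` tuples and that traversal proceeds in descending confidence order (a common convention, not stated explicitly above):

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Traverse detection frames in descending confidence order; delete any
    other frame whose IoU with the current frame exceeds the threshold.
    Returns the indices of the surviving frames."""
    def iou(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        h = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = w * h
        union = ((ax2 - ax1) * (ay2 - ay1)
                 + (bx2 - bx1) * (by2 - by1) - inter)
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)                   # current frame in the traversal
        keep.append(i)
        # delete every remaining frame that overlaps too much with it
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep
```

This leaves one high-confidence frame per cluster of overlapping detections, which is the "remaining garbage detection frames" result described above.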
  • Optionally, when the garbage detection result is a garbage detection frame, the positioning unit 204 can be used to: determine the preset area in which the center point of any garbage detection frame is located as a garbage area; or determine a preset area whose degree of overlap with any garbage detection frame satisfies the preset overlap condition as a garbage area.
  • the acquiring unit 201 can be used to: acquire multiple target images taken for the area to be detected; or acquire multiple target images taken for the area to be detected within a preset time period; or acquire continuous shooting of the area to be detected multiple target images.
  • the positioning unit 204 may be configured to: if any preset area contains garbage in the garbage detection results determined for the preset number of target images, determine the preset area as a garbage area.
  • the garbage detection device may further include: a cleaning route determining unit 205, configured to determine a garbage area to be cleaned, and determine a cleaning route according to the garbage area to be cleaned.
  • the garbage detection device may also include: a cleaning mode determination unit 206, configured to obtain the correspondence between garbage types and cleaning modes; according to the determined garbage detection results, determine the types of garbage contained in the garbage area; the garbage detection model It is also used to detect the type of garbage; according to the type of garbage contained in any garbage area, determine the corresponding cleaning method.
  • the garbage detection device may further include: an obstacle detection unit 207, configured to detect obstacles in the area to be detected, and determine a preset area where obstacles exist as an obstacle area.
  • the positioning unit 204 may be configured to: determine the non-obstacle area in which garbage is detected as the garbage area according to the determined garbage detection result.
  • the obstacle detection unit 207 may be configured to: use a pre-trained obstacle detection model to determine an obstacle detection result for the target image.
  • Optionally, the basic feature networks included in the obstacle detection model and the garbage detection model can be trained using the same training sample set, and the trained basic feature networks are the same; the training sample set can include image samples annotated with obstacle detection frame labels and image samples annotated with garbage detection frame labels.
  • An embodiment of this specification also provides a computer device, which may specifically be unmanned cleaning equipment. It includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the garbage detection method of any one of the above method embodiments.
  • FIG. 8 shows a schematic diagram of a more specific hardware structure of a computer device provided by the embodiment of this specification.
  • the device may include: a processor 1010 , a memory 1020 , an input/output interface 1030 , a communication interface 1040 and a bus 1050 .
  • the processor 1010 , the memory 1020 , the input/output interface 1030 and the communication interface 1040 are connected to each other within the device through the bus 1050 .
  • The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to realize the technical solutions provided by the embodiments of this specification.
  • the memory 1020 can be implemented in the form of ROM (Read Only Memory, read-only memory), RAM (Random Access Memory, random access memory), static storage device, dynamic storage device, etc.
  • the memory 1020 can store operating systems and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 1020 and invoked by the processor 1010 for execution.
  • the input/output interface 1030 is used to connect the input/output module to realize information input and output.
  • The input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, mouse, touch screen, microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 1040 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module can realize communication through wired means (such as USB, network cable, etc.), and can also realize communication through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • Bus 1050 includes a path that carries information between the various components of the device (eg, processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
  • Although the above device shows only the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation the device may also include other components.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of this specification, and does not necessarily include all the components shown in the figure.
  • the embodiment of this specification also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, a garbage detection method in any of the above method embodiments is implemented.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media; information storage can be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, A magnetic tape cartridge, disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.
  • A typical implementing device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, tablet computer, wearable device, or any combination of these devices.
  • each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • Since the device embodiment is basically similar to the method embodiment, its description is relatively simple; for relevant parts, refer to the description of the method embodiment.
  • The device embodiments described above are only illustrative; the modules described as separate components may or may not be physically separated, and the functions of each module may be implemented in one or more pieces of software and/or hardware. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a garbage detection method and apparatus. An area to be detected is divided in advance into at least two preset areas. The method comprises: acquiring a target image captured for the area to be detected; determining the positions, in the target image, of the preset areas obtained by dividing said area; for the target image, determining a garbage detection result using a pre-trained garbage detection model; and, according to the determined garbage detection result, determining a preset area in which garbage is present as a garbage area.
PCT/CN2022/070517 2021-09-30 2022-01-06 Détection d'ordures WO2023050637A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111163884.9 2021-09-30
CN202111163884.9A CN115240094A (zh) 2021-09-30 2021-09-30 一种垃圾检测方法和装置

Publications (1)

Publication Number Publication Date
WO2023050637A1 true WO2023050637A1 (fr) 2023-04-06

Family

ID=83666404

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/070517 WO2023050637A1 (fr) 2021-09-30 2022-01-06 Détection d'ordures

Country Status (2)

Country Link
CN (1) CN115240094A (fr)
WO (1) WO2023050637A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190347486A1 (en) * 2018-05-08 2019-11-14 Electronics And Telecommunications Research Institute Method and apparatus for detecting a garbage dumping action in real time on video surveillance system
CN111037554A (zh) * 2019-12-12 2020-04-21 杭州翼兔网络科技有限公司 Machine-learning-based garbage cleaning method, apparatus, device and medium
CN111458721A (zh) * 2020-03-31 2020-07-28 江苏集萃华科智能装备科技有限公司 Method, apparatus and system for recognizing and locating exposed garbage
CN111767822A (zh) * 2020-06-23 2020-10-13 浙江大华技术股份有限公司 Garbage detection method and related device and apparatus
CN111797829A (zh) * 2020-06-24 2020-10-20 浙江大华技术股份有限公司 License plate detection method and apparatus, electronic device and storage medium
CN113255588A (zh) * 2021-06-24 2021-08-13 杭州鸿泉物联网技术股份有限公司 Garbage sweeping method and apparatus for a garbage sweeper truck, electronic device and storage medium
CN113377111A (zh) * 2021-06-30 2021-09-10 杭州电子科技大学 Task scheduling system and method for an unmanned sweeper vehicle

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342895A (zh) * 2023-05-31 2023-06-27 浙江联运知慧科技有限公司 Method and system for improving recyclable-resource sorting efficiency based on AI processing
CN116342895B (zh) * 2023-05-31 2023-08-11 浙江联运知慧科技有限公司 Method and system for improving recyclable-resource sorting efficiency based on AI processing
CN117314704A (zh) * 2023-09-28 2023-12-29 光谷技术有限公司 Emergency event management method, electronic device and storage medium
CN117314704B (zh) * 2023-09-28 2024-04-19 光谷技术有限公司 Emergency event management method, electronic device and storage medium
CN117315541A (zh) * 2023-10-12 2023-12-29 浙江净禾智慧科技有限公司 Ground garbage recognition method and system

Also Published As

Publication number Publication date
CN115240094A (zh) 2022-10-25

Similar Documents

Publication Publication Date Title
WO2023050637A1 (fr) Garbage detection
Tan et al. Toronto-3D: A large-scale mobile LiDAR dataset for semantic segmentation of urban roadways
WO2020052530A1 (fr) Procédé et dispositif de traitement d'image et appareil associé
US11386672B2 (en) Need-sensitive image and location capture system and method
JP2019149150A (ja) 点群データを処理するための方法及び装置
CN111325788B (zh) 一种基于街景图片的建筑物高度确定方法
CN106871906B (zh) 一种盲人导航方法、装置及终端设备
WO2020093966A1 (fr) Procédé de production de données de positionnement, appareil et dispositif électronique
US20230249353A1 (en) Method for building a local point cloud map and Vision Robot
WO2020093939A1 (fr) Procédé de positionnement, dispositif et appareil électronique
WO2021253245A1 (fr) Procédé et dispositif d'identification de tendance au changement de voie d'un véhicule
US11295521B2 (en) Ground map generation
Józsa et al. Towards 4D virtual city reconstruction from Lidar point cloud sequences
WO2021017211A1 (fr) Procédé et dispositif de positionnement de véhicule utilisant la détection visuelle, et terminal monté sur un véhicule
JP2021103160A (ja) 自律走行車意味マップ確立システム及び確立方法
CN106096207A (zh) 一种基于多目视觉的旋翼无人机抗风评估方法及系统
CN106687931A (zh) 电子设备以及对监控目标进行监控的方法和装置
WO2022166606A1 (fr) Procédé et appareil de détection de cible
RU2638638C1 (ru) Способ и система автоматического построения трехмерных моделей городов
JP5577312B2 (ja) データ解析装置、データ解析方法、及びプログラム
CN106996785A (zh) 一种对导航数据进行更新的方法及装置
WO2020135325A1 (fr) Procédé, dispositif et système de positionnement de dispositif mobile, et dispositif mobile
CN111709354B (zh) 识别目标区域的方法、装置、电子设备和路侧设备
CN110660113A (zh) 特征地图的建立方法、装置、采集设备和存储介质
CN112880692A (zh) 地图数据标注方法及装置、存储介质

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE