WO2023019793A1 - Determination method, cleaning robot, and computer storage medium - Google Patents

Determination method, cleaning robot, and computer storage medium

Info

Publication number
WO2023019793A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
image frame
image
frame
optical flow
Prior art date
Application number
PCT/CN2021/133084
Other languages
English (en)
French (fr)
Inventor
赵大成
韩冲
张志鹏
Original Assignee
美智纵横科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 美智纵横科技有限责任公司
Publication of WO2023019793A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Definitions

  • The present application relates to the technical field of dirt-type determination for cleaning robots, and in particular to a determination method, a cleaning robot, and a computer storage medium.
  • A sweeping robot needs to adopt different cleaning modes when cleaning liquid or sauce-type dirt; it is therefore very necessary to recognize the dirt.
  • Embodiments of the present application are expected to provide a determination method, a cleaning robot, and a computer storage medium, so as to solve the technical problem of poor recognition accuracy when a cleaning robot performs dirt recognition in the related art.
  • A determination method, applied in a cleaning robot, comprising:
  • determining the dirt type of the dirty area.
  • a cleaning robot comprising:
  • an acquisition module configured to acquire adjacent image frames through the monocular camera of the cleaning robot;
  • a first determination module configured to, when it is determined that the cleaning robot is moving, determine the reflection image in the adjacent image frames according to the optical flow frame of the adjacent image frames, and delete the reflection image from one of the adjacent image frames to obtain a target image frame;
  • a second determination module configured to determine the dirty area in the target image frame;
  • a third determination module configured to determine the dirt type of the dirty area.
  • a cleaning robot comprising:
  • a computer storage medium stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the determination method described in one or more embodiments.
  • The adjacent image frames are acquired through the monocular camera of the cleaning robot; when it is determined that the cleaning robot is moving, the reflection image in the adjacent image frames is determined according to the optical flow frame of the adjacent image frames, the reflection image is deleted from one of the adjacent image frames to obtain the target image frame, the dirty area in the target image frame is determined, and the dirt type of the dirty area is determined. That is, in the embodiments of the present application, after the adjacent image frames are acquired, the reflection image can be determined through their optical flow frame, the frame remaining after the reflection image is deleted from one of the adjacent image frames is taken as the target image frame, and the dirty area and its dirt type are determined on that basis. Since the reflection image has been removed from the target image frame, the influence of reflections on dirt recognition is avoided, as is the influence of other interfering images in the image frame on recognition accuracy when a traditional dirt recognition algorithm is used, thereby improving the accuracy of dirt recognition.
  • FIG. 1 is a schematic flowchart of an optional determination method provided in the embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an example of an optional sweeping robot provided in the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an example of an optional determination of a dirty area provided by the embodiment of the present application.
  • FIG. 4 is a visual comparison diagram of an optional dirty area provided by the embodiment of the present application.
  • FIG. 5 is a schematic flowchart of an example of an optional determination of the type of contamination provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of an optional region with reflected colors of visible laser light provided by the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an optional cleaning robot provided in an embodiment of the present application.
  • Fig. 8 is a schematic structural diagram of another optional cleaning robot provided in the embodiment of the present application.
  • FIG. 1 is a schematic flow chart of an optional determination method provided by the embodiment of the present application.
  • the determination method may include:
  • Sweeping robots usually use traditional dirt recognition algorithms that transfer the image to the frequency domain to remove highly repetitive textures and then binarize it to extract dirt.
  • However, the frequency domain cannot handle all repetitive textures, and binarization cannot handle the sundries, reflections, and patterned floors found in real scenes, resulting in poor dirt-recognition accuracy.
  • the embodiment of the present application provides a determination method.
  • The adjacent image frames are acquired through the monocular camera of the cleaning robot. The adjacent image frames may be image frames whose color space is RGB (red, green, blue), or image frames whose color space is HSV (Hue, Saturation, Value); this is not specifically limited in the embodiments of the present application.
  • S101 may include:
  • The original image frame set is obtained by shooting with the monocular camera of the cleaning robot.
  • For each original image frame in the set, the variance and the mean of the frame are calculated, yielding a variance set and a mean set.
  • Original image frames whose variance is greater than the preset variance threshold, as well as those whose mean is smaller than the preset first mean threshold or greater than the preset second mean threshold, are too dark or overexposed; such frames are therefore deleted from the original image frame set.
  • Two adjacent frames are then selected from the remaining frames as the adjacent image frames used to determine the dirt type.
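The pre-filtering step above can be sketched as follows. The thresholds are illustrative assumptions, since the embodiment only names "preset" variance and mean thresholds without concrete values; a real implementation would tune them to the camera.

```python
import numpy as np

# Illustrative thresholds (assumptions; the embodiment does not
# disclose concrete values).
VAR_THRESHOLD = 5000.0
MEAN_LOW, MEAN_HIGH = 40.0, 220.0

def filter_frames(frames):
    """Drop original frames that look too dark or overexposed, judged
    by the per-frame variance and mean, keeping the rest as candidates
    for the adjacent image frames."""
    kept = []
    for frame in frames:
        mean, var = float(np.mean(frame)), float(np.var(frame))
        if var <= VAR_THRESHOLD and MEAN_LOW <= mean <= MEAN_HIGH:
            kept.append(frame)
    return kept
```

Two neighboring frames from the surviving list would then serve as the adjacent image frames.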
  • After the adjacent image frames are acquired, whether the cleaning robot is in motion can be determined from them, specifically through their optical flow frame: for example, if the Euclidean distance of the same object image in the optical flow frame of the adjacent image frames is greater than 0, the cleaning robot is in motion.
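A minimal motion check in this spirit, assuming point correspondences already produced by an optical-flow tracker (the function name and the small tolerance are assumptions):

```python
import numpy as np

def robot_in_motion(pts_prev, pts_next, eps=0.5):
    """Report motion when the same object images have moved between
    adjacent frames: compute the Euclidean displacement of each tracked
    point pair and compare a robust summary (the median) against a
    small tolerance instead of a strict > 0 test."""
    disp = np.linalg.norm(np.asarray(pts_next, dtype=float)
                          - np.asarray(pts_prev, dtype=float), axis=1)
    return bool(np.median(disp) > eps)
```

Using the median rather than any single point makes the decision tolerant of a few mistracked features.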
  • The reflection image in the adjacent image frames can then be determined according to the optical flow frame of the adjacent image frames, and deleted from one of the adjacent image frames to obtain the target image frame; the resulting target image frame is thus an image frame from which the reflection image has been removed, and using it to determine the suspected dirty area improves determination accuracy.
  • The reflection image in the adjacent image frames can be determined according to the Euclidean distance of object images in the optical flow frame, or by the included angle of the optical flow directions in the optical flow frame, or by the Euclidean distance and the included angle together; this is not specifically limited in the embodiments of the present application.
  • S102 may include:
  • An object image whose included angle is less than or equal to a preset angle threshold is determined as a reflection image.
  • The optical flow frame of the adjacent image frames is obtained; it includes the optical flow direction and velocity of each object image as well as the position change and pixel value of each object image. In this way the position and pixel value of each object image before and after motion are known, and the Euclidean distance of each object image can be calculated from the positions before and after motion; only when the Euclidean distance is less than or equal to the preset distance threshold does it indicate that the object image is a reflection image.
  • Alternatively, the included angle of the optical flow direction of each object image can be calculated, and only when the angle is less than or equal to the preset angle threshold is the object image determined to be a reflection image.
  • S102 may include:
  • An object image whose Euclidean distance is less than or equal to a preset distance threshold and whose included angle is less than or equal to a preset angle threshold is determined as a reflection image.
  • That is, the Euclidean distance of each object image and the included angle of its optical flow direction can both be calculated; when the Euclidean distance is less than or equal to the preset distance threshold and the included angle is less than or equal to the preset angle threshold, the object image is determined to be a reflection image.
  • The dirty area of the target image frame can be determined according to a dirt recognition algorithm, or by using the optical flow frame of the adjacent image frames; this is not specifically limited in the embodiments of the present application.
  • S103 may include:
  • the area is expanded within the preset pixel value deviation range to obtain the expanded area
  • the dirty area is determined.
  • The optical flow frame also includes the pixel value of each object image in the adjacent image frames. Here, the absolute value of the difference between the pixel values of each object image in the adjacent image frames can be calculated, the absolute values sorted in descending order, and the largest ones selected; the positions of the corresponding object images in the target image frame are determined as target positions.
  • The relatively larger absolute differences are selected because the corresponding object images are most likely to be points in the dirty area; therefore the positions of the selected object images in the target image frame are determined as the target positions.
  • Then, pixel values floating within the preset pixel value deviation range of the target position's pixel value are found; together with the target position they form the expanded area, and the dirty area is determined according to the expanded area.
  • the above method can also include:
  • When the target image frame is an image frame whose color space is RGB, it is first converted into an image frame whose color space is HSV, and the converted frame is used as the target image frame.
  • For example, suppose the pixel value at the target position in the target image frame is (100, 100, 100) and the preset deviation range is [90, 110]; then the area whose pixel values lie within [90, 110] can be determined as the expanded area, and the dirty area is determined from the expanded area.
  • In this way, the accuracy of the determined dirty area is higher, which is beneficial for determining the dirt type.
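The RGB-to-HSV conversion can be illustrated with the standard library. An OpenCV pipeline would use `cv2.cvtColor` instead; `colorsys` converts one pixel at a time, so this is purely didactic.

```python
import colorsys

def rgb_frame_to_hsv(frame):
    """Convert an RGB frame (nested lists of (r, g, b) bytes) into HSV
    triples with components in [0, 1], mirroring the conversion of the
    target image frame before region expansion."""
    return [[colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
             for (r, g, b) in row]
            for row in frame]
```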
  • Determining, as the target position, the position in the target image frame of the object image whose absolute difference falls within a preset range includes:
  • determining, as the target positions, the positions in the target image frame of the object images corresponding to the first N absolute difference values.
  • That is, after the absolute values of the differences are calculated, they are sorted in descending order and the first N are selected, where N is a positive integer; the positions in the target image frame of the object images corresponding to these N absolute differences are then determined as the target positions.
  • The object images corresponding to larger absolute differences are found because such object images are likely to belong to a dirty area; their positions in the target image frame are therefore determined and taken as the target positions, so that the determined target positions are, to a large extent, points in the dirty area of the target image frame, which is beneficial for determining the dirty area.
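Seed selection plus expansion can be sketched together. The expansion here is a global pixel-value mask rather than a connected grow, which is one plausible reading of the "preset pixel value deviation range" step; single-channel frames and the parameter values are assumptions.

```python
import numpy as np

def seed_and_expand(prev, curr, n_seeds=1, dev=10):
    """Take the N pixels with the largest absolute frame-to-frame
    difference as target positions (seeds), then mark every pixel whose
    value lies within +/- dev of a seed's value as the expanded area."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    order = np.argsort(diff.ravel())[::-1][:n_seeds]
    seeds = [tuple(int(v) for v in np.unravel_index(i, diff.shape))
             for i in order]
    mask = np.zeros(curr.shape, dtype=bool)
    vals = curr.astype(int)
    for r, c in seeds:
        v = int(curr[r, c])
        mask |= (vals >= v - dev) & (vals <= v + dev)
    return seeds, mask
```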
  • The expanded area can be directly determined as the dirty area, or the expanded area can first be processed according to preset rules to obtain the dirty area; this is not specifically limited in the embodiments of the present application.
  • determining the expanded area as a dirty area includes:
  • the area in the expanded area whose area is greater than or equal to the preset first area threshold is determined as a dirty area
  • the expanded area is determined as a dirty area.
  • The expanded area is screened: regions of the expanded area whose area is greater than or equal to the preset first area threshold are retained, while regions smaller than the preset first area threshold are discarded; the retained regions are determined as the dirty area.
  • The area size can also be used first to retain the regions of the expanded area whose area is greater than or equal to the preset first area threshold and delete the regions smaller than that threshold; on this basis, the target image frame is input into a support vector machine (SVM) classifier. If the classifier determines that dirt is indeed present in the target image frame, the expanded area is determined as the dirty area; if it determines that no dirt is present, there is no dirty area in the target image frame.
  • The target image frame can be input into the SVM classifier as-is, or the target image frame with the expanded region marked can be input, so that the SVM classifier determines whether a dirty area exists in the target image frame; this is not specifically limited in the embodiments of the present application.
  • Otsu binarization can also be used to determine the suspected dirty area.
  • The SVM classifier can then be used to determine whether a dirty area exists in the adjacent image frames; if so, the suspected dirty area is a dirty area.
  • S104 Determine the type of dirt in the dirty area.
  • The SVM classifier can be used to determine the dirt type of the dirty area, or the dirt type can be determined in the following way.
  • When the cleaning robot also includes a visible laser arranged under the monocular camera, with the overlapping area of the visible laser's irradiation area and the monocular camera's shooting area greater than the preset second area threshold, S104 may include:
  • the dirt type of the dirty area is liquid or sauce.
  • A visible laser is provided in the area below the monocular camera of the cleaning robot so that the overlapping area of the laser's irradiation area and the camera's shooting area is greater than the preset second area threshold. After the visible laser is turned on, the monocular camera can capture an image frame under laser irradiation, i.e., the current image frame. When the current image frame contains an area with the reflected color of the visible laser, for example a green area, the dirt type is liquid or sauce; in this way a liquid or sauce dirty area can be determined.
  • After the liquid or sauce dirty area is determined, the cleaning mode can be switched to the liquid/sauce cleaning mode so that the cleaning robot can clean up the dirty area.
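The laser-color check can be sketched as follows. The hue band for "green" and the pixel-fraction threshold are assumptions (OpenCV-style hue in [0, 180) is assumed), since the text only says the frame should contain an area with the laser's reflected color.

```python
import numpy as np

def is_liquid_or_sauce(hsv_frame, min_fraction=0.001):
    """After the visible laser is switched on, look for enough pixels
    in the captured HSV frame whose color matches the laser's green
    reflection; if found, classify the dirt as liquid or sauce."""
    h = hsv_frame[..., 0].astype(int)
    s = hsv_frame[..., 1].astype(int)
    v = hsv_frame[..., 2].astype(int)
    green = (h >= 40) & (h <= 80) & (s > 80) & (v > 80)
    return bool(green.mean() > min_fraction)
```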
  • FIG. 2 is a schematic structural diagram of an example of an optional sweeping robot provided in the embodiment of the present application.
  • The sweeping robot is equipped with a visible laser under the monocular camera; the laser and the camera are mounted facing the same direction, directly toward the front of the robot's forward direction, so that the fields of view of the laser and the monocular camera overlap.
  • Fig. 3 is a schematic flowchart of an optional example of determining a dirty area provided by the embodiment of the present application. As shown in Fig. 3, the method for determining a dirty area may include:
  • The original image frames are acquired through the monocular camera; frames that do not satisfy the variance and mean thresholds are deleted by calculating the variance and mean of each original image frame, and the adjacent image frames are then obtained from the remaining frames to complete the scene analysis.
  • S302 optical flow tracking; when the cleaning robot moves, execute S303; otherwise, execute S305;
  • optical flow tracking can be performed based on adjacent image frames to determine whether the cleaning robot is in motion.
  • The optical flow information in the optical flow frame includes the optical flow velocity and optical flow direction of each object image, the position information of each object image, and the pixel value of each object image.
  • The reflection image in the adjacent image frames can be determined through their optical flow frame, for example through the Euclidean distance of the same object image in the optical flow frame and the included angle between the optical flow directions.
  • the reflection image is deleted from one of the adjacent image frames to obtain the target image frame.
  • The image frame acquired by the monocular camera has the RGB color space; however, an image frame in the HSV color space is more suitable for determining the dirty area, so the target image frame is converted from RGB to HSV. On the basis of the converted target image frame, the positions corresponding to the object images with the larger absolute pixel-value differences are taken as target positions; these target positions are the optical flow points with large differences in the HSV color space.
  • After the optical flow points with large differences are obtained, pixel values within the preset pixel value deviation range are searched for in the converted target image frame; the matching optical flow points are selected, and the area formed by the selected optical flow points together with the large-difference optical flow points is determined as the suspected dirty area.
  • S306 Screen the largest connected area to determine the dirty area; execute S309;
  • After the suspected dirty areas are determined, some of them are larger while others consist of only a few sporadic noise points; to improve accuracy, the suspected dirty area with the largest connected domain is selected as the dirty area.
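The "largest connected area" screening can be sketched with a plain BFS labelling; a production system would use a vision library's connected-components routine, and 4-connectivity is an assumption here.

```python
from collections import deque

def largest_component(mask):
    """Return the largest 4-connected region of a truthy 2-D mask as a
    set of (row, col) coordinates, mirroring the 'screen the largest
    connected area' step."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = set()
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j] or seen[i][j]:
                continue
            comp, queue = set(), deque([(i, j)])
            seen[i][j] = True
            while queue:
                r, c = queue.popleft()
                comp.add((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and mask[nr][nc] and not seen[nr][nc]):
                        seen[nr][nc] = True
                        queue.append((nr, nc))
            if len(comp) > len(best):
                best = comp
    return best
```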
  • S308: Otsu binarization to obtain the dirty area; execute S309;
  • The mean of the acquired image frames can first be determined from a histogram; the frames are then binarized with Otsu's method, and the dirty area is determined from the binarized image frame.
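For this static branch, Otsu's method itself is simple enough to sketch directly from the histogram: pick the threshold that maximizes the between-class variance, then binarize with it (e.g. `gray > t`).

```python
import numpy as np

def otsu_threshold(gray):
    """Minimal Otsu's method for an 8-bit grayscale image: scan all
    candidate thresholds and return the one maximizing the
    between-class variance of the two resulting pixel classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```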
  • S309 SVM classifier.
  • the SVM classifier is used to verify the authenticity of the dirty area.
  • Figure 4 is a visual comparison diagram of an optional dirty area provided by the embodiment of the present application.
  • Fig. 5 is a schematic flowchart of an example of an optional determination of the type of contamination provided by the embodiment of the present application. As shown in Fig. 5, the method for determining the type of contamination may include:
  • S502 start the visible laser, and acquire image frames
  • The visible laser can be used to determine the dirt type; therefore, the image frame is acquired after the visible laser is started, so that the obtained image frames are captured under visible-laser irradiation.
  • the visible laser mask image is extracted.
  • When the mask image includes the reflected-color area of the visible laser, the dirt type in the image frame is liquid or sauce; otherwise, the area is a non-liquid, non-sauce dirty area.
  • FIG. 6 is a schematic diagram of an optional area with the reflected color of the visible laser provided by the embodiment of the present application. As shown in FIG. 6, when the visible laser hits the dirty area, green light is reflected; through color analysis the dirty edge area is extracted, and the dirt type of the dirty area is determined accordingly.
  • Adding a visible laser to the monocular camera of the sweeping robot requires no cumbersome calibration, is inexpensive, and its computing cost depends only on the hardware platform. The method can perform specific analysis for dynamic, static, and exposure scenes, eliminate ground reflections using optical flow information, stably extract liquids and sauces, and filter out floor patterns.
  • In summary, the adjacent image frames are acquired through the monocular camera of the cleaning robot; when it is determined that the cleaning robot is moving, the reflection image in the adjacent image frames is determined according to their optical flow frame and deleted from one of the adjacent image frames to obtain the target image frame; the dirty area in the target image frame is determined, and the dirt type of the dirty area is determined. That is, in the embodiments of the present application, after the adjacent image frames are acquired, the reflection image can be determined through their optical flow frame, the frame remaining after the reflection image is deleted is taken as the target image frame, and the dirty area and its dirt type are determined on that basis.
  • FIG. 7 is a schematic structural diagram of an optional cleaning robot provided in the embodiment of the present application.
  • the cleaning robot may include:
  • the acquisition module 71 is configured to acquire adjacent image frames through the monocular camera of the cleaning robot;
  • the first determination module 72 is configured to, when it is determined that the cleaning robot is moving, determine the reflection image in the adjacent image frames according to the optical flow frame of the adjacent image frames, and delete the reflection image from one of the adjacent image frames to obtain the target image frame;
  • the second determination module 73 is configured to determine the dirty area in the target image frame
  • the third determination module 74 is configured to determine the type of contamination of the dirty area.
  • the acquisition module 71 is specifically configured as:
  • the original image frame collection is obtained by shooting with the monocular camera of the cleaning robot;
  • the adjacent image frames are selected from the deleted original image frame set.
  • the first determination module 72 is specifically configured as:
  • an object image whose included angle is less than or equal to a preset angle threshold is determined as a reflection image.
  • the first determination module 72 is specifically configured as:
  • an object image whose Euclidean distance is less than or equal to a preset distance threshold and whose included angle is less than or equal to a preset angle threshold is determined as a reflection image.
  • the second determination module 73 is specifically configured as:
  • the area is expanded within the preset pixel value deviation range to obtain the expanded area
  • the dirty area is determined.
  • the cleaning robot is also configured as:
  • the target image frame is an image frame whose color space is RGB
  • The second determination module 73 selecting the differences in descending order of absolute value and determining the positions of the selected object images in the target image frame as the target positions includes:
  • the position of the object image corresponding to the absolute value of the first N difference values in the target image frame is determined as the target position.
  • the second determination module 73 determines the dirty area according to the expanded area, including:
  • the expanded area is determined as a dirty area.
  • the cleaning robot further includes a visible laser.
  • The visible laser is arranged under the monocular camera, and the overlapping area between the irradiation area of the visible laser and the shooting area of the monocular camera is greater than the preset second area threshold; correspondingly, the third determination module 74 is specifically configured to:
  • the dirt type of the dirty area is liquid or sauce.
  • The acquisition module 71, the first determination module 72, the second determination module 73, and the third determination module 74 can be implemented by a processor located on the cleaning robot, specifically a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
  • FIG. 8 is a schematic structural diagram of another optional cleaning robot provided in the embodiment of the present application. As shown in FIG. 8 , the embodiment of the present application provides a Cleaning robot 800, including:
  • a processor 81 and a storage medium 82 storing instructions executable by the processor 81; the storage medium 82 depends on the processor 81 to perform operations through a communication bus 83, and when the instructions are executed by the processor 81, the determination method described in one or more of the above embodiments is performed.
  • the communication bus 83 is used to realize connection and communication between these components.
  • the communication bus 83 also includes a power bus, a control bus and a status signal bus.
  • For clarity of illustration, however, the various buses are all labeled as the communication bus 83 in FIG. 8.
  • The embodiments of the present application also provide a computer-readable storage medium storing one or more programs; the one or more programs can be executed by one or more processors to perform the determination method provided by the foregoing embodiments.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the instruction means realizing the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.
  • According to the determination method, the cleaning robot, and the computer storage medium provided in the embodiments of the present application, after the adjacent image frames are acquired, the reflection image can be determined through the optical flow frame of the adjacent image frames, and the image frame remaining after the reflection image is deleted from one of the adjacent image frames is determined as the target image frame.
  • The dirty area is determined based on the target image frame.
  • The dirt type of the dirty area is determined. In this way, since the reflection image has been removed from the target image frame, the influence of reflections on dirt recognition is avoided, as is the influence of other interfering images in the image frame on recognition accuracy when a traditional dirt recognition algorithm is used, thereby improving the accuracy of dirt recognition.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A determination method, applied to a cleaning robot, comprising: acquiring adjacent image frames through a monocular camera of the cleaning robot (S101); when it is determined that the cleaning robot is moving, determining a reflection image in the adjacent image frames according to an optical-flow frame of the adjacent image frames, and deleting the reflection image from one of the adjacent image frames to obtain a target image frame (S102); determining a dirty area in the target image frame (S103); and determining a dirt type of the dirty area (S104). A cleaning robot and a computer storage medium are also disclosed.

Description

Determination method, cleaning robot and computer storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on, and claims priority to, Chinese patent application No. 202110961393.2 filed on August 20, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the technical field of dirt-type determination for cleaning robots, and in particular to a determination method, a cleaning robot and a computer storage medium.
BACKGROUND
At present, a sweeping robot needs to adopt different cleaning modes when cleaning liquid or sauce-like dirt; it is therefore necessary to recognize the dirt.
However, traditional dirt-recognition algorithms are affected by repetitive textures, clutter, reflections and patterned floors, leading to inaccurate recognition results. In addition, binocular-camera or Time-of-Flight (TOF) dirt-recognition algorithms carry one more dimension of information than a monocular camera, which increases the complexity of data processing and affects the accuracy of the recognition result. It can thus be seen that existing cleaning robots suffer from poor recognition accuracy when recognizing dirt.
SUMMARY
Embodiments of the present application aim to provide a determination method, a cleaning robot and a computer storage medium, to solve the technical problem in the related art that cleaning robots have poor recognition accuracy when recognizing dirt.
The technical solution of the present application is implemented as follows:
A determination method, applied to a cleaning robot, comprising:
acquiring adjacent image frames through a monocular camera of the cleaning robot;
when it is determined that the cleaning robot is moving, determining a reflection image in the adjacent image frames according to an optical-flow frame of the adjacent image frames, and deleting the reflection image from one of the adjacent image frames to obtain a target image frame;
determining a dirty area of the target image frame;
determining a dirt type of the dirty area.
A cleaning robot, comprising:
an acquisition module configured to acquire adjacent image frames through a monocular camera of the cleaning robot;
a first determination module configured to, when it is determined that the cleaning robot is moving, determine a reflection image in the adjacent image frames according to an optical-flow frame of the adjacent image frames, and delete the reflection image from one of the adjacent image frames to obtain a target image frame;
a second determination module configured to determine a dirty area in the target image frame;
a third determination module configured to determine a dirt type of the dirty area.
A cleaning robot, comprising:
a processor and a storage medium storing instructions executable by the processor, the storage medium depending on the processor to perform operations through a communication bus; when the instructions are executed by the processor, the determination method described in one or more of the above embodiments is performed.
A computer storage medium storing executable instructions which, when executed by one or more processors, cause the processors to perform the determination method described in one or more of the embodiments.
With the determination method, cleaning robot and computer storage medium provided in the embodiments of the present application, adjacent image frames are acquired through the monocular camera of the cleaning robot; when it is determined that the cleaning robot is moving, a reflection image in the adjacent image frames is determined according to the optical-flow frame of the adjacent image frames, and the reflection image is deleted from one of the adjacent image frames to obtain a target image frame; a dirty area in the target image frame is determined, and the dirt type of the dirty area is determined. That is, in the embodiments of the present application, after the adjacent image frames are acquired, the reflection image can be determined from their optical-flow frame; the image frame obtained by deleting the reflection image from one of the adjacent image frames is taken as the target image frame; the dirty area is determined based on the target image frame, and the dirt type of the dirty area is determined. Since the reflection image has been removed from the target image frame, the influence of reflections on dirt recognition is avoided, as is the influence, under traditional dirt-recognition algorithms, of other interfering images in the frame on recognition accuracy, thereby improving the accuracy of dirt recognition.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of an optional determination method provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an example of an optional sweeping robot provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of an example of optionally determining a dirty area provided by an embodiment of the present application;
FIG. 4 is a visual comparison of an optional dirty area provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of an example of optionally determining a dirt type provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an optional area having the reflection colour of visible laser light provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an optional cleaning robot provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another optional cleaning robot provided by an embodiment of the present application.
DETAILED DESCRIPTION
To better understand the purpose, structure and functions of the present application, the determination method and cleaning robot of the present application are described in further detail below with reference to the accompanying drawings.
An embodiment of the present application provides a determination method applied to a cleaning robot. FIG. 1 is a schematic flowchart of an optional determination method provided by an embodiment of the present application. As shown in FIG. 1, the determination method may include:
S101: acquiring adjacent image frames through a monocular camera of the cleaning robot;
At present, sweeping robots usually adopt a traditional dirt-recognition algorithm that transforms the image into the frequency domain to remove highly repetitive textures and then extracts the dirt by binarization. However, because the viewing angle of the robot's camera is low, the frequency domain cannot handle repetitive textures, and binarization cannot cope with the clutter, reflections, patterned floors and the like found in real scenes, resulting in poor dirt-recognition accuracy.
To improve the accuracy of dirt recognition by the cleaning robot, an embodiment of the present application provides a determination method. First, adjacent image frames are acquired through the monocular camera of the cleaning robot, where the adjacent image frames may be image frames in the Red Green Blue (RGB) colour space or in the Hue Saturation Value (HSV) colour space; this is not specifically limited in the embodiments of the present application.
To avoid the influence of over-dark or over-exposed image frames on the accuracy of dirt recognition, the captured image frames need to be screened. In an optional embodiment, S101 may include:
capturing an original image frame set through the monocular camera of the cleaning robot;
calculating the variance of each original image frame and the mean of each original image frame in the original image frame set, to obtain a variance set and a mean set;
deleting, from the original image frame set, the original image frames corresponding to variances in the variance set greater than a preset variance threshold, and deleting the original image frames corresponding to means in the mean set less than a preset first mean threshold and greater than a preset second mean threshold, to obtain a deleted original image frame set;
selecting the adjacent image frames from the deleted original image frame set.
Specifically, an original image frame set is obtained by shooting with the monocular camera of the cleaning robot; for each original image frame in the set, its variance and its mean are calculated, yielding a variance set and a mean set.
Since original image frames whose variance is greater than the preset variance threshold, and original image frames whose mean is less than the preset first mean threshold and greater than the preset second mean threshold, are all over-dark or over-exposed frames, the original image frames whose variance exceeds the preset variance threshold are deleted from the original image frame set, as are the original image frames whose mean is less than the preset first mean threshold and greater than the preset second mean threshold, to obtain the deleted original image frame set.
Finally, two adjacent frames are selected from the deleted original image frame set as the adjacent image frames used to determine the dirt type.
In this way, no over-dark or over-exposed frames exist among the acquired adjacent image frames, avoiding their influence on dirty-area recognition and thereby improving the accuracy of dirt recognition.
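The screening step described above can be sketched as follows; this is a minimal NumPy illustration, and the threshold values, function names, and the keep-in-range interpretation of the mean test are assumptions for illustration, not values taken from the patent:

```python
import numpy as np

def filter_frames(frames, var_thresh=5000.0, mean_low=40.0, mean_high=220.0):
    """Keep only frames that are neither too dark nor over-exposed.

    Following the text, a frame is dropped when its pixel variance exceeds
    var_thresh, or when its mean falls outside [mean_low, mean_high].
    All three thresholds are illustrative placeholders.
    """
    kept = []
    for f in frames:
        v = float(np.var(f))
        m = float(np.mean(f))
        if v > var_thresh:          # variance above threshold: delete
            continue
        if m < mean_low or m > mean_high:   # too dark or over-exposed: delete
            continue
        kept.append(f)
    return kept

def pick_adjacent_pair(frames):
    """Return the first pair of consecutive frames from the filtered set."""
    kept = filter_frames(frames)
    if len(kept) < 2:
        return None
    return kept[0], kept[1]
```

In use, the robot would feed the raw capture sequence through `filter_frames` and then take any two consecutive survivors as the adjacent image frames.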
S102: when it is determined that the cleaning robot is moving, determining a reflection image in the adjacent image frames according to the optical-flow frame of the adjacent image frames, and deleting the reflection image from one of the adjacent image frames to obtain a target image frame;
After the adjacent image frames are acquired, whether the cleaning robot is moving can be determined from them, specifically from their optical-flow frame. For example, if the Euclidean distance of the same object image in the optical-flow frame is greater than 0, the cleaning robot is moving; likewise, if the angle between the optical-flow directions of the same object image is greater than 0, the cleaning robot is moving.
When it is determined that the cleaning robot is moving, the reflection image in the adjacent image frames can be determined according to their optical-flow frame and deleted from one of the adjacent image frames to obtain the target image frame. The resulting target image frame thus has the reflection image removed, and using it to determine the suspected dirty area improves the accuracy of the determination.
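The motion test above can be sketched as a check on tracked point displacements; a minimal illustration, where the tolerance `eps` and the function name are assumptions rather than values from the patent:

```python
import numpy as np

def robot_is_moving(points_prev, points_next, eps=1e-6):
    """Decide whether the robot moved between two adjacent frames.

    points_prev and points_next are (N, 2) arrays of matched feature
    positions taken from the optical-flow frame. If any tracked point
    moved by more than eps pixels, the camera (and hence the robot) is
    treated as moving; eps is an illustrative tolerance for noise.
    """
    p0 = np.asarray(points_prev, dtype=float)
    p1 = np.asarray(points_next, dtype=float)
    d = np.linalg.norm(p1 - p0, axis=1)   # per-point Euclidean displacement
    return bool(np.any(d > eps))
```

In practice the matched positions would come from an optical-flow tracker run on the adjacent frames.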
Further, to determine the reflection image in the adjacent image frames, the determination may be based on the Euclidean distance of the object images in the optical-flow frame, on the angle between optical-flow directions in the optical-flow frame, or on both the Euclidean distance and the angle; this is not specifically limited in the embodiments of the present application.
For determining the reflection image from the Euclidean distance of the object images in the optical-flow frame, or from the angle between optical-flow directions, in an optional embodiment, S102 may include:
calculating the Euclidean distance of each object image in the optical-flow frame;
determining the image of an object whose Euclidean distance is less than or equal to a preset distance threshold as the reflection image;
or, calculating the angle between the optical-flow directions of each object image in the optical-flow frame;
determining the image of an object whose angle is less than or equal to a preset angle threshold as the reflection image.
Specifically, the optical-flow frame of the adjacent image frames is obtained by an optical-flow method; it includes each object image's optical-flow direction and speed, together with each object image's position change and pixel value. The position information and pixel value of each object image before and after the motion are therefore known, and the Euclidean distance of each object image is computed from its positions before and after the motion; only when this Euclidean distance is less than or equal to the preset distance threshold is the object image a reflection image.
Alternatively, from the optical-flow directions of each object image before and after the motion, the angle between its optical-flow directions can be computed; only when this angle is less than or equal to the preset angle threshold is the object image determined to be a reflection image.
In addition, for determining the reflection image from both the Euclidean distance of the object images and the angle between optical-flow directions in the optical-flow frame, in an optional embodiment, S102 may include:
calculating the Euclidean distance of each object image in the optical-flow frame, and calculating the angle between the optical-flow directions of each object image in the optical-flow frame;
determining the image of an object whose Euclidean distance is less than or equal to the preset distance threshold and whose angle is less than or equal to the preset angle threshold as the reflection image.
Specifically, the Euclidean distance of each object image and the angle between its optical-flow directions can be computed from the optical-flow frame of the adjacent image frames; when the Euclidean distance is less than or equal to the preset distance threshold and the angle is less than or equal to the preset angle threshold, the object image is determined to be a reflection image.
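The combined criterion can be sketched as follows; a minimal NumPy illustration in which the two thresholds, the function name, and the exact encoding of flow directions are assumptions, not values or interfaces from the patent:

```python
import numpy as np

def find_reflection_indices(p_prev, p_next, flow_prev, flow_next,
                            dist_thresh=1.0, angle_thresh=0.1):
    """Flag tracked points whose optical flow behaves like a floor reflection.

    A point is flagged when its Euclidean displacement between the adjacent
    frames is <= dist_thresh AND the angle (radians) between its flow
    direction vectors in the two frames is <= angle_thresh. Both thresholds
    are illustrative placeholders.
    """
    p0 = np.asarray(p_prev, dtype=float)
    p1 = np.asarray(p_next, dtype=float)
    f0 = np.asarray(flow_prev, dtype=float)
    f1 = np.asarray(flow_next, dtype=float)

    dist = np.linalg.norm(p1 - p0, axis=1)          # per-point displacement
    # angle between the two flow-direction vectors for each point
    dot = np.sum(f0 * f1, axis=1)
    norm = np.linalg.norm(f0, axis=1) * np.linalg.norm(f1, axis=1) + 1e-12
    ang = np.arccos(np.clip(dot / norm, -1.0, 1.0))

    return np.where((dist <= dist_thresh) & (ang <= angle_thresh))[0]
```

Pixels belonging to the flagged points would then be removed from one of the adjacent frames to form the target image frame.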
S103: determining a dirty area of the target image frame;
Specifically, after the target image frame is determined, the dirty area needs to be determined from it. The dirty area of the target image frame may be determined by a dirt-recognition algorithm, or by using the optical-flow frame of the adjacent image frames; this is not specifically limited in the embodiments of the present application.
For determining the dirty area of the target image frame from the optical-flow frame, in an optional embodiment, S103 may include:
obtaining the absolute value of the difference between the pixel values of each object image in the optical-flow frame;
selecting absolute values in descending order of the absolute values of the differences, and determining the positions, in the target image frame, of the object images with the selected absolute values as target positions;
expanding a region within a preset pixel-value deviation range according to the target image frame, taking the pixel values of the target positions as the reference, to obtain an expanded region;
determining the dirty area according to the expanded region.
Specifically, since the optical-flow frame also includes the pixel value of each object image in the adjacent image frames, the absolute value of the difference between each object image's pixel values in the adjacent image frames can be computed; the absolute values are selected in descending order, and the positions, in the target image frame, of the object images with the selected absolute values are determined as target positions.
That is, the relatively large absolute differences are selected, because object images with relatively large values are very likely points in the dirty area; the positions, in the target image frame, of the object images with the selected absolute values are therefore determined as target positions.
Finally, according to the target image frame and taking the pixel values of the target positions as the reference, the pixel values that fall within the preset pixel-value deviation range are looked up and, together with the pixel values of the target positions, form the expanded region, from which the dirty area is determined.
Further, to make the determination of the dirty area of the target image frame more accurate, in an optional embodiment, before obtaining the absolute value of the difference between the pixel values of each object image in the optical-flow frame, the method may further include:
when the target image frame is an image frame in the RGB colour space, converting the target image frame into an image frame in the HSV colour space, to obtain the target image frame anew.
Specifically, to achieve higher accuracy of the determined dirty area, when the target image frame is in the RGB colour space it needs to be converted into the HSV colour space to obtain the target image frame anew. Then, according to the target image frame and taking the pixel value of the target position as the reference, position points with matching pixel values are looked up within the preset deviation range. For example, if the pixel value of the re-obtained target image frame in the HSV colour space is (100, 100, 100) and the preset deviation range is [90, 110], the position area with pixel values between [90, 110] can be determined as the expanded region, and the dirty area is then determined from the expanded region.
Through the colour-space conversion of the target image frame and the expansion within the preset pixel-value deviation range, the determined dirty area is more accurate, which is conducive to determining the dirt type.
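The deviation-range expansion with the (100, 100, 100) / [90, 110] example can be sketched as a global threshold around the seed pixel; a minimal illustration, where the function name and the global (rather than locally grown) interpretation of "expansion" are assumptions:

```python
import numpy as np

def expand_region(hsv_frame, seed_yx, deviation=10):
    """Expand a region around a seed pixel by a pixel-value deviation range.

    All pixels whose HSV channels each lie within +/- deviation of the seed
    pixel's value are included, matching the [90, 110] range given in the
    text for a seed value of (100, 100, 100). Whether the expansion is local
    or global is not stated in the patent; this sketch thresholds globally.
    """
    frame = np.asarray(hsv_frame)
    seed = frame[seed_yx].astype(int)
    low = seed - deviation
    high = seed + deviation
    # boolean mask of pixels within the deviation range on every channel
    return np.all((frame >= low) & (frame <= high), axis=-1)
```

The resulting mask would play the role of the expanded region from which the dirty area is determined.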
Further, to determine suitable target positions, in an optional embodiment, determining the positions, in the target image frame, of the object images whose absolute differences fall within the preset range as target positions includes:
sorting the absolute values of the differences in descending order, and selecting the first N absolute values from the sorting result;
determining the positions, in the target image frame, of the object images corresponding to the first N absolute values as the target positions.
Specifically, after the absolute values of the differences are computed, they are sorted in descending order and the first N absolute values are selected, where N is a positive integer; the positions, in the target image frame, of the object images corresponding to these N absolute values are then determined as the target positions. In this way, the object images corresponding to the larger absolute values are found; since such an object image is very likely a point in the dirty area, its position in the target image frame is determined and taken as a target position. The determined target positions are then, to a large extent, points of the dirty area in the target image frame, which is conducive to determining the dirty area of the target image frame.
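The top-N selection is a simple descending sort; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def top_n_positions(diff_abs, n):
    """Return the indices of the N largest absolute pixel differences.

    diff_abs holds the per-point |pixel difference| taken from the
    optical-flow frame; the returned indices identify candidate dirt
    points (the target positions).
    """
    diff_abs = np.asarray(diff_abs, dtype=float)
    order = np.argsort(diff_abs)[::-1]   # descending order of magnitude
    return order[:n]
```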
Further, to determine the dirty area, after the expanded region is determined, the expanded region may be determined directly as the dirty area, or may be processed according to a preset rule to obtain the dirty area; this is not specifically limited in the embodiments of the present application.
For processing the expanded region according to the preset rule to obtain the dirty area, in an optional embodiment, determining the expanded region as the dirty area includes:
determining, as the dirty area, a region of the expanded region whose area is greater than or equal to a preset first area threshold;
or, when the area of the expanded region is greater than or equal to the preset first area threshold and an SVM classifier determines the expanded region to be dirty, determining the expanded region as the dirty area.
Specifically, the expanded regions are screened: regions of the expanded region whose area is greater than or equal to the preset first area threshold are retained, while regions whose area is smaller than the preset first area threshold are deleted; in this way, the regions of the expanded region whose area is greater than or equal to the preset first area threshold can be determined as the dirty area.
Alternatively, the expanded regions may first be screened by area as above, retaining those whose area is greater than or equal to the preset first area threshold and deleting the rest; on that basis, the target image frame is input into a Support Vector Machine (SVM) classifier. When the classifier determines that the target image frame indeed contains dirt, the expanded region is determined as the dirty area; when the target image frame is input into the SVM classifier and it determines that the frame contains no dirt, there is no dirty area in the target image frame.
It should be noted here that, in the embodiments of the present application, either the target image frame itself or the target image frame with the expanded region may be input into the SVM classifier, so that the classifier determines whether a dirty area exists in the target image frame; this is not specifically limited in the embodiments of the present application.
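The area-plus-classifier screening can be sketched as follows; a minimal illustration in which the region encoding and the pluggable classifier callable (standing in for the SVM verdict) are assumptions for illustration:

```python
def confirm_dirty_regions(regions, min_area, classifier=None):
    """Filter expanded regions by area and, optionally, a classifier verdict.

    regions is a list of (area, region_id) pairs; classifier, when given,
    is any callable returning True if the frame really contains dirt (the
    patent uses an SVM classifier here). Both the region encoding and the
    callable interface are assumptions made for this sketch.
    """
    large = [r for r in regions if r[0] >= min_area]   # area screening
    if classifier is not None and not classifier():
        return []   # classifier says the frame contains no dirt at all
    return large
```

A trained SVM's predict call on the target frame would take the place of the `classifier` callable in a real system.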
It should also be noted that, when it is determined that the cleaning robot is not moving, a suspected dirty area can be determined using Otsu binarization, and the SVM classifier is used to determine whether a dirty area exists in the adjacent image frames; if one exists, the suspected dirty area is the dirty area.
S104: determining the dirt type of the dirty area.
After the dirty area is determined, since the cleaning robot needs to adopt different cleaning modes for different types of dirty area, the dirt type needs to be determined. To determine the type of the dirty area, an SVM classifier may be used, or the dirt type may be determined in the following way. In an optional embodiment, the cleaning robot further includes a visible laser arranged below the monocular camera, the overlap between the illumination area of the visible laser and the shooting area of the monocular camera being larger than a preset second area threshold, and S104 may include:
acquiring a current image frame through the monocular camera after the visible laser is turned on;
when the current image frame contains an area having the reflection colour of the visible laser, determining the dirt type of the dirty area to be liquid or sauce.
That is, a visible laser is arranged in the area below the monocular camera of the cleaning robot, such that the overlap between the laser's illumination area and the camera's shooting area is larger than the preset second area threshold. After the visible laser is turned on, the monocular camera can capture an image frame illuminated by the visible laser, i.e. the current image frame. When the current image frame contains an area with the reflection colour of the visible laser, for example a green area, the dirt type is liquid or sauce; the liquid or sauce dirty area can thus be determined, and when the cleaning robot moves to the dirty area it can switch its cleaning mode to the mode for cleaning liquid or sauce, so that the dirty area is cleaned thoroughly.
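The reflection-colour test can be sketched as a colour mask over the laser-lit frame; a minimal illustration, where the green hue window, saturation/value floors, and function names are assumptions (the patent only says the frame contains a region with the laser's reflection colour):

```python
import numpy as np

def laser_reflection_mask(hsv_frame, hue_low=45, hue_high=75,
                          sat_min=60, val_min=60):
    """Detect pixels showing the visible laser's green reflection colour.

    The hue window assumes OpenCV-style HSV with H in [0, 180); all four
    bounds are illustrative placeholders, not values from the patent.
    """
    frame = np.asarray(hsv_frame)
    h, s, v = frame[..., 0], frame[..., 1], frame[..., 2]
    return (h >= hue_low) & (h <= hue_high) & (s >= sat_min) & (v >= val_min)

def dirt_is_liquid_or_sauce(hsv_frame, min_pixels=1):
    """Classify the dirt as liquid/sauce when a reflection region exists."""
    return int(laser_reflection_mask(hsv_frame).sum()) >= min_pixels
```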
Examples are given below to illustrate the determination method described in one or more of the above embodiments.
FIG. 2 is a schematic structural diagram of an example of an optional sweeping robot provided by an embodiment of the present application. As shown in FIG. 2, a visible laser is mounted below the monocular camera of the sweeping robot, in the same mounting direction as the camera; both are mounted facing straight ahead in the robot's direction of travel, with the laser's field of view coinciding with the camera's.
FIG. 3 is a schematic flowchart of an example of optionally determining a dirty area provided by an embodiment of the present application. As shown in FIG. 3, the method for determining a dirty area may include:
S301: scene analysis;
Specifically, original image frames are acquired through the monocular camera; by computing the variance and mean of each original image frame, frames that do not meet the variance and mean thresholds are deleted, and adjacent image frames are then taken from the remaining frames to complete the scene analysis.
S302: optical-flow tracking; when the cleaning robot is moving, perform S303; otherwise, perform S307;
Specifically, optical-flow tracking can be performed on the adjacent image frames to determine whether the cleaning robot is moving. First the optical-flow frame of the adjacent image frames is determined; the optical-flow information in it includes each object image's optical-flow speed and direction, together with each object image's position information and pixel value. When the optical-flow information in the optical-flow frame changes, the sweeping robot is moving and S303 is performed; when the optical-flow information does not change, the sweeping robot is stationary and S307 is performed.
S303: handling reflections; perform S304;
Specifically, when the sweeping robot is determined to be moving, the reflection image in the adjacent image frames can be determined from their optical-flow frame, for example from the Euclidean distance and the optical-flow direction angle of the same object image in the optical-flow frame.
Then the reflection image is deleted from one of the adjacent image frames to obtain the target image frame.
S304: extracting optical-flow points with large differences in HSV space; perform S305;
Generally, the image frames acquired by a monocular camera are in the RGB colour space; frames in the HSV colour space, however, are better suited to determining the dirty area. The target image frame in the RGB colour space is therefore converted into one in the HSV colour space, and on the basis of the converted target image frame, the positions of the object images with relatively large absolute pixel-value differences are looked up as target positions; these target positions are the optical-flow points with large differences in the HSV colour space.
S305: obtaining the dirty area from the optical-flow points; perform S306;
After the large-difference optical-flow points are obtained, on the basis of the converted target image frame and taking those points as the reference, pixel values within the preset pixel-value deviation range are looked up; the matching optical-flow points are selected, and the region formed by the selected points together with the large-difference points is determined as the suspected dirty area.
S306: screening the largest connected region to determine the dirty area; perform S309;
After the suspected dirty areas are determined, some of them are large while others consist of just a few scattered stray points; to improve the accuracy of the dirty area, the region with the largest connected component among the suspected dirty areas is selected as the dirty area.
S307: histogram equalization; perform S308;
S308: Otsu binarization to obtain the dirty area; perform S309;
When the sweeping robot is stationary, the mean of the acquired image frame may first be determined in histogram form; after the mean is determined, the acquired image frame is binarized using Otsu's method to obtain a binarized image frame, from which the dirty area is determined.
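Otsu's method, used here for the static case, picks the threshold that maximises between-class variance of the grayscale histogram. A minimal self-contained sketch (a from-scratch implementation, not code from the patent):

```python
import numpy as np

def otsu_threshold(gray):
    """Compute Otsu's threshold for an 8-bit grayscale frame.

    Returns the intensity threshold that maximises the between-class
    variance; pixels above it can then be taken as the suspected dirty
    region in the static case.
    """
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(),
                       minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                 # weight of the background class
        if w0 == 0:
            continue
        w1 = total - w0               # weight of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                # background mean
        m1 = (sum_all - sum0) / w1    # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Binarizing with `gray > otsu_threshold(gray)` then yields the mask from which the dirty area is read off.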
S309: SVM classifier.
Finally, after the dirty area is determined, an SVM classifier is used to verify the authenticity of the dirty area.
FIG. 4 is a visual comparison of an optional dirty area provided by an embodiment of the present application. As shown in FIG. 4, from left to right are the optical-flow rendering of the dirty area, the mask of the dirty area obtained by Otsu binarization, and the determined image frame including the dirty area.
FIG. 5 is a schematic flowchart of an example of optionally determining a dirt type provided by an embodiment of the present application. As shown in FIG. 5, the method for determining a dirt type may include:
S501: obtaining the suspected dirty area;
S502: starting the visible laser and acquiring an image frame;
When the suspected dirty area of the image frame acquired by the monocular camera is determined, in order to determine the dirt type of the dirty area, the dirt type can be determined by means of the visible laser. The image frame is therefore acquired after the visible laser is started, so that the acquired frame is shot under visible-laser illumination.
S503: extracting the visible-laser mask;
S504: obtaining the area with the reflection colour of the visible laser;
S505: determining the dirt type.
After the image frame is acquired, the visible-laser mask is extracted; when the mask includes an area with the reflection colour of the visible laser, the dirt type in the image frame is liquid or sauce; otherwise, the dirty area is neither liquid nor sauce.
FIG. 6 is a schematic diagram of an optional area having the reflection colour of visible laser light provided by an embodiment of the present application. As shown in FIG. 6, when the visible laser strikes the dirty area, green light is reflected; based on colour analysis, the dirt edge area is extracted and the dirt type of the dirty area is determined.
Through the above examples, with a monocular camera plus a visible laser on the sweeping robot, no tedious calibration is needed; the cost is low and little computing power is demanded of the hardware platform. Specific analysis can be performed for moving/static scenes and exposure scenes, floor reflections can be removed based on optical-flow information, liquid and sauce dirt can be stably extracted, and floor patterns can be filtered out.
With the determination method provided in the embodiments of the present application, adjacent image frames are acquired through the monocular camera of the cleaning robot; when it is determined that the cleaning robot is moving, a reflection image in the adjacent image frames is determined according to the optical-flow frame of the adjacent image frames, and the reflection image is deleted from one of the adjacent image frames to obtain a target image frame; a dirty area in the target image frame is determined, and the dirt type of the dirty area is determined. That is, in the embodiments of the present application, after the adjacent image frames are acquired, the reflection image can be determined from their optical-flow frame; the image frame obtained by deleting the reflection image from one of the adjacent image frames is taken as the target image frame; the dirty area is determined based on the target image frame, and the dirt type of the dirty area is determined. Since the reflection image has been removed from the target image frame, the influence of reflections on dirt recognition is avoided, as is the influence, under traditional dirt-recognition algorithms, of other interfering images in the frame on recognition accuracy, thereby improving the accuracy of dirt recognition.
Embodiment 2
Based on the same inventive concept, an embodiment of the present application provides a cleaning robot. FIG. 7 is a schematic structural diagram of an optional cleaning robot provided by an embodiment of the present application. As shown in FIG. 7, the cleaning robot may include:
an acquisition module 71 configured to acquire adjacent image frames through a monocular camera of the cleaning robot;
a first determination module 72 configured to, when it is determined that the cleaning robot is moving, determine a reflection image in the adjacent image frames according to the optical-flow frame of the adjacent image frames, and delete the reflection image from one of the adjacent image frames to obtain a target image frame;
a second determination module 73 configured to determine a dirty area in the target image frame;
a third determination module 74 configured to determine the dirt type of the dirty area.
In other embodiments of the present application, the acquisition module 71 is specifically configured to:
capture an original image frame set through the monocular camera of the cleaning robot;
calculate the variance of each original image frame and the mean of each original image frame in the original image frame set, to obtain a variance set and a mean set;
delete, from the original image frame set, the original image frames corresponding to variances in the variance set greater than a preset variance threshold, and delete the original image frames corresponding to means in the mean set less than a preset first mean threshold and greater than a preset second mean threshold, to obtain a deleted original image frame set;
select the adjacent image frames from the deleted original image frame set.
In other embodiments of the present application, the first determination module 72 is specifically configured to:
calculate the Euclidean distance of each object image in the optical-flow frame;
determine the image of an object whose Euclidean distance is less than or equal to a preset distance threshold as the reflection image;
or, calculate the angle between the optical-flow directions of each object image in the optical-flow frame;
determine the image of an object whose angle is less than or equal to a preset angle threshold as the reflection image.
In other embodiments of the present application, the first determination module 72 is specifically configured to:
calculate the Euclidean distance of each object image in the optical-flow frame, and calculate the angle between the optical-flow directions of each object image in the optical-flow frame;
determine the image of an object whose Euclidean distance is less than or equal to the preset distance threshold and whose angle is less than or equal to the preset angle threshold as the reflection image.
In other embodiments of the present application, the second determination module 73 is specifically configured to:
obtain the absolute value of the difference between the pixel values of each object image in the optical-flow frame;
select absolute values in descending order of the absolute values of the differences, and determine the positions, in the target image frame, of the object images with the selected absolute values as target positions;
expand a region within a preset pixel-value deviation range according to the target image frame, taking the pixel values of the target positions as the reference, to obtain an expanded region;
determine the dirty area according to the expanded region.
In other embodiments of the present application, the cleaning robot is further configured to:
before obtaining the absolute value of the difference between the pixel values of each object image in the optical-flow frame, when the target image frame is an image frame in the RGB colour space, convert the target image frame into an image frame in the HSV colour space, to obtain the target image frame anew.
In other embodiments of the present application, the second determination module 73, in selecting the differences in descending order of the absolute values of the differences and determining the positions, in the target image frame, of the object images with the selected differences as the target positions, includes:
sorting the absolute values of the differences in descending order, and selecting the first N absolute values from the sorting result, where N is a positive integer;
determining the positions, in the target image frame, of the object images corresponding to the first N absolute values as the target positions.
In other embodiments of the present application, the second determination module 73, in determining the dirty area according to the expanded region, includes:
determining, as the dirty area, a region of the expanded region whose area is greater than or equal to a preset first area threshold;
or, when the area of the expanded region is greater than or equal to the preset first area threshold and an SVM classifier determines the expanded region to be dirty, determining the expanded region as the dirty area.
In other embodiments of the present application, the cleaning robot further includes a visible laser arranged below the monocular camera, the overlap between the illumination area of the visible laser and the shooting area of the monocular camera being larger than a preset second area threshold; correspondingly, the third determination module 74 is specifically configured to:
acquire a current image frame through the monocular camera after the visible laser is turned on;
when the current image frame contains an area having the reflection colour of the visible laser, determine the dirt type of the dirty area to be liquid or sauce.
In practical applications, the above acquisition module 71, first determination module 72, second determination module 73 and third determination module 74 may be implemented by a processor located on the cleaning robot, specifically a Central Processing Unit (CPU), Microprocessor Unit (MPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA) or the like.
Based on the foregoing embodiments, an embodiment of the present application provides a cleaning robot. FIG. 8 is a schematic structural diagram of another optional cleaning robot provided by an embodiment of the present application. As shown in FIG. 8, this embodiment of the present application provides a cleaning robot 800, comprising:
a processor 81 and a storage medium 82 storing instructions executable by the processor 81, the storage medium 82 depending on the processor 81 to perform operations through a communication bus 83; when the instructions are executed by the processor 81, the determination method described in one or more of the above embodiments is performed.
It should be noted that, in practical applications, the components in the terminal are coupled together through the communication bus 83. It can be understood that the communication bus 83 is used to implement connection and communication between these components. Besides a data bus, the communication bus 83 also includes a power bus, a control bus and a status signal bus. For the sake of clarity, however, the various buses are all labelled as the communication bus 83 in FIG. 8.
Based on the foregoing embodiments, an embodiment of the present application provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the determination method provided by the embodiments of the present application.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above are only preferred embodiments of the present application and are not intended to limit the protection scope of the present application.
INDUSTRIAL APPLICABILITY
With the determination method, cleaning robot and computer storage medium provided in the embodiments of the present application, after adjacent image frames are acquired, the reflection image can be determined from the optical-flow frame of the adjacent image frames; the image frame obtained by deleting the reflection image from one of the adjacent image frames is taken as the target image frame; the dirty area is determined based on the target image frame, and the dirt type of the dirty area is determined. Since the reflection image has been removed from the target image frame, the influence of reflections on dirt recognition is avoided, as is the influence, under traditional dirt-recognition algorithms, of other interfering images in the frame on recognition accuracy, thereby improving the accuracy of dirt recognition.

Claims (12)

  1. A determination method, wherein the method is applied to a cleaning robot and comprises:
    acquiring adjacent image frames through a monocular camera of the cleaning robot;
    when it is determined that the cleaning robot is moving, determining a reflection image in the adjacent image frames according to an optical-flow frame of the adjacent image frames, and deleting the reflection image from one of the adjacent image frames to obtain a target image frame;
    determining a dirty area of the target image frame;
    determining a dirt type of the dirty area.
  2. The method according to claim 1, wherein acquiring adjacent image frames through the monocular camera of the cleaning robot comprises:
    capturing an original image frame set through the monocular camera of the cleaning robot;
    calculating a variance of each original image frame and a mean of each original image frame in the original image frame set, to obtain a variance set and a mean set;
    deleting, from the original image frame set, the original image frames corresponding to variances in the variance set greater than a preset variance threshold, and deleting the original image frames corresponding to means in the mean set less than a preset first mean threshold and greater than a preset second mean threshold, to obtain a deleted original image frame set;
    selecting the adjacent image frames from the deleted original image frame set.
  3. The method according to claim 1 or 2, wherein determining the reflection image in the adjacent image frames according to the optical-flow frame of the adjacent image frames, and deleting the reflection image from one of the adjacent image frames to obtain the target image frame, comprises:
    calculating a Euclidean distance of each object image in the optical-flow frame;
    determining the image of an object whose Euclidean distance is less than or equal to a preset distance threshold as the reflection image;
    or, calculating an angle between optical-flow directions of each object image in the optical-flow frame;
    determining the image of an object whose angle is less than or equal to a preset angle threshold as the reflection image.
  4. The method according to claim 1 or 2, wherein determining the reflection image in the adjacent image frames according to the optical-flow frame of the adjacent image frames comprises:
    calculating a Euclidean distance of each object image in the optical-flow frame, and calculating an angle between optical-flow directions of each object image in the optical-flow frame;
    determining the image of an object whose Euclidean distance is less than or equal to a preset distance threshold and whose angle is less than or equal to a preset angle threshold as the reflection image.
  5. The method according to claim 1, wherein determining the dirty area of the target image frame comprises:
    obtaining an absolute value of a difference between pixel values of each object image in the optical-flow frame;
    selecting absolute values in descending order of the absolute values of the differences, and determining positions, in the target image frame, of the object images with the selected absolute values as the target positions;
    expanding a region within a preset pixel-value deviation range according to the target image frame, taking the pixel values of the target positions as a reference, to obtain an expanded region;
    determining the dirty area according to the expanded region.
  6. The method according to claim 5, wherein before obtaining the absolute value of the difference between the pixel values of each object image in the optical-flow frame, the method further comprises:
    when the target image frame is an image frame in an RGB colour space, converting the target image frame into an image frame in an HSV colour space, to obtain the target image frame anew.
  7. The method according to claim 5, wherein selecting differences in descending order of the absolute values of the differences, and determining the positions, in the target image frame, of the object images with the selected differences as the target positions, comprises:
    sorting the absolute values of the differences in descending order, and selecting the first N absolute values from the sorting result, where N is a positive integer;
    determining the positions, in the target image frame, of the object images corresponding to the first N absolute values as the target positions.
  8. The method according to claim 5, wherein determining the dirty area according to the expanded region comprises:
    determining, as the dirty area, a region of the expanded region whose area is greater than or equal to a preset first area threshold;
    or, when the area of the expanded region is greater than or equal to the preset first area threshold and an SVM classifier determines the expanded region to be dirty, determining the expanded region as the dirty area.
  9. The method according to claim 1, wherein the cleaning robot further comprises a visible laser arranged below the monocular camera, an overlap between an illumination area of the visible laser and a shooting area of the monocular camera being larger than a preset second area threshold; correspondingly, determining the dirt type of the dirty area comprises:
    acquiring a current image frame through the monocular camera after the visible laser is turned on;
    when the current image frame contains an area having the reflection colour of the visible laser, determining the dirt type of the dirty area to be liquid or sauce.
  10. A cleaning robot, comprising:
    an acquisition module configured to acquire adjacent image frames through a monocular camera of the cleaning robot;
    a first determination module configured to, when it is determined that the cleaning robot is moving, determine a reflection image in the adjacent image frames according to an optical-flow frame of the adjacent image frames, and delete the reflection image from one of the adjacent image frames to obtain a target image frame;
    a second determination module configured to determine a dirty area in the target image frame;
    a third determination module configured to determine a dirt type of the dirty area.
  11. A cleaning robot, comprising:
    a processor and a storage medium storing instructions executable by the processor, the storage medium depending on the processor to perform operations through a communication bus; when the instructions are executed by the processor, the determination method according to any one of claims 1 to 9 is performed.
  12. A computer storage medium storing executable instructions which, when executed by one or more processors, cause the processors to perform the determination method according to any one of claims 1 to 9.
PCT/CN2021/133084 2021-08-20 2021-11-25 一种确定方法、清洁机器人和计算机存储介质 WO2023019793A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110961393.2A CN113628202B (zh) 2021-08-20 2021-08-20 一种确定方法、清洁机器人和计算机存储介质
CN202110961393.2 2021-08-20

Publications (1)

Publication Number Publication Date
WO2023019793A1 true WO2023019793A1 (zh) 2023-02-23

Family

ID=78386928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/133084 WO2023019793A1 (zh) 2021-08-20 2021-11-25 一种确定方法、清洁机器人和计算机存储介质

Country Status (2)

Country Link
CN (1) CN113628202B (zh)
WO (1) WO2023019793A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628202B (zh) * 2021-08-20 2024-03-19 美智纵横科技有限责任公司 一种确定方法、清洁机器人和计算机存储介质
CN114468843B (zh) * 2022-02-28 2023-09-08 烟台艾睿光电科技有限公司 清洁设备、系统及其清洁控制方法、装置和介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050192707A1 (en) * 2004-02-27 2005-09-01 Samsung Electronics Co., Ltd. Dust detection method and apparatus for cleaning robot
US20190325585A1 (en) * 2018-04-23 2019-10-24 Denso Ten Limited Movement information estimation device, abnormality detection device, and abnormality detection method
CN209678390U (zh) * 2018-12-07 2019-11-26 江苏美的清洁电器股份有限公司 一种用于扫地机的运动状态监测装置及扫地机
CN111008571A (zh) * 2019-11-15 2020-04-14 万翼科技有限公司 室内垃圾处理方法及相关产品
CN111444768A (zh) * 2020-02-25 2020-07-24 华中科技大学 一种用于反光地面场景的微小障碍物发现方法
CN111487958A (zh) * 2019-01-28 2020-08-04 北京奇虎科技有限公司 一种扫地机器人的控制方法和装置
CN113628202A (zh) * 2021-08-20 2021-11-09 美智纵横科技有限责任公司 一种确定方法、清洁机器人和计算机存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018022343A (ja) * 2016-08-03 2018-02-08 株式会社東芝 画像処理装置、および画像処理方法
CN108154098A (zh) * 2017-12-20 2018-06-12 歌尔股份有限公司 一种机器人的目标识别方法、装置和机器人
CN111493742A (zh) * 2019-01-30 2020-08-07 江苏美的清洁电器股份有限公司 清洁机器人、控制方法和存储介质
CN110288538A (zh) * 2019-05-23 2019-09-27 南京理工大学 一种多特征融合的运动目标阴影检测及消除方法
CN111402373B (zh) * 2020-03-13 2024-03-01 网易(杭州)网络有限公司 一种图像处理方法、装置、电子设备及存储介质
CN112434659B (zh) * 2020-12-07 2023-09-05 深圳市优必选科技股份有限公司 反光特征点剔除方法、装置、机器人和可读存储介质
CN112734720B (zh) * 2021-01-08 2024-03-05 沈阳工业大学 基于视觉识别的船壳激光清洗在位检测方法及系统
CN113160075A (zh) * 2021-03-30 2021-07-23 武汉数字化设计与制造创新中心有限公司 Apriltag视觉定位的加工方法、系统、爬壁机器人及存储介质
CN113194253B (zh) * 2021-04-28 2023-04-28 维沃移动通信有限公司 去除图像反光的拍摄方法、装置和电子设备

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050192707A1 (en) * 2004-02-27 2005-09-01 Samsung Electronics Co., Ltd. Dust detection method and apparatus for cleaning robot
US20190325585A1 (en) * 2018-04-23 2019-10-24 Denso Ten Limited Movement information estimation device, abnormality detection device, and abnormality detection method
CN209678390U (zh) * 2018-12-07 2019-11-26 江苏美的清洁电器股份有限公司 一种用于扫地机的运动状态监测装置及扫地机
CN111487958A (zh) * 2019-01-28 2020-08-04 北京奇虎科技有限公司 一种扫地机器人的控制方法和装置
CN111008571A (zh) * 2019-11-15 2020-04-14 万翼科技有限公司 室内垃圾处理方法及相关产品
CN111444768A (zh) * 2020-02-25 2020-07-24 华中科技大学 一种用于反光地面场景的微小障碍物发现方法
CN113628202A (zh) * 2021-08-20 2021-11-09 美智纵横科技有限责任公司 一种确定方法、清洁机器人和计算机存储介质

Also Published As

Publication number Publication date
CN113628202A (zh) 2021-11-09
CN113628202B (zh) 2024-03-19

Similar Documents

Publication Publication Date Title
CN110378945B (zh) 深度图处理方法、装置和电子设备
JP6125188B2 (ja) 映像処理方法及び装置
CN109997351B (zh) 用于生成高动态范围图像的方法和装置
JP3083918B2 (ja) 画像処理装置
WO2023019793A1 (zh) 一种确定方法、清洁机器人和计算机存储介质
CN107784669A (zh) 一种光斑提取及其质心确定的方法
US20170261319A1 (en) Building height calculation method, device, and storage medium
JP5975598B2 (ja) 画像処理装置、画像処理方法及びプログラム
WO2011078264A1 (ja) 画像処理装置、画像処理方法、コンピュータプログラム及び移動体
US20080075385A1 (en) Detection and Correction of Flash Artifacts from Airborne Particulates
US10026004B2 (en) Shadow detection and removal in license plate images
KR20130030220A (ko) 고속 장애물 검출
CN108377374B (zh) 用于产生与图像相关的深度信息的方法和系统
CN108154491B (zh) 一种图像反光消除方法
JP2012038318A (ja) ターゲット検出方法及び装置
JP4674179B2 (ja) 影認識方法及び影境界抽出方法
GB2557035A (en) IR or thermal image enhancement method based on background information for video analysis
CN111369570B (zh) 一种视频图像的多目标检测跟踪方法
CN114127784A (zh) 用于生成相机流的掩模的方法、计算机程序产品和计算机可读介质
JP2018142828A (ja) 付着物検出装置および付着物検出方法
US10997743B2 (en) Attachable matter detection apparatus
JP7200893B2 (ja) 付着物検出装置および付着物検出方法
JP2020109595A (ja) 付着物検出装置および付着物検出方法
CN111027560B (zh) 文本检测方法以及相关装置
JP6201922B2 (ja) 車両用錆検出装置、車両用錆検出システム、及び車両用錆検出方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21954028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE