WO2022252937A1 - Cleaning device and light-triggered event recognition method for cleaning device

Cleaning device and light-triggered event recognition method for cleaning device

Info

Publication number
WO2022252937A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target object
frame image
light
frame
Prior art date
Application number
PCT/CN2022/092021
Other languages
English (en)
Chinese (zh)
Inventor
刘煜
唐成
段飞
Original Assignee
北京顺造科技有限公司
苏州小顺科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京顺造科技有限公司, 苏州小顺科技有限公司 filed Critical 北京顺造科技有限公司
Publication of WO2022252937A1


Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04: Automatic control of the travelling movement; Automatic obstacle detection
    • A47L2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Definitions

  • the present disclosure belongs to the technical field of autonomous mobile cleaning equipment, and particularly relates to cleaning equipment and a method for identifying a light-triggered event for the cleaning equipment.
  • object recognition devices based on single or dual cameras can acquire multi-layer features from the entire area of an RGB image or from specific areas to identify specific objects such as adults, children, pets, tables and chairs, doors, or railings.
  • in some existing solutions, the camera sends the RGB image containing the user to a cloud server, which performs accurate identification and processing by acquiring features such as the face and body from the user's image.
  • even when the data containing the user's face and other sensitive parts are preprocessed before being uploaded to the cloud, the problem of leaking user privacy remains; this approach also further raises the accuracy and computing-efficiency requirements of the algorithm, occupies a large amount of processing memory, and causes heat dissipation problems that affect the battery life of the cleaning robot.
  • existing technology also uses non-RGB image forming technologies, such as structured light, line lasers, or TOF, but these still rely on CCD or CMOS principles, and the camera equipment must be kept on at all times to identify and track targets. Traditional CCD- or CMOS-based camera equipment has high energy consumption, which in turn makes the tracking process energy-intensive, especially where large wide-angle and ultra-wide-angle structures are involved, causing energy consumption problems and heat dissipation difficulties and requiring additional heat dissipation structures.
  • a Dynamic Vision Sensor (DVS) is a vision sensor that imitates biological vision. It uses an event-driven principle to achieve fast acquisition of moving targets, and has the advantages of low latency, low storage requirements, and high dynamic range.
  • the working principle of a DVS differs substantially from that of traditional frame-based image sensors.
  • each pixel unit in a DVS works independently on an event-triggered basis: real-time detection of light intensity changes across the field of view is realized through a differential circuit and a threshold capacitance.
  • the resulting output is encoded as Address-Event (AE) data.
  • because the AE data describe only the pixel units with large light intensity changes in the field of view, the coordinates and object boundaries of a moving target can be obtained quickly from the AE data, realizing fast acquisition of the moving target, as illustrated in the sketch below.
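  • a minimal Python sketch of this AE processing follows; the (x, y, timestamp, polarity) record layout and all names here are illustrative assumptions for the sake of the example, not any particular sensor's actual interface:

```python
import numpy as np

# a minimal sketch of AE data handling; the (x, y, timestamp, polarity)
# record layout is an illustrative assumption, not a real sensor's format
events = np.array(
    [(120, 45, 1001, 1), (121, 46, 1003, 1), (119, 44, 1007, 0), (122, 47, 1012, 1)],
    dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "u1")],
)

def bounding_box(evts: np.ndarray):
    """Object boundary straight from AE data: only changed pixels are present,
    so the moving target's extent is just the min/max of event coordinates."""
    return (int(evts["x"].min()), int(evts["y"].min()),
            int(evts["x"].max()), int(evts["y"].max()))

def centroid(evts: np.ndarray):
    """Approximate target coordinates as the mean event position."""
    return float(evts["x"].mean()), float(evts["y"].mean())

print(bounding_box(events))  # (119, 44, 122, 47)
print(centroid(events))      # (120.5, 45.5)
```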
  • dynamic vision sensors that mimic fundamental features of human visual processing have created a new paradigm in vision research: a biomimetic visual sensor modeled on the human retina and based on spike-triggered neurons. Like the photoreceptors in the human retina, a single DVS pixel (receptor) generates events in response to detected changes in light; the sensor has a pixel unit array composed of multiple pixel units, and each pixel unit responds only when it senses a change in light intensity, thereby recording areas of rapid light intensity change.
  • the dynamic vision sensor outputs an asynchronous event data stream, for example the timestamp and light intensity value of each light intensity change together with the coordinate position of the triggered pixel unit.
  • because DVS sensors greatly reduce redundant pixels (such as static background features) and encode objects framelessly with high temporal resolution (about 1 μs), they are well suited for motion analysis, tracking, and surveillance of moving objects. These sensors have a high dynamic operating range (120 dB) and are therefore able to operate in uncontrolled environments where lighting conditions vary.
  • the response speed of a DVS is no longer limited by the traditional exposure time and frame rate, so it can detect high-speed objects moving at rates equivalent to up to 10,000 frames per second; a DVS has a larger dynamic range and can accurately sense and output scene changes in low-light or high-exposure environments; a DVS has lower power consumption; and since each pixel unit of a DVS responds to changes in light intensity independently, a DVS is not affected by motion blur.
  • in the present disclosure, a DVS is applied to autonomous mobile cleaning equipment so that, while the equipment performs cleaning tasks, objects in relative motion with respect to it are described in space and time.
  • the present disclosure provides a cleaning device and a method for identifying a light-triggered event for the cleaning device.
  • the present disclosure provides a cleaning device, comprising:
  • a cleaning device body capable of autonomous movement
  • the camera device is arranged on the main body of the cleaning equipment,
  • the camera device includes at least one dynamic vision device
  • the dynamic vision device includes:
  • an optical signal receiving device that receives the optical signal of the field of view area of the camera device
  • an optical sensor, which acquires frame images frame by frame based on the optical signal received by the optical signal receiving device, and identifies a light-triggered event of at least one type of target object based on the frame images;
  • the cleaning device further includes a memory storing characteristic image information of at least one type of target object; and,
  • a processor configured to obtain the contour information and size information of at least one target object in at least one frame image in which a light-triggered event is identified, and to compare them with the characteristic image information of the at least one target object stored in the memory, so as to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • the cleaning equipment may be surface cleaning equipment, such as self-cleaning equipment and the like.
  • the camera device may also include components such as a camera window.
  • the field of view of the camera device at least covers the outer contour of the main body of the cleaning device, or at least covers the front area of the main body of the cleaning device.
  • the cleaning equipment may include multiple camera devices, or adjust the size or shape of the camera devices to obtain a larger viewing area.
  • the light sensor of the dynamic vision device is a light sensor in the form of a pixel array.
  • the processor further acquires depth information corresponding to the size information of at least one target object in at least one frame image in which a light-triggered event is identified, and compares the object's contour information, size information, and depth information with the characteristic image information of at least one target object stored in the memory to obtain the type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • the depth information is distance information between the object and the cleaning device (or a camera device of the cleaning device).
  • when comparing the contour information, size information, and depth information of the object with the characteristic image information of at least one target object stored in the memory, the cleaning device can perform either equivalent matching or ratio-based matching, as illustrated in the sketch below.
  • the characteristic image information of at least one target object stored in the memory includes characteristic contour information, characteristic size information, and characteristic depth information.
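  • a minimal sketch of the two comparison strategies follows; the feature fields, tolerance, and example values are illustrative assumptions, not the disclosure's actual matching procedure:

```python
from dataclasses import dataclass

# a minimal sketch of equivalent matching vs. ratio matching against stored
# characteristic contour, size, and depth information (assumed fields/values)
@dataclass
class Features:
    contour_aspect: float  # width/height ratio of the object contour
    size_px: float         # apparent size in the frame image, in pixels
    depth_m: float         # distance between the object and the camera

def equivalent_match(obs: Features, ref: Features, tol: float = 0.15) -> bool:
    # equivalent matching: every field agrees within a relative tolerance
    pairs = [(obs.contour_aspect, ref.contour_aspect),
             (obs.size_px, ref.size_px),
             (obs.depth_m, ref.depth_m)]
    return all(abs(a - b) <= tol * max(abs(b), 1e-9) for a, b in pairs)

def ratio_match(obs: Features, ref: Features, tol: float = 0.15) -> bool:
    # ratio matching: apparent size scales roughly with 1/depth, so compare
    # size * depth, which is invariant to how far away the object happens to be
    obs_scale, ref_scale = obs.size_px * obs.depth_m, ref.size_px * ref.depth_m
    aspect_ok = abs(obs.contour_aspect - ref.contour_aspect) <= tol * ref.contour_aspect
    return aspect_ok and abs(obs_scale - ref_scale) <= tol * ref_scale

table_leg = Features(contour_aspect=0.12, size_px=80.0, depth_m=1.0)  # stored
observed = Features(contour_aspect=0.13, size_px=40.0, depth_m=2.0)   # seen
print(equivalent_match(observed, table_leg))  # False: raw sizes differ
print(ratio_match(observed, table_leg))       # True: size * depth agrees
```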
  • acquiring the depth information corresponding to the size information of at least one target object in at least one frame image in which a light-triggered event occurs includes: establishing the correspondence between the size information and the depth information.
  • the camera device further includes a distance-measuring light source, and the depth information corresponding to the size information of at least one target object in at least one frame image where a light-triggered event occurs is acquired at least on the basis of the distance-measuring light source.
  • the distance measuring light source is preferably a pulsed line laser light source
  • the dynamic vision device is combined with the pulsed line laser light source to acquire contour information, size information and depth information of an object.
  • the pulse line laser light source emits laser light to the object
  • the dynamic vision device acquires laser stripe information on the object during the process of acquiring frame images
  • the processor obtains the depth information of the object based on the laser stripe information, for example as in the triangulation sketch below.
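  • one common way to realize such stripe-based depth recovery is line-laser triangulation; the following minimal sketch assumes a particular baseline, focal length, and laser-plane angle for illustration, whereas the disclosure itself only states that depth is derived from the laser stripe information (such as stripe width):

```python
import math

# illustrative geometry (assumptions): baseline between the line laser and
# the camera, camera focal length in pixels, and the laser plane's angle
BASELINE_M = 0.05
FOCAL_PX = 600.0
LASER_ANGLE_RAD = math.radians(75.0)

def stripe_depth(stripe_x_px: float, cx_px: float = 320.0) -> float:
    """Depth of the surface point lit by the stripe, from its image column.

    The camera ray through the stripe pixel is intersected with the laser
    plane: z = b * tan(theta) / (1 + tan(alpha) * tan(theta)).
    """
    tan_ray = (stripe_x_px - cx_px) / FOCAL_PX  # tan(alpha), camera ray angle
    tan_laser = math.tan(LASER_ANGLE_RAD)       # tan(theta), laser plane angle
    return BASELINE_M * tan_laser / (1.0 + tan_ray * tan_laser)

print(round(stripe_depth(320.0), 3))  # stripe at the image center
print(round(stripe_depth(250.0), 3))  # stripe shifted left reads farther away
```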
  • the manner of acquiring the depth information includes acquiring by radar.
  • the camera device includes two dynamic vision devices; and,
  • Parallax images obtained based on two dynamic vision devices are used as the frame image.
  • using the parallax images acquired by the two dynamic vision devices as the frame images includes:
  • computing the intersection and the union of the time-synchronized event disparity volume of the left eye and that of the right eye, and then computing the intersection-over-union ratio from them to obtain a parallax image that serves as a frame image (see the sketch below).
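  • a minimal sketch of this intersection-over-union computation follows; representing each eye's events as a binary, time-synchronized (time, y, x) occupancy volume, and the disparity search range, are illustrative assumptions:

```python
import numpy as np

# a minimal sketch of IoU over time-synchronized left/right event volumes;
# the (time, y, x) binary layout and search range are assumptions
rng = np.random.default_rng(0)
left_vol = rng.random((8, 32, 32)) > 0.9        # left-eye event volume
right_vol = np.roll(left_vol, shift=2, axis=2)  # right eye, disparity-shifted

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def best_disparity(left: np.ndarray, right: np.ndarray, max_disp: int = 4):
    """Shift the right volume over candidate disparities and keep the shift
    whose IoU with the left volume is highest."""
    scores = [iou(left, np.roll(right, -d, axis=2)) for d in range(max_disp + 1)]
    return int(np.argmax(scores)), scores

d, scores = best_disparity(left_vol, right_vol)
print(d)  # 2, the disparity injected above
```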
  • the light sensor identifies a light-triggered event of at least one type of target object based on the frame image, including:
  • the pixel light intensity of each pixel of the Nth frame image is used as the pixel reference light intensity, and N is a natural number greater than or equal to 1;
  • the pixel light intensity may also be expressed as pixel brightness.
  • the light intensity variation threshold can be preset, and more preferably, the light intensity variation threshold can be adjusted or modified.
  • in at least one embodiment of the present disclosure, the pixel light intensity of each pixel of the (N+1)-th frame image is compared with the pixel reference light intensity to obtain the light intensity variation of each pixel; if at least one trigger signal is generated from these variations, the pixel light intensity of each pixel of the (N+1)-th frame image serves as the new pixel reference light intensity.
  • in at least one embodiment of the present disclosure, if no trigger signal is generated from the light intensity variations of the pixels of the (N+1)-th frame image, the pixel light intensity of each pixel of the N-th frame image is still used as the pixel reference light intensity.
  • in that case, the pixel light intensity of each pixel of the (N+2)-th frame image is compared with the pixel light intensity of each pixel of the N-th frame image to obtain the light intensity variation of each pixel of the (N+2)-th frame image;
  • if the variation of a pixel reaches the light intensity variation threshold, a trigger signal is generated to indicate that a light-triggered event has occurred at that pixel.
  • the pixel light intensity of each pixel of that frame image is then used as the new pixel reference light intensity, as in the sketch below.
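  • a minimal sketch of this per-pixel trigger logic follows; the array shapes, the threshold value, and the function names are illustrative assumptions, not the disclosure's actual implementation:

```python
import numpy as np

# a minimal sketch of the per-pixel light-trigger logic described above
THRESHOLD = 15  # light intensity variation threshold (presettable/adjustable)

def detect_events(frame: np.ndarray, reference: np.ndarray):
    """Return (trigger_mask, new_reference) for one incoming frame."""
    # light intensity variation of each pixel relative to the reference frame
    variation = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    triggers = variation >= THRESHOLD  # per-pixel trigger signal
    if triggers.any():
        # at least one trigger: this frame's intensities become the new
        # pixel reference light intensity (the N+1 case above)
        return triggers, frame.copy()
    # no trigger: keep frame N's intensities as the reference and compare
    # later frames (N+2, ...) against it
    return triggers, reference

# streaming use: frame N seeds the reference, later frames are compared to it
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(3)]
frames.append(np.full((4, 4), 140, dtype=np.uint8))  # sudden brightness jump
reference = frames[0]
for i, frame in enumerate(frames[1:], start=1):
    events, reference = detect_events(frame, reference)
    if events.any():
        print(f"frame {i}: light-triggered event at {int(events.sum())} pixels")
```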
  • a cleaning device comprising:
  • a cleaning device body capable of autonomous movement
  • the camera device is arranged on the main body of the cleaning equipment,
  • the camera device includes at least one dynamic vision device
  • the dynamic vision device includes:
  • the optical signal receiving device receives the optical signal of the field of view area of the camera device
  • an optical sensor, which acquires frame images frame by frame based on the optical signal received by the optical signal receiving device, and identifies a light-triggered event of at least one type of target object based on the frame images;
  • a memory storing characteristic image information of at least one type of target object
  • the cleaning device further includes a processor configured to obtain the contour information and size information of at least one target object in at least one frame image in which a light-triggered event is identified, and to compare them with the characteristic image information of the at least one target object stored in the memory, so as to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • a cleaning device comprising:
  • a cleaning device body capable of autonomous movement
  • the camera device is arranged on the main body of the cleaning equipment,
  • the camera device includes at least one dynamic vision device
  • the dynamic vision device includes:
  • the optical signal receiving device receives the optical signal of the field of view area of the camera device
  • an optical sensor, which acquires frame images frame by frame based on the optical signal received by the optical signal receiving device, and identifies a light-triggered event of at least one type of target object based on the frame images;
  • a memory storing characteristic image information of at least one type of target object
  • a processor configured to obtain the contour information and size information of at least one target object in at least one frame image in which a light-triggered event is identified, and to compare them with the characteristic image information of the at least one target object stored in the memory, so as to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • the present disclosure provides a method for identifying a light-triggered event for cleaning equipment, including:
  • the cleaning equipment includes a camera device, and the camera device includes at least one dynamic vision device; the at least one dynamic vision device acquires frame images frame by frame, and the pixel light intensity of each pixel of the N-th frame image is used as the pixel reference light intensity, where N is a natural number greater than or equal to 1;
  • Figure 1 shows a cleaning device according to one embodiment of the present disclosure.
  • Fig. 2 shows a cleaning device according to yet another embodiment of the present disclosure.
  • Fig. 3 shows a cleaning device according to yet another embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of a cleaning device according to an embodiment of the present disclosure.
  • FIG. 5 illustrates a method for recognizing a light-triggered event for a cleaning device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a light trigger event acquired by a light trigger event identification method according to an embodiment of the present disclosure.
  • cross-hatching and/or shading in the figures is generally used to clarify the boundaries between adjacent features. As such, unless stated otherwise, the presence or absence of cross-hatching or shading conveys no preference or requirement regarding specific materials, material properties, dimensions, proportions, commonalities between the illustrated components, or any other characteristic, attribute, or property of the components. Also, in the drawings, the sizes and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While exemplary embodiments may be implemented differently, a specific process sequence may be performed in an order different from that described; for example, two consecutively described processes may be performed substantially simultaneously or in the reverse order. In addition, the same reference numerals denote the same components.
  • when an element is referred to as being "on", "over", "connected to", or "coupled to" another element, it may be directly on, directly connected to, or directly coupled to the other element, or intervening elements may be present. However, when an element is referred to as being "directly on", "directly connected to", or "directly coupled to" another element, there are no intervening elements present. To this end, the term "connected" may refer to a physical connection, an electrical connection, and the like, with or without intervening components.
  • this disclosure may use spatially relative terms such as "under", "beneath", "below", "above", "on", "over", "higher", and "side" (e.g., as in "sidewall") to describe the relationship between one component and another (other) component(s) as shown in the drawings.
  • Spatially relative terms are intended to encompass different orientations of the device in use, operation and/or manufacture in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features.
  • the exemplary term “below” can encompass both an orientation of "above” and "beneath”.
  • the device may be otherwise positioned (eg, rotated 90 degrees or at other orientations), and as such, the spatially relative descriptors used herein are interpreted accordingly.
  • the present disclosure provides a cleaning device 1000, comprising:
  • the main body of the cleaning device 1001, the main body of the cleaning device 1001 can move autonomously; and,
  • a camera device 1002, which is arranged at an appropriate position on the cleaning device main body 1001,
  • the camera device 1002 includes at least one dynamic vision device 1021, and the dynamic vision device 1021 includes:
  • the optical signal receiving device 1022 receives the optical signal of the field of view area of the camera device; and,
  • the optical sensor 1023 acquires frame images frame by frame based on the optical signal received by the optical signal receiving device, and identifies the optical trigger event of at least one type of target object based on the frame image;
  • the cleaning device also includes a memory 1003, which stores characteristic image information of at least one type of target object; and,
  • a processor 1004, configured to obtain the contour information and size information of at least one target object in at least one frame image in which a light-triggered event is identified, and to compare them with the characteristic image information of at least one target object stored in the memory, so as to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • the cleaning device may be a surface cleaning device, such as a self-cleaning device capable of autonomous movement, for example an autonomous cleaning device with a disc-shaped mopping assembly.
  • the camera device may also include components such as a camera window.
  • the field of view of the camera device at least covers the outer contour of the main body of the cleaning device, or at least covers the front area of the main body of the cleaning device.
  • the cleaning device 1000 may include multiple camera units, or the size or shape of the camera units may be adjusted, to obtain a larger field of view.
  • the camera device of the cleaning equipment may include at least one dynamic vision device 1021 described above, and may also include an obstacle avoidance sensor, a cliff sensor, and the like.
  • the dynamic vision device 1021 can be arranged on the front side wall of the cleaning equipment, such as self-cleaning equipment, and preferably is held on the front side wall of the self-cleaning equipment in an embedded manner. In light of this, the mounting method or mounting position of the camera device and the dynamic vision device 1021 can be adjusted.
  • Fig. 4 exemplarily shows a cleaning device in the form of an autonomously moving self-cleaning device, and Fig. 4 also exemplarily shows a dynamic vision device 1021 embedded on the front side wall of the autonomously moving self-cleaning device.
  • the optical signal receiving device may be a lens unit or a lens assembly of a dynamic vision device, and a larger viewing area may also be obtained by adjusting the size or shape of the lens unit.
  • the target object may be a table leg, a coffee table, a human body, a pet, and the like that can be in the field of view of the camera device of the cleaning device.
  • the light sensor of the dynamic vision device is a light sensor in the form of a pixel array.
  • the characteristic image information of at least one type of target object is preferably training image digital information, and the characteristic image information of at least one type of target object can be stored in the memory in the form of a database.
  • the autonomous cleaning device in each of the above embodiments further includes an autonomous cleaning device control device, and the autonomous cleaning device control device includes a VSLAM module that performs inside-out device tracking on the autonomous cleaning device.
  • the VSLAM module receives object characteristics of tracked objects and location data from the autonomous cleaning device, and outputs data associated with the operating environment of the autonomous cleaning device.
  • the VSLAM module outputs camera pose data and scene geometry data.
  • the camera pose data include coordinate values indicating the viewing direction of the autonomous cleaning device. That is, in usage environments where user privacy permits, the pose data of the autonomous cleaning device indicate the direction in which the autonomous cleaning device is looking, as an extension of the user's eyes.
  • the scene geometry data described above includes data indicative of the coordinate locations where identified surfaces and other tracked features of the autonomous cleaning device's operating environment are located.
  • the VSLAM module outputs scene geometry data in a potentially data-intensive format such as a point cloud.
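  • the two VSLAM outputs named above can be pictured with the following minimal data sketch, with camera pose data carrying the viewing direction and scene geometry data carried as a point cloud; the field names and types are illustrative assumptions, not the module's actual interface:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CameraPose:
    # coordinate values indicating where the device is and which way it looks
    position: Tuple[float, float, float]
    view_direction: Tuple[float, float, float]

@dataclass
class SceneGeometry:
    # coordinate locations of identified surfaces and other tracked features,
    # in a potentially data-intensive format such as a point cloud
    point_cloud: List[Tuple[float, float, float]] = field(default_factory=list)

pose = CameraPose(position=(1.2, 0.4, 0.0), view_direction=(0.0, 1.0, 0.0))
scene = SceneGeometry(point_cloud=[(1.0, 2.0, 0.0), (1.1, 2.0, 0.3)])
print(pose.view_direction, len(scene.point_cloud))
```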
  • the stability of the VSLAM module's output fluctuates; it can be improved by adding functions such as color recognition and light intensity recognition to the CMOS sensor of the VSLAM module, but this significantly increases the power consumption of the self-cleaning equipment.
  • the processor further acquires depth information corresponding to the size information of at least one target object in at least one frame image in which a light-triggered event is identified, and compares the object's outline information, size information, and depth information with the characteristic image information of at least one target object stored in the memory to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • the depth information is the distance information between the object and the cleaning device (or the camera device of the cleaning device).
  • the characteristic image information of at least one target object stored in the memory includes characteristic contour information, characteristic size information and characteristic depth information;
  • obtaining the depth information corresponding to the size information of at least one target object in at least one frame image in which a light-triggered event is identified includes: establishing the correspondence between the size information and the depth information.
  • the feature image information of the target object stored in the memory may also include feature image information of at least one type of target object obtained through machine learning or deep learning.
  • the characteristic image information of the target object stored in the memory can be preset by the cleaning device before leaving the factory.
  • the camera device further includes a ranging light source, and the depth information corresponding to the size information of at least one target object in at least one frame image in which the light trigger event occurs is at least obtained based on the ranging light source.
  • the ranging light source is preferably a pulse line laser light source.
  • a dynamic vision device can be combined with a pulse line laser light source to obtain the outline information, size information and depth information of an object.
  • the pulse line laser light source emits laser light to the object
  • the dynamic vision device obtains the laser stripe information (such as stripe width) on the object during the process of obtaining the frame image
  • the processor obtains the depth information of the object based on the laser stripe information.
  • the dynamic vision device is combined with the pulsed line laser; by utilizing the ability of the dynamic vision device's light sensor to capture temporal dynamics in the scene, stable extraction of the laser stripes projected onto the object by the pulsed line laser can be realized.
  • the adaptive temporal filter of the dynamic vision device's light sensor can reliably reconstruct the 3D environment around the sweeper, including the ground.
  • the manner of obtaining the depth information also includes obtaining through radar.
  • laser radar can be used in conjunction with the dynamic vision device to realize simultaneous localization and mapping (SLAM): the equipment includes one or more laser radars (LDS), and the laser radar performs SLAM while working together with the camera device.
  • the camera device is then used only for obstacle avoidance: it judges the type of an object from its shape, and judges depth from the shape and size of one or more trigger points.
  • the cleaning equipment decides whether an area is passable based on the judged object type and the depth information. For example, when judging the room type, it identifies the object information in the room, such as door frames, beds, cabinets, and toilets, and then infers the room type, such as bedroom, kitchen, or bathroom, from that object information (see the sketch below). Combined with the obtained depth information, the position of each object can be determined, and the cleaning equipment can then avoid obstacles and clean according to the object positions.
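  • the room-type judgment can be pictured as a simple vote over recognized objects, as in the following minimal sketch; the object-to-room mapping and the function names are illustrative assumptions, not the disclosure's actual classifier:

```python
from typing import Dict, List

# illustrative mapping from recognized objects to room types (an assumption)
ROOM_HINTS: Dict[str, str] = {
    "bed": "bedroom",
    "wardrobe": "bedroom",
    "stove": "kitchen",
    "cabinet": "kitchen",
    "toilet": "bathroom",
}

def infer_room(detected_objects: List[str]) -> str:
    """Majority vote over the recognized objects; 'unknown' if none match."""
    votes: Dict[str, int] = {}
    for obj in detected_objects:
        room = ROOM_HINTS.get(obj)
        if room is not None:
            votes[room] = votes.get(room, 0) + 1
    return max(votes, key=votes.get) if votes else "unknown"

print(infer_room(["door frame", "bed", "wardrobe"]))  # bedroom
```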
  • the camera device 1002 includes two dynamic vision devices 1021;
  • the parallax images obtained based on the two dynamic vision devices 1021 are used as frame images.
  • using the parallax images acquired by the two dynamic vision devices 1021 as the frame images includes: computing the intersection and the union of the time-synchronized event disparity volume of the left eye and that of the right eye, and then computing the intersection-over-union ratio from them to obtain a parallax image that serves as a frame image.
  • the light sensor identifies the light triggering event of at least one type of target object based on the frame image, including:
  • the pixel light intensity of each pixel of the Nth frame image is used as the pixel reference light intensity, and N is a natural number greater than or equal to 1;
  • a trigger signal is then generated to indicate that a light trigger event occurs for the pixel.
  • the pixel light intensity described above may also be expressed as pixel brightness.
  • the light intensity change threshold can be preset, and more preferably, the light intensity change threshold can be adjusted or modified.
  • the pixel light intensity of each pixel in the N+1th frame image is used as a new pixel reference light intensity.
  • the pixel light intensity of each pixel in the Nth frame of image is still used as the pixel reference light intensity.
  • the pixel light intensity of each pixel of the (N+2)-th frame image is compared with the pixel light intensity of each pixel of the N-th frame image to obtain the light intensity variation of each pixel of the (N+2)-th frame image;
  • a trigger signal is then generated to indicate that a light trigger event occurs for the pixel.
  • the pixel light intensity of each pixel of the frame of image is used as a new pixel reference light intensity.
  • recognition of light-triggered events by the two dynamic vision devices can be realized through the above-mentioned method, and the cleaning equipment can then recognize the object and measure its distance based on that recognition.
  • the sensor collects the signal, then detects and outputs the event points at which the pixel brightness change in the collected signal exceeds a set range; the positions of such event points usually correspond to the locations of objects moving in the scene. Therefore, based on the detected event points and on pre-trained classes used to determine which object moving relative to the autonomous mobile device the event points belong to, the classes and locations of objects around the autonomous mobile device can be identified.
  • the cleaning equipment provided in this embodiment uses the camera device to identify obstacles during the cleaning process and to judge the position of each obstacle relative to the cleaning equipment, so that the cleaning equipment can plan its path according to obstacle positions and thereby complete the cleaning task.
  • on the one hand, the images acquired by the camera device are not sent to the cloud during this process and can be stored and deleted by the local user, which avoids spreading the user's private information; on the other hand, by using a camera device that includes a dynamic vision sensor, moving targets can be extracted rapidly, with the advantages of low latency, low storage, low power consumption, and high efficiency.
  • FIG. 2 is a cleaning device according to yet another embodiment of the present disclosure.
  • a cleaning device 2000 includes:
  • the cleaning device main body 2001, the cleaning device main body 2001 can move autonomously; and,
  • a camera device 2002, which is arranged at an appropriate position on the cleaning equipment main body 2001, wherein the camera device 2002 includes at least one dynamic vision device 2021, and the dynamic vision device 2021 includes:
  • An optical signal receiving device 2022 where the optical signal receiving device 2022 receives the optical signal of the viewing area of the camera device;
  • the light sensor 2023 acquires a frame image frame by frame based on the light signal received by the light signal receiving device, and identifies a light trigger event of at least one type of target object based on the frame image;
  • a memory 2024 stores characteristic image information of at least one type of target object
  • the cleaning device further includes a processor 2004, configured to acquire the contour information and size information of at least one target object in at least one frame image in which a light-triggered event is identified, and to compare them with the characteristic image information of the at least one target object stored in the memory to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • Figure 3 shows a cleaning device according to yet another embodiment of the present disclosure.
  • a cleaning device 3000 includes:
  • a cleaning device body 3001, which can move autonomously; and,
  • a camera device 3002, which is arranged at an appropriate position on the cleaning equipment main body 3001,
  • the camera device 3002 includes at least one dynamic vision device 3021, and the dynamic vision device 3021 includes:
  • An optical signal receiving device 3022 where the optical signal receiving device 3022 receives the optical signal of the viewing area of the camera device;
  • the light sensor 3023 acquires a frame image frame by frame based on the light signal received by the light signal receiving device, and identifies a light trigger event of at least one type of target object based on the frame image;
  • a memory 3024 stores characteristic image information of at least one type of target object.
  • the processor 3025 is configured to obtain the contour information and size information of at least one target object in at least one frame image in which a light-triggered event is identified, and to compare them with the characteristic image information of at least one target object stored in the memory, so as to obtain type information and/or position information of the at least one target object in the at least one frame image where the light-triggered event occurs.
  • FIG. 5 is a light-triggered event recognition method for cleaning equipment according to one embodiment of the present disclosure.
  • the light-triggered event recognition method S100 for cleaning equipment includes:
  • the cleaning equipment includes a camera device, and the camera device includes at least one dynamic vision device;
  • At least one dynamic vision device acquires frame images frame by frame, and uses the pixel light intensity of each pixel of the Nth frame image as the pixel reference light intensity, where N is a natural number greater than or equal to 1;
  • a camera device is used to acquire frame images frame by frame, and the camera device includes a dynamic vision device or a dynamic vision sensor.
  • the method for identifying a light-triggered event for a cleaning device in this embodiment may be applicable to the cleaning device in any of the above-mentioned embodiments.
  • FIG. 6 is a schematic diagram of a light trigger event acquired by a light trigger event identification method according to an embodiment of the present disclosure.
  • the light-triggered event in the figure is obtained by using the following light-triggered event identification method:
  • step a: use the pixel light intensity of each pixel of the N-th frame image as the pixel reference light intensity, where N is a natural number greater than or equal to 1;
  • step b: acquire the (N+1)-th frame image;
  • step c: compare the pixel light intensity of each pixel of the (N+1)-th frame image with the pixel reference light intensity to obtain the light intensity change amplitude value v;
  • step d: compare the light intensity change amplitude value v with the trigger threshold t; if v is greater than or equal to the trigger threshold, generate the light-triggered event data of the pixel at this position, assign the value N+1 to N, and go to step a; otherwise, assign the value N+1 to N and go to step b.
  • the terms "first" and "second" are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features.
  • the features defined as “first” and “second” may explicitly or implicitly include at least one of these features.
  • “plurality” means at least two, such as two, three, etc., unless otherwise specifically defined.

Landscapes

  • Studio Devices (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention relates to a cleaning device comprising: a cleaning device main body (1001); and a camera apparatus (1002), which is arranged on the cleaning device main body. The camera apparatus comprises a dynamic vision apparatus (1021). The dynamic vision apparatus comprises: a light signal receiving apparatus (1022), which receives a light signal from a field-of-view area of the camera apparatus; and a light sensor (1023), which acquires a frame image, frame by frame, on the basis of the light signal received by the light signal receiving apparatus, and recognizes a light-triggered event of a target object on the basis of the frame image. The cleaning device further comprises: a memory (1003), which stores characteristic image information of the target object; and a processor (1004), which is used to acquire target object information in a frame image in which the occurrence of a light-triggered event is recognized, and to compare the target object information with the characteristic image information stored in the memory, so as to acquire type information and/or position information of the target object in the frame image in which the light-triggered event occurs.
PCT/CN2022/092021 2021-06-04 2022-05-10 Cleaning device and light-triggered event recognition method for cleaning device WO2022252937A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110623541.XA CN113378684B (zh) 2021-06-04 2021-06-04 清洁设备及用于清洁设备的光触发事件识别方法
CN202110623541.X 2021-06-04

Publications (1)

Publication Number Publication Date
WO2022252937A1 true WO2022252937A1 (fr) 2022-12-08

Family

Family ID: 77575713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092021 WO2022252937A1 (fr) Cleaning device and light-triggered event recognition method for cleaning device

Country Status (2)

Country Link
CN (1) CN113378684B (fr)
WO (1) WO2022252937A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378684B (zh) * 2021-06-04 2024-03-29 北京顺造科技有限公司 Cleaning device and light-triggered event recognition method for cleaning device
CN114046001B (zh) * 2021-11-16 2023-03-28 重庆大学 Self-cleaning canopy for building exterior walls and cleaning method therefor
CN114259188A (zh) * 2022-01-07 2022-04-01 美智纵横科技有限责任公司 Cleaning device, image processing method and apparatus, and readable storage medium
CN117975920A (zh) * 2024-03-28 2024-05-03 深圳市戴乐体感科技有限公司 Drumstick dynamic recognition and positioning method, apparatus, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019018315A1 (fr) * 2017-07-17 2019-01-24 Kaarta, Inc. Alignement de données de signal mesurées avec des données de localisation slam et utilisations associées
CN109998429A (zh) * 2018-01-05 2019-07-12 艾罗伯特公司 用于情境感知的移动清洁机器人人工智能
CN110555865A (zh) * 2019-08-07 2019-12-10 清华大学无锡应用技术研究院 一种基于帧图像的动态视觉传感器样本集建模方法
WO2020195966A1 (fr) * 2019-03-27 2020-10-01 ソニーセミコンダクタソリューションズ株式会社 Système d'imagerie, procédé de commande de système d'imagerie et système de reconnaissance d'objet
CN112631314A (zh) * 2021-03-15 2021-04-09 季华实验室 基于多线激光雷达与事件相机slam的机器人控制方法、系统
CN112805718A (zh) * 2018-10-05 2021-05-14 三星电子株式会社 自动驾驶装置的对象识别方法以及自动驾驶装置
CN113378684A (zh) * 2021-06-04 2021-09-10 北京顺造科技有限公司 清洁设备及用于清洁设备的光触发事件识别方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4516592B2 (ja) * 2007-12-06 2010-08-04 本田技研工業株式会社 Mobile robot
CN100570523C (zh) * 2008-08-18 2009-12-16 浙江大学 Mobile robot obstacle avoidance method based on obstacle motion prediction
CN105389543A (zh) * 2015-10-19 2016-03-09 广东工业大学 Mobile robot obstacle avoidance device based on omnidirectional binocular vision depth information fusion
CN107025660B (zh) * 2016-02-01 2020-07-10 北京三星通信技术研究有限公司 Method and device for determining image disparity of a binocular dynamic vision sensor
CN108076338B (zh) * 2016-11-14 2022-04-08 北京三星通信技术研究有限公司 Image vision processing method, device and equipment
US11295458B2 (en) * 2016-12-01 2022-04-05 Skydio, Inc. Object tracking by an unmanned aerial vehicle using visual sensors
WO2020009550A1 (fr) * 2018-07-06 2020-01-09 Samsung Electronics Co., Ltd. Method and apparatus for dynamic image capture


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU, ALEX ZIHAO; CHEN, YIBO; DANIILIDIS, KOSTAS: "Realtime Time Synchronized Event-Based Stereo", in: Computer Vision - ECCV 2018, Lecture Notes in Computer Science, Springer International Publishing, Cham, ISSN: 0302-9743, pages 438-452, XP047635332, DOI: 10.1007/978-3-030-01231-1_27 *
KONG, DELEI; FANG, ZHENG: "A Review of Event-based Vision Sensors and Their Applications", Information and Control (Xinxi yu Kongzhi), Shenyang Institute of Automation, Chinese Academy of Sciences, 22 September 2020, pages 1-19, XP055956411, ISSN: 1002-0411, DOI: 10.13976/j.cnki.xk.2021.0069 *

Also Published As

Publication number Publication date
CN113378684A (zh) 2021-09-10
CN113378684B (zh) 2024-03-29

Similar Documents

Publication Publication Date Title
WO2022252937A1 (fr) Cleaning device and light-triggered event recognition method for cleaning device
JP6907325B2 (ja) Extraction of 2D floor plans from 3D grid representations of interior spaces
US9400503B2 (en) Mobile human interface robot
CN105409212B (zh) Electronic device with multi-view image capture and depth sensing
JP5963372B2 (ja) Method for operating a mobile robot to follow a person
Darrell et al. Plan-view trajectory estimation with dense stereo background models
CN105408938B (zh) System for 2D/3D spatial feature processing
US10687403B2 (en) Adaptive lighting system for a mirror component and a method of controlling an adaptive lighting system
WO2019114221A1 (fr) Control method and system, and applicable cleaning robot
Teixeira et al. Lightweight people counting and localizing in indoor spaces using camera sensor nodes
US8929592B2 (en) Camera-based 3D climate control
CN110266394B (zh) Adjustment method, terminal and computer-readable storage medium
EP2577632A1 (fr) Optical system for occupancy sensing, and corresponding method
Bhattacharya et al. Arrays of single pixel time-of-flight sensors for privacy preserving tracking and coarse pose estimation
Ghidoni et al. Cooperative tracking of moving objects and face detection with a dual camera sensor
CN111465154A (zh) Intelligent light-halo control system for classrooms
TW202248966A (zh) System and method for dynamic image processing and segmentation
US20220245914A1 (en) Method for capturing motion of an object and a motion capture system
Wientapper et al. Linear-projection-based classification of human postures in time-of-flight data
TW202238076A (zh) Indoor positioning and object searching method for an intelligent unmanned vehicle system
Derhgawen et al. Vision based obstacle detection using 3D HSV histograms
KR20220106217A (ko) Three-dimensional (3D) modeling
Woodstock Multisensor fusion for occupancy detection and activity recognition in a smart room
Méndez-Polanco et al. People detection by a mobile robot using stereo vision in dynamic indoor environments
CN109271903A (zh) Infrared image human body recognition method based on probability estimation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22814984

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/03/2024)